Advances in Statistical Approaches to Oncology Drug Development
Ivanova, Anastasia; Rosner, Gary L.; Marchenko, Olga; Parke, Tom; Perevozskaya, Inna; Wang, Yanping
2014-01-01
We describe some recent developments in statistical methodology and practice in oncology drug development from an academic and an industry perspective. Many adaptive designs were pioneered in oncology, and oncology is still at the forefront of novel methods to enable better and faster Go/No-Go decision making while controlling the cost. PMID:25949927
Intermediate/Advanced Research Design and Statistics
NASA Technical Reports Server (NTRS)
Ploutz-Snyder, Robert
2009-01-01
The purpose of this module is to provide Institutional Researchers (IRs) with an understanding of the principles of advanced research design and the intermediate/advanced statistical procedures consistent with such designs.
Advanced Placement Course Description. Statistics.
ERIC Educational Resources Information Center
College Entrance Examination Board, New York, NY.
The Advanced Placement (AP) program is a cooperative educational effort of secondary schools, colleges, and the College Board that consists of 30 college-level courses and examinations in 17 academic disciplines for highly motivated students in secondary schools. AP courses are offered in more than 11,000 high schools and are recognized by nearly…
Statistical Approach to Protein Quantification*
Gerster, Sarah; Kwon, Taejoon; Ludwig, Christina; Matondo, Mariette; Vogel, Christine; Marcotte, Edward M.; Aebersold, Ruedi; Bühlmann, Peter
2014-01-01
A major goal in proteomics is the comprehensive and accurate description of a proteome. This task includes not only the identification of proteins in a sample, but also the accurate quantification of their abundance. Although mass spectrometry typically provides information on peptide identity and abundance in a sample, it does not directly measure the concentration of the corresponding proteins. Specifically, most mass-spectrometry-based approaches (e.g. shotgun proteomics or selected reaction monitoring) allow one to quantify peptides using chromatographic peak intensities or spectral counting information. Ultimately, based on these measurements, one wants to infer the concentrations of the corresponding proteins. Inferring properties of the proteins based on experimental peptide evidence is often a complex problem because of the ambiguity of peptide assignments and different chemical properties of the peptides that affect the observed concentrations. We present SCAMPI, a novel generic and statistically sound framework for computing protein abundance scores based on quantified peptides. In contrast to most previous approaches, our model explicitly includes information from shared peptides to improve protein quantitation, especially in eukaryotes with many homologous sequences. The model accounts for uncertainty in the input data, leading to statistical prediction intervals for the protein scores. Furthermore, peptides with extreme abundances can be reassessed and classified as either regular data points or actual outliers. We used the proposed model with several datasets and compared its performance to that of other, previously used approaches for protein quantification in bottom-up mass spectrometry. PMID:24255132
Advanced Statistical Properties of Dispersing Billiards
NASA Astrophysics Data System (ADS)
Chernov, N.
2006-03-01
A new approach to statistical properties of hyperbolic dynamical systems emerged recently; it was introduced by L.-S. Young and modified by D. Dolgopyat. It is based on the coupling method borrowed from probability theory. We apply it here to one of the most physically interesting models—Sinai billiards. It allows us to derive a series of new results, as well as make significant improvements to existing results. First we establish sharp bounds on correlations (including multiple correlations). Then we use our correlation bounds to obtain the central limit theorem (CLT), the almost sure invariance principle (ASIP), the law of the iterated logarithm, and integral tests.
Recent advances in statistical energy analysis
NASA Technical Reports Server (NTRS)
Heron, K. H.
1992-01-01
Statistical Energy Analysis (SEA) has traditionally been developed using a modal summation and averaging approach, which has led to the need for many restrictive SEA assumptions. The assumption of 'weak coupling' is particularly unacceptable when attempts are made to apply SEA to structural coupling. It is now believed that this assumption is more a function of the modal formulation than a necessary feature of SEA. The present analysis ignores this restriction and describes a wave approach to the calculation of plate-plate coupling loss factors. Predictions based on this method are compared with results obtained from experiments using point excitation on one side of an irregular six-sided box structure. Conclusions show that the use and calculation of infinite transmission coefficients is the way forward for the development of a purely predictive SEA code.
Boltzmann's Approach to Statistical Mechanics
NASA Astrophysics Data System (ADS)
Goldstein, Sheldon
In the last quarter of the nineteenth century, Ludwig Boltzmann explained how irreversible macroscopic laws, in particular the second law of thermodynamics, originate in the time-reversible laws of microscopic physics. Boltzmann's analysis, the essence of which I shall review here, is basically correct. The most famous criticisms of Boltzmann's later work on the subject have little merit. Most twentieth century innovations - such as the identification of the state of a physical system with a probability distribution ϱ on its phase space, of its thermodynamic entropy with the Gibbs entropy of ϱ, and the invocation of the notions of ergodicity and mixing for the justification of the foundations of statistical mechanics - are thoroughly misguided.
Teaching Classical Statistical Mechanics: A Simulation Approach.
ERIC Educational Resources Information Center
Sauer, G.
1981-01-01
Describes a one-dimensional model of an ideal gas used to study the development of disordered motion in Newtonian mechanics. A Monte Carlo procedure for simulation of the statistical ensemble of an ideal gas with fixed total energy is developed. Compares both approaches for a pseudoexperimental foundation of statistical mechanics. (Author/JN)
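The fixed-energy Monte Carlo ensemble described in this abstract can be sketched in a few lines. The following is a hypothetical reconstruction, not the author's original program: a microcanonical move picks two particles at random and repartitions their combined kinetic energy, so the total energy is conserved exactly while the initially ordered state relaxes toward a disordered one.

```python
import random

def simulate_ideal_gas(n_particles=1000, total_energy=1000.0, n_steps=50000, seed=1):
    """Microcanonical Monte Carlo: repeatedly redistribute energy between
    random pairs of particles, keeping the total kinetic energy fixed."""
    rng = random.Random(seed)
    # Start from equal energies: a perfectly ordered initial state.
    energies = [total_energy / n_particles] * n_particles
    for _ in range(n_steps):
        i, j = rng.randrange(n_particles), rng.randrange(n_particles)
        if i == j:
            continue
        pair = energies[i] + energies[j]
        split = rng.random() * pair   # random repartition conserves the pair sum
        energies[i], energies[j] = split, pair - split
    return energies

energies = simulate_ideal_gas()
print(f"total energy after relaxation: {sum(energies):.3f}")  # conserved up to float rounding
```

After enough moves, the per-particle energy distribution approaches the exponential (Boltzmann-like) form expected for this toy ensemble.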
Conceptualizing a Framework for Advanced Placement Statistics Teaching Knowledge
ERIC Educational Resources Information Center
Haines, Brenna
2015-01-01
The purpose of this article is to sketch a conceptualization of a framework for Advanced Placement (AP) Statistics Teaching Knowledge. Recent research continues to problematize the lack of knowledge and preparation among secondary level statistics teachers. The College Board's AP Statistics course continues to grow and gain popularity, but is a…
Reconciling statistical and systems science approaches to public health.
Ip, Edward H; Rahmandad, Hazhir; Shoham, David A; Hammond, Ross; Huang, Terry T-K; Wang, Youfa; Mabry, Patricia L
2013-10-01
Although systems science has emerged as a set of innovative approaches to study complex phenomena, many topically focused researchers, including clinicians and scientists working in public health, are somewhat befuddled by this methodology, which at times appears to be radically different from analytic methods, such as statistical modeling, to which the researchers are accustomed. There also appear to be conflicts between complex systems approaches and traditional statistical methodologies, both in terms of their underlying strategies and the languages they use. We argue that the conflicts are resolvable, and the sooner the better for the field. In this article, we show how statistical and systems science approaches can be reconciled, and how together they can advance solutions to complex problems. We do this by comparing the methods within a theoretical framework based on the work of population biologist Richard Levins. We present different types of models as representing different tradeoffs among the four desiderata of generality, realism, fit, and precision. PMID:24084395
Advance Report of Final Mortality Statistics, 1985.
ERIC Educational Resources Information Center
Monthly Vital Statistics Report, 1987
1987-01-01
This document presents mortality statistics for 1985 for the entire United States. Data analysis and discussion of these factors are included: death and death rates; death rates by age, sex, and race; expectation of life at birth and at specified ages; causes of death; infant mortality; and maternal mortality. Highlights reported include: (1) the…
Statistical approach to nuclear level density
Sen'kov, R. A.; Horoi, M.; Zelevinsky, V. G.
2014-10-15
We discuss the level density in a finite many-body system with strong interaction between the constituents. Our primary object of application is the atomic nucleus, but the same techniques can be applied to other mesoscopic systems. We calculate and compare nuclear level densities for given quantum numbers obtained by different methods: the nuclear shell model (the most successful microscopic approach), our main instrument, the moments method (a statistical approach), and the Fermi-gas model; the calculation with the moments method can use any shell-model Hamiltonian, excluding the spurious states of the center-of-mass motion. Our goal is to investigate statistical properties of the nuclear level density, define its phenomenological parameters, and offer an affordable and reliable way of calculating it.
Pooling Morphometric Estimates: A Statistical Equivalence Approach.
Pardoe, Heath R; Cutter, Gary R; Alter, Rachel; Hiess, Rebecca Kucharsky; Semmelroch, Mira; Parker, Donna; Farquharson, Shawna; Jackson, Graeme D; Kuzniecky, Ruben
2016-01-01
Changes in hardware or image-processing settings are a common issue for large multicenter studies. To pool MRI data acquired under these changed conditions, it is necessary to demonstrate that the changes do not affect MRI-based measurements. In these circumstances, classical inference testing is inappropriate because it is designed to detect differences, not prove similarity. We used a method known as statistical equivalence testing to address this limitation. Equivalence testing was carried out on 3 datasets: (1) cortical thickness and automated hippocampal volume estimates obtained from healthy individuals imaged using different multichannel head coils; (2) manual hippocampal volumetry obtained using two readers; and (3) corpus callosum area estimates obtained using an automated method with manual cleanup carried out by two readers. Equivalence testing was carried out using the "two one-sided tests" (TOST) approach. Power analyses of the TOST were used to estimate sample sizes required for well-powered equivalence testing analyses. Mean and standard deviation estimates from the automated hippocampal volume dataset were used to carry out an example power analysis. Cortical thickness values were found to be equivalent over 61% of the cortex when different head coils were used (q < .05, false discovery rate correction). Automated hippocampal volume estimates obtained using the same two coils were statistically equivalent (TOST P = 4.28 × 10⁻¹⁵). Manual hippocampal volume estimates obtained using two readers were not statistically equivalent (TOST P = .97). The use of different readers to carry out limited correction of automated corpus callosum segmentations yielded equivalent area estimates (TOST P = 1.28 × 10⁻¹⁴). Power analysis of simulated and automated hippocampal volume data demonstrated that the equivalence margin affects the number of subjects required for well-powered equivalence tests. We have presented a statistical method for determining if
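The "two one-sided tests" procedure is simple to state in code. The sketch below is a large-sample z-approximation of a paired TOST (the study itself uses t-based tests and FDR correction); the equivalence margin and the simulated "volume" numbers are invented for illustration.

```python
import math
import random

def tost_equivalence_p(x, y, margin):
    """Paired TOST, large-sample z-approximation: equivalence is declared
    when both one-sided tests reject, i.e. when the true mean difference
    can be claimed to lie inside (-margin, +margin)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)
    se = math.sqrt(var / n)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    p_lower = 1.0 - phi((mean + margin) / se)  # H0: difference <= -margin
    p_upper = phi((mean - margin) / se)        # H0: difference >= +margin
    return max(p_lower, p_upper)               # overall TOST p-value

# Illustration with synthetic "volumes" (mm^3) from two scanner setups.
rng = random.Random(0)
coil_a = [rng.gauss(3000.0, 150.0) for _ in range(60)]
coil_b = [v + rng.gauss(0.0, 10.0) for v in coil_a]     # tiny re-test noise
print(tost_equivalence_p(coil_a, coil_b, margin=50.0))  # small p => equivalent
```

Note the inversion relative to a classical t-test: a small TOST p-value supports similarity, while failing to reject (as in the two-reader manual volumetry above) means equivalence cannot be claimed.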
Introducing linear functions: an alternative statistical approach
NASA Astrophysics Data System (ADS)
Nolan, Caroline; Herbert, Sandra
2015-12-01
The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be `threshold concepts'. There is recognition that linear functions can be taught in context through the exploration of linear modelling examples, but this has its limitations. Currently, statistical data is easily attainable, and graphics or computer algebra system (CAS) calculators are common in many classrooms. The use of this technology provides ease of access to different representations of linear functions as well as the ability to fit a least-squares line for real-life data. This means these calculators could support a possible alternative approach to the introduction of linear functions. This study compares the results of an end-of-topic test for two classes of Australian middle secondary students at a regional school to determine if such an alternative approach is feasible. In this study, test questions were grouped by concept and subjected to concept by concept analysis of the means of test results of the two classes. This analysis revealed that the students following the alternative approach demonstrated greater competence with non-standard questions.
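For reference, the least-squares line that such graphics/CAS calculators fit has a classical closed form. The sketch below (with invented data points) shows the slope and intercept computation the calculator performs behind the scenes.

```python
def least_squares_line(xs, ys):
    """Fit y = a*x + b by minimising the sum of squared vertical residuals.
    Closed form: a = S_xy / S_xx, b = mean(y) - a * mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx        # slope: change in y per unit change in x
    b = my - a * mx      # intercept: predicted y at x = 0
    return a, b

# Invented example data lying exactly on y = 2x + 1:
print(least_squares_line([1, 2, 3, 4], [3, 5, 7, 9]))  # → (2.0, 1.0)
```

With real (noisy) data the fitted slope and intercept become the "parameters" whose roles students must interpret, which is exactly the threshold concept the study targets.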
Phase statistics approach to human ventricular fibrillation
NASA Astrophysics Data System (ADS)
Wu, Ming-Chya; Watanabe, Eiichi; Struzik, Zbigniew R.; Hu, Chin-Kun; Yamamoto, Yoshiharu
2009-11-01
Ventricular fibrillation (VF) is known to be the most dangerous cardiac arrhythmia, frequently leading to sudden cardiac death (SCD). During VF, cardiac output drops to nil and, unless the fibrillation is promptly halted, death usually ensues within minutes. While delivering life saving electrical shocks is a method of preventing SCD, it has been recognized that some, though not many, VF episodes are self-terminating, and understanding the mechanism of spontaneous defibrillation might provide newer therapeutic options for treatment of this otherwise fatal arrhythmia. Using the phase statistics approach, recently developed to study financial and physiological time series, here, we reveal the timing characteristics of transient features of ventricular tachyarrhythmia (mostly VF) electrocardiogram (ECG) and find that there are three distinct types of probability density function (PDF) of phase distributions: uniform (UF), concave (CC), and convex (CV). Our data show that VF patients with UF or CC types of PDF have approximately the same probability of survival and nonsurvival, while VF patients with CV type PDF have zero probability of survival, implying that their VF episodes are never self-terminating. Our results suggest that detailed phase statistics of human ECG data may be a key to understanding the mechanism of spontaneous defibrillation of fatal VF.
Multivariate analysis: A statistical approach for computations
NASA Astrophysics Data System (ADS)
Michu, Sachin; Kaushik, Vandana
2014-10-01
Multivariate analysis is a statistical approach commonly used in automotive diagnosis, education, evaluating clusters in finance, etc., and more recently in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion of factor analysis (FA) in image retrieval methods and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database, based on their content. The problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include the various attacks on the network, such as DDoS attacks and network scanning.
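A minimal version of correlation-matrix anomaly detection can be sketched as follows. This is an illustration of the general idea only (the feature names and the Frobenius-distance score are assumptions, not the paper's exact method): compute the Pearson correlation matrix of traffic features in a window and flag windows whose correlation structure drifts far from a baseline.

```python
import math

def corr_matrix(rows):
    """Pearson correlation coefficient matrix of the columns of `rows`
    (each row = one observation of several traffic features)."""
    cols = list(zip(*rows))
    n = len(rows)
    means = [sum(c) / n for c in cols]
    def cov(i, j):
        return sum((cols[i][k] - means[i]) * (cols[j][k] - means[j])
                   for k in range(n)) / n
    sd = [math.sqrt(cov(i, i)) for i in range(len(cols))]
    m = len(cols)
    return [[cov(i, j) / (sd[i] * sd[j]) for j in range(m)] for i in range(m)]

def anomaly_score(window, baseline):
    """Frobenius distance between the window's and the baseline's
    correlation matrices; large values suggest anomalous traffic."""
    w, b = corr_matrix(window), corr_matrix(baseline)
    return math.sqrt(sum((w[i][j] - b[i][j]) ** 2
                         for i in range(len(w)) for j in range(len(w))))

# Toy traffic: columns = (packets/s, distinct ports probed).
baseline = [[1, 5], [2, 3], [3, 6], [4, 2], [5, 7], [6, 1]]
scan     = [[1, 2], [2, 4], [3, 6], [4, 8]]   # ports scale with rate: scan-like
print(anomaly_score(scan, baseline))
```

The scan window's features are perfectly correlated while the baseline's are not, so the score is far from zero; a threshold on this score would raise the alarm.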
Robot Trajectories Comparison: A Statistical Approach
Ansuategui, A.; Arruti, A.; Susperregi, L.; Yurramendi, Y.; Jauregi, E.; Lazkano, E.; Sierra, B.
2014-01-01
The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to robot trajectory comparison that can be applied to any kind of trajectory, in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization named polygraph is provided, which helps to better understand the obtained results. The proposed method has been applied, as an example, to compare two different motion planners, FM2 and WaveFront, using different environments, robots, and local planners. PMID:25525618
Teaching Statistics by Spreadsheet: A Developmental Approach.
ERIC Educational Resources Information Center
Ostrowski, John W.
1988-01-01
Presents a framework for using spreadsheet software (Lotus 1-2-3) on a microcomputer to develop statistical procedure templates for teaching statistical concepts. Provides an overview of traditional computer-based statistical applications, an outline for teaching-oriented statistical applications with illustrations, and suggestions for integrating…
Intelligence and embodiment: a statistical mechanics approach.
Chinea, Alejandro; Korutcheva, Elka
2013-04-01
Evolutionary neuroscience has been mainly dominated by the principle of phylogenetic conservation, specifically, by the search for similarities in brain organization. This principle states that closely related species tend to be similar because they have a common ancestor. However, explaining, for instance, behavioral differences between humans and chimpanzees has been revealed to be notoriously difficult. In this paper, the hypothesis of a common information-processing principle exploited by the brains evolved through natural evolution is explored. A model combining recent advances in cognitive psychology and evolutionary neuroscience is presented. The macroscopic effects associated with the intelligence-like structures postulated by the model are analyzed from a statistical mechanics point of view. As a result of this analysis, some plausible explanations are put forward concerning the disparities and similarities in cognitive capacities which are observed in nature across species. Furthermore, an interpretation of the efficiency of the brain's computations is also provided. These theoretical results and their implications for modern theories of intelligence are shown to be consistent with the formulated hypothesis. PMID:23454920
Statistical approach to partial equilibrium analysis
NASA Astrophysics Data System (ADS)
Wang, Yougui; Stanley, H. E.
2009-04-01
A statistical approach to market equilibrium and efficiency analysis is proposed in this paper. One factor that governs the exchange decisions of traders in a market, named the willingness price, is highlighted and forms the basis of the whole theory. The supply and demand functions are formulated as the distributions of corresponding willing exchange over the willingness price. The laws of supply and demand can be derived directly from these distributions. The characteristics of the excess demand function are analyzed, and the necessary conditions for the existence and uniqueness of the equilibrium point of the market are specified. The rationing rates of buyers and sellers are introduced to describe the ratio of realized exchange to willing exchange, and their dependence on the market price is studied in the cases of shortage and surplus. The realized market surplus, which is the criterion of market efficiency, can be written as a function of the distributions of willing exchange and the rationing rates. With this approach we can strictly prove that a market is efficient in the state of equilibrium.
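The willingness-price construction lends itself to a short numerical sketch. In this hypothetical illustration (all numbers invented), demand at price p counts buyers whose willingness price is at least p, supply counts sellers willing to sell at or below p, and the equilibrium price is where excess demand vanishes.

```python
def equilibrium_price(buyer_willingness, seller_willingness, price_grid):
    """Demand(p) = number of buyers with willingness price >= p;
    Supply(p) = number of sellers with willingness price <= p.
    Equilibrium: the grid price minimising |excess demand|."""
    def demand(p):
        return sum(1 for w in buyer_willingness if w >= p)
    def supply(p):
        return sum(1 for w in seller_willingness if w <= p)
    return min(price_grid, key=lambda p: abs(demand(p) - supply(p)))

buyers = [10, 9, 8, 7, 6]     # invented buyer willingness prices
sellers = [5, 6, 7, 8, 9]     # invented seller willingness prices
print(equilibrium_price(buyers, sellers, range(1, 12)))  # → 7
```

Below the equilibrium price demand exceeds supply (shortage) and above it supply exceeds demand (surplus), which is where the paper's rationing rates come into play.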
Uncertainty quantification approaches for advanced reactor analyses.
Briggs, L. L.; Nuclear Engineering Division
2009-03-24
The original approach to nuclear reactor design or safety analyses was to make very conservative modeling assumptions so as to ensure meeting the required safety margins. Traditional regulation, as established by the U.S. Nuclear Regulatory Commission, required conservatisms which have subsequently been shown to be excessive. The commission has therefore moved away from excessively conservative evaluations and has determined best-estimate calculations to be an acceptable alternative to conservative models, provided the best-estimate results are accompanied by an uncertainty evaluation which can demonstrate that, when a set of analysis cases which statistically account for uncertainties of all types are generated, there is a 95% probability that at least 95% of the cases meet the safety margins. To date, nearly all published work addressing uncertainty evaluations of nuclear power plant calculations has focused on light water reactors and on large-break loss-of-coolant accident (LBLOCA) analyses. However, there is nothing in the uncertainty evaluation methodologies that is limited to a specific type of reactor or to specific types of plant scenarios. These same methodologies can be equally well applied to analyses for high-temperature gas-cooled reactors and to liquid metal reactors, and they can be applied to steady-state calculations, operational transients, or severe accident scenarios. This report reviews and compares both statistical and deterministic uncertainty evaluation approaches. Recommendations are given for selection of an uncertainty methodology and for considerations to be factored into the process of evaluating uncertainties for advanced reactor best-estimate analyses.
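The 95/95 criterion mentioned in this abstract is commonly met using Wilks' nonparametric formula for order statistics. The report itself is method-agnostic, so the snippet below is only an illustration of where the well-known "59 runs" figure for a first-order, one-sided 95/95 tolerance bound comes from.

```python
def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n such that the maximum of n independent code runs is a
    one-sided tolerance bound covering `coverage` of the output population
    with the given confidence (first-order Wilks):
    require 1 - coverage**n >= confidence."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

print(wilks_sample_size())            # → 59 (the classic 95/95 answer)
print(wilks_sample_size(0.95, 0.99))  # → 90 (tightening confidence to 99%)
```

The appeal for best-estimate analyses is that the number of runs is independent of how many uncertain input parameters are sampled.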
ERIC Educational Resources Information Center
McGrath, April L.; Ferns, Alyssa; Greiner, Leigh; Wanamaker, Kayla; Brown, Shelley
2015-01-01
In this study we assessed the usefulness of a multifaceted teaching framework in an advanced statistics course. We sought to expand on past findings by using this framework to assess changes in anxiety and self-efficacy, and we collected focus group data to ascertain whether students attribute such changes to a multifaceted teaching approach.…
Project T.E.A.M. (Technical Education Advancement Modules). Advanced Statistical Process Control.
ERIC Educational Resources Information Center
Dunlap, Dale
This instructional guide, one of a series developed by the Technical Education Advancement Modules (TEAM) project, is a 20-hour advanced statistical process control (SPC) and quality improvement course designed to develop the following competencies: (1) understanding quality systems; (2) knowing the process; (3) solving quality problems; and (4)…
Tools for the advancement of undergraduate statistics education
NASA Astrophysics Data System (ADS)
Schaffner, Andrew Alan
To keep pace with advances in applied statistics and to maintain literate consumers of quantitative analyses, statistics educators stress the need for change in the classroom (Cobb, 1992; Garfield, 1993, 1995; Moore, 1991a; Snee, 1993; Steinhorst and Keeler, 1995). These authors stress a more concept oriented undergraduate introductory statistics course which emphasizes true understanding over mechanical skills. Drawing on recent educational research, this dissertation attempts to realize this vision by developing tools and pedagogy to assist statistics instructors. This dissertation describes statistical facets, pieces of statistical understanding that are building blocks of knowledge, and discusses DIANA, a World-Wide Web tool for diagnosing facets. Further, I show how facets may be incorporated into course design through the development of benchmark lessons based on the principles of collaborative learning (diSessa and Minstrell, 1995; Cohen, 1994; Reynolds et al., 1995; Bruer, 1993; von Glasersfeld, 1991) and activity based courses (Jones, 1991; Yackel, Cobb and Wood, 1991). To support benchmark lessons and collaborative learning in large classes I describe Virtual Benchmark Instruction, benchmark lessons which take place on a structured hypertext bulletin board using the technology of the World-Wide Web. Finally, I present randomized experiments which suggest that these educational developments are effective in a university introductory statistics course.
Optimal Statistical Approach to Optoacoustic Image Reconstruction
NASA Astrophysics Data System (ADS)
Zhulina, Yulia V.
2000-11-01
An optimal statistical approach is applied to the task of image reconstruction in photoacoustics. The physical essence of the task is as follows: pulse laser irradiation induces an ultrasound wave on the inhomogeneities inside the investigated volume. This acoustic wave is received by a set of receivers outside this volume. It is necessary to reconstruct a spatial image of these inhomogeneities. Mathematical techniques developed in radiolocation theory are used for solving the task. An algorithm of maximum likelihood is synthesized for the image reconstruction. The obtained algorithm is investigated by digital modeling. The number of receivers and their disposition in space are arbitrary. Results of the synthesis are applied to noninvasive medical diagnostics (breast cancer). The capability of the algorithm is tested on real signals. The image is built using signals obtained in vitro. The essence of the algorithm includes (i) summing of all signals in the image plane with the transform from the time coordinates of signals to the spatial coordinates of the image and (ii) optimal spatial filtration of this sum. The results are shown in the figures.
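Step (i), summing signals with a time-to-space transform, is the classic delay-and-sum back-projection. The sketch below is a much-simplified stand-in (unit sampling interval, invented geometry, and no optimal filtration step (ii)), not the authors' maximum-likelihood implementation.

```python
import math

def delay_and_sum(signals, sensors, pixels, c=1.5):
    """Back-project recorded traces onto image pixels: the sample that
    contributes to a pixel is the one whose arrival time equals
    distance(sensor, pixel) / c (sound speed), in sample units."""
    image = []
    for px in pixels:
        total = 0.0
        for sensor, trace in zip(sensors, signals):
            t = int(round(math.dist(sensor, px) / c))  # time of flight in samples
            if 0 <= t < len(trace):
                total += trace[t]
        image.append(total)
    return image

# A point source at the origin seen by two sensors; the impulse arrival
# times follow from the distances 3 and 4 at sound speed c = 1.5:
traces = [[0, 0, 1, 0, 0, 0, 0, 0, 0, 0],   # sensor at (3, 0): t = 2
          [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]]   # sensor at (0, 4): t = round(2.67) = 3
print(delay_and_sum(traces, [(3, 0), (0, 4)], [(0, 0), (10, 10)]))  # → [2.0, 0.0]
```

Only the pixel at the true source location accumulates coherent contributions from every receiver, which is why the subsequent spatial filtering of the summed image pays off.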
A Statistical Approach to Automatic Speech Summarization
NASA Astrophysics Data System (ADS)
Hori, Chiori; Furui, Sadaoki; Malkin, Rob; Yu, Hua; Waibel, Alex
2003-12-01
This paper proposes a statistical approach to automatic speech summarization. In our method, a set of words maximizing a summarization score indicating the appropriateness of summarization is extracted from automatically transcribed speech and then concatenated to create a summary. The extraction process is performed using a dynamic programming (DP) technique based on a target compression ratio. In this paper, we demonstrate how an English news broadcast transcribed by a speech recognizer is automatically summarized. We adapted our method, which was originally proposed for Japanese, to English by modifying the model for estimating word concatenation probabilities based on a dependency structure in the original speech given by a stochastic dependency context free grammar (SDCFG). We also propose a method of summarizing multiple utterances using a two-level DP technique. The automatically summarized sentences are evaluated by summarization accuracy based on a comparison with a manual summary of speech that has been correctly transcribed by human subjects. Our experimental results indicate that the method we propose can effectively extract relatively important information and remove redundant and irrelevant information from English news broadcasts.
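The word-extraction DP can be illustrated with a toy version. Everything here is a stand-in for the paper's actual components: `sig` plays the role of the word-significance score, `link` stands in for the SDCFG-based word-concatenation probability, and a target word count plays the role of the compression ratio.

```python
def summarize(words, sig, link, m):
    """Pick m words (keeping original order) maximising the sum of per-word
    significance scores plus a concatenation score for each adjacent kept
    pair, via dynamic programming over (last kept word, words kept so far)."""
    n = len(words)
    NEG = float("-inf")
    best = [[NEG] * (m + 1) for _ in range(n)]   # best[i][k]: best k-word summary ending at i
    back = [[None] * (m + 1) for _ in range(n)]
    for i in range(n):
        best[i][1] = sig[i]
        for k in range(2, m + 1):
            for j in range(i):
                if best[j][k - 1] == NEG:
                    continue
                s = best[j][k - 1] + sig[i] + link(words[j], words[i])
                if s > best[i][k]:
                    best[i][k] = s
                    back[i][k] = j
    # Recover the best m-word path by following back-pointers.
    end = max(range(n), key=lambda i: best[i][m])
    out, i, k = [], end, m
    while i is not None:
        out.append(words[i])
        i, k = back[i][k], k - 1
    return list(reversed(out))

words = "the president spoke about the economy today".split()
sig = [0, 3, 2, 0, 0, 3, 1]            # invented significance scores
no_link = lambda a, b: 0.0             # concatenation scores ignored here
print(summarize(words, sig, no_link, 3))  # → ['president', 'spoke', 'economy']
```

In the paper the second DP level applies the same idea across utterance boundaries; here a single pass over one "utterance" is enough to show the mechanics.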
Statistical physics approaches to understanding physiological signals
NASA Astrophysics Data System (ADS)
Chen, Zhi
This thesis applies novel statistical physics approaches to investigate complex mechanisms underlying some physiological signals related to human motor activity and stroke. The scale-invariant properties of motor activity fluctuations and the phase coupling between blood flow (BF) in the brain and blood pressure (BP) at the finger are studied. Both BF and BP signals are controlled by cerebral autoregulation, the impairment of which is relevant to stroke. Part I of this thesis introduces experimental methods of assessing human activity fluctuations, BF and BP signals. These signals are often nonstationary, i.e., the mean and the standard deviation of signals are not invariant under time shifts. This fact imposes challenges in correctly analyzing properties of such signals. A review of conventional methods and the methods from statistical physics in quantifying long-range power-law correlations (an important scale-invariant property) and phase coupling in nonstationary signals is provided. Part II investigates the effects of trends, nonstationarities and applying certain nonlinear filters on the scale-invariant properties of signals. Nonlinear logarithmic filters are shown to change correlation properties of anti-correlated signals and strongly positively-correlated signals. It is also shown that different types of trends may change correlation properties and thus mask true correlations in the original signal. A "superposition rule" is established to quantitatively describe the relationship among correlation properties of any two signals and the sum of these two signals. Based on this rule, simulations are conducted to show how to distinguish the correlations due to trends and nonstationarities from the true correlations in real-world signals. Part III investigates dynamics of human activity fluctuations. Results suggest that apparently random forearm motion possesses previously unrecognized dynamic patterns characterized by common distribution forms, scale
Statistical physics approaches to Alzheimer's disease
NASA Astrophysics Data System (ADS)
Peng, Shouyong
Alzheimer's disease (AD) is the most common cause of late life dementia. In the brain of an AD patient, neurons are lost and spatial neuronal organizations (microcolumns) are disrupted. An adequate quantitative analysis of microcolumns requires that we automate the neuron recognition stage in the analysis of microscopic images of human brain tissue. We propose a recognition method based on statistical physics. Specifically, Monte Carlo simulations of an inhomogeneous Potts model are applied for image segmentation. Unlike most traditional methods, this method improves the recognition of overlapped neurons, and thus improves the overall recognition percentage. Although the exact causes of AD are unknown, as experimental advances have revealed the molecular origin of AD, they have continued to support the amyloid cascade hypothesis, which states that early stages of aggregation of amyloid beta (Abeta) peptides lead to neurodegeneration and death. X-ray diffraction studies reveal the common cross-beta structural features of the final stable aggregates, amyloid fibrils. Solid-state NMR studies also reveal structural features for some well-ordered fibrils. But currently there is no feasible experimental technique that can reveal the exact structure or the precise dynamics of assembly and thus help us understand the aggregation mechanism. Computer simulation offers a way to understand the aggregation mechanism on the molecular level. Because traditional all-atom continuous molecular dynamics simulations are not fast enough to investigate the whole aggregation process, we apply coarse-grained models and discrete molecular dynamics methods to increase the simulation speed. First we use a coarse-grained two-bead (two beads per amino acid) model. Simulations show that peptides can aggregate into multilayer beta-sheet structures, which agree with X-ray diffraction experiments. To better represent the secondary structure transition happening during aggregation, we refine the
Hidden Statistics Approach to Quantum Simulations
NASA Technical Reports Server (NTRS)
Zak, Michail
2010-01-01
Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent structured data, and that quantum mechanisms can be devised and built to perform operations on these data. Three basic non-classical properties of quantum mechanics (superposition, entanglement, and direct-product decomposability) were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large masses of highly correlated data. Unfortunately, these advantages of quantum mechanics came at a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world would cause the system to decohere. That is why the hardware implementation of a quantum computer is still unsolved. The basic idea of this work is to create a new kind of dynamical system that would preserve the three main properties of quantum physics (superposition, entanglement, and direct-product decomposability) while allowing one to measure its state variables using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both the quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulation of the Schroedinger equation is proposed. The system is a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of quantum potential (which has been overlooked in previous treatments of the Madelung equation). The role of the
A statistical approach to root system classification
Bodner, Gernot; Leitner, Daniel; Nakhforoosh, Alireza; Sobotik, Monika; Moder, Karl; Kaul, Hans-Peter
2013-01-01
Plant root systems have a key role in ecology and agronomy. In spite of a fast increase in root studies, there is still no classification that allows distinguishing among distinctive characteristics within the diversity of rooting strategies. Our hypothesis is that a multivariate approach for “plant functional type” identification in ecology can be applied to the classification of root systems. The classification method presented is based on a data-defined statistical procedure without an a priori decision on the classifiers. The study demonstrates that principal-component-based rooting types provide efficient and meaningful multi-trait classifiers. The classification method is exemplified with simulated root architectures and morphological field data. Simulated root architectures showed that morphological attributes with spatial distribution parameters capture the most distinctive features within root system diversity. While developmental type (tap vs. shoot-borne systems) is a strong but coarse classifier, topological traits provide the most detailed differentiation among distinctive groups. The adequacy of commonly available morphologic traits for classification is supported by the field data. Rooting types emerging from the measured data were mainly distinguished into diameter/weight-dominated and density-dominated types. Similarity of root systems within distinctive groups was the joint result of phylogenetic relation and environmental as well as human selection pressure. We conclude that the data-defined classification is appropriate for the integration of knowledge obtained with different root measurement methods and at various scales. Currently, root morphology is the most promising basis for classification due to widely used common measurement protocols. To capture details of root diversity, efforts in architectural measurement techniques are essential. PMID:23914200
A statistical mechanics approach to Granovetter theory
NASA Astrophysics Data System (ADS)
Barra, Adriano; Agliari, Elena
2012-05-01
In this paper we try to bridge breakthroughs in quantitative sociology/econometrics, pioneered during the last decades by McFadden, Brock-Durlauf, Granovetter and Watts-Strogatz, by introducing a minimal model able to reproduce essentially all the features of social behavior highlighted by these authors. Our model relies on a pairwise Hamiltonian for decision-maker interactions which naturally extends the multi-population approaches by shifting and biasing the pattern definitions of a Hopfield model of neural networks. Once introduced, the model is investigated through graph theory (to recover the Granovetter and Watts-Strogatz results) and statistical mechanics (to recover the McFadden and Brock-Durlauf results). Due to the internal symmetries of our model, the latter is obtained as the relaxation of a proper Markov process, allowing us even to study its out-of-equilibrium properties. The method used to solve its equilibrium is an adaptation of the Hamilton-Jacobi technique recently introduced by Guerra in the spin-glass scenario, and the picture obtained is the following: shifting the patterns from [-1,+1] to [0,+1] implies that the larger the amount of similarities among decision makers, the stronger their relative influence, and this is enough to explain both the different roles of strong and weak ties in the social network as well as its small-world properties. As a result, imitative interaction strengths seem an essentially robust requirement (enough to break the gauge symmetry in the couplings); furthermore, this naturally leads to a discrete-choice model when dealing with external influences and to imitative behavior à la Curie-Weiss as the one introduced by Brock and Durlauf.
Statistical physics approaches to financial fluctuations
NASA Astrophysics Data System (ADS)
Wang, Fengzhong
2009-12-01
Complex systems attract many researchers from various scientific fields. Financial markets are one of these widely studied complex systems. Statistical physics, which was originally developed to study large systems, provides novel ideas and powerful methods to analyze financial markets. The study of financial fluctuations characterizes market behavior and helps to better understand the underlying market mechanism. Our study focuses on volatility, a fundamental quantity characterizing financial fluctuations. We examine equity data of the entire U.S. stock market during 2001 and 2002. To analyze the volatility time series, we develop a new approach, called return interval analysis, which examines the time intervals between two successive volatility values exceeding a given threshold. We find that the return interval distribution displays scaling over a wide range of thresholds. This scaling is valid for a range of time windows, from one minute up to one day. Moreover, our results are similar for commodities, interest rates, currencies, and for stocks of different countries. Further analysis shows some systematic deviations from a scaling law, which we can attribute to nonlinear correlations in the volatility time series. We also find a memory effect in return intervals for different time scales, which is related to the long-term correlations in the volatility. To further characterize the mechanism of price movement, we simulate the volatility time series using two different models, fractionally integrated generalized autoregressive conditional heteroscedasticity (FIGARCH) and fractional Brownian motion (fBm), and test these models with return interval analysis. We find that both models can mimic time memory, but only fBm shows scaling in the return interval distribution. In addition, we examine the volatility from daily opening to closing and from closing to opening. We find that each volatility distribution has a power-law tail. Using the detrended fluctuation
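The core of the return interval construction described above can be sketched in a few lines (the function name is mine, not the paper's): mark the times at which volatility exceeds the threshold, then take the gaps between successive exceedances.

```python
import numpy as np

def return_intervals(volatility, threshold):
    """Time intervals between successive volatility values that exceed
    the given threshold (the basic object of return interval analysis)."""
    exceedances = np.flatnonzero(np.asarray(volatility) > threshold)
    return np.diff(exceedances)
```

Repeating this for thresholds spanning several quantiles, and rescaling each interval distribution by its mean, is what the scaling analysis examines.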
An Active Learning Approach to Teaching Statistics.
ERIC Educational Resources Information Center
Dolinsky, Beverly
2001-01-01
Provides suggestions for using active learning as the primary means to teaching statistics in order to create a collaborative environment. Addresses such strategies as using SPSS Base 7.5 for Windows and course periods centered on answering student-generated questions. Discusses various writing intensive assignments. (CMK)
Unified statistical approach to cortical thickness analysis.
Chung, Moo K; Robbins, Steve; Evans, Alan C
2005-01-01
This paper presents a unified image processing and analysis framework for cortical thickness in characterizing a clinical population. The emphasis is placed on the development of a data smoothing and analysis framework. The human brain cortex is a highly convoluted surface. Due to the convoluted non-Euclidean surface geometry, data smoothing and analysis on the cortex are inherently difficult. When measurements lie on a curved surface, it is natural to assign kernel smoothing weights based on the geodesic distance along the surface rather than the Euclidean distance. We present a new data smoothing framework that addresses this problem implicitly, without actually computing the geodesic distance, and present its statistical properties. Afterwards, statistical inference is based on a multiple comparison correction derived from random field theory. As an illustration, we have applied the method to detecting regions of abnormal cortical thickness in 16 high-functioning autistic children. PMID:17354731
A statistical approach for polarized parton distributions
NASA Astrophysics Data System (ADS)
Bourrely, C.; Soffer, J.; Buccella, F.
2002-04-01
A global next-to-leading order QCD analysis of unpolarized and polarized deep-inelastic scattering data is performed with parton distributions constructed in a statistical physical picture of the nucleon. The chiral properties of QCD lead to strong relations between quark and antiquark distributions, and the importance of the Pauli exclusion principle is also emphasized. We obtain a good description, in a broad range of x and Q^2, of all measured structure functions in terms of very few free parameters. We stress the fact that at RHIC-BNL the ratio of the unpolarized cross sections for the production of W^+ and W^- in pp collisions will directly probe the behavior of the bar d(x)/bar u(x) ratio for x ≥ 0.2, a definite and important test for the statistical model. Finally, we give specific predictions for various helicity asymmetries for W^± and Z production in pp collisions at high energies, which will be measured in forthcoming experiments at RHIC-BNL and which are sensitive tests of the statistical model for Δ bar u(x) and Δ bar d(x).
Supersymmetric Liouville theory: A statistical mechanical approach
Barrozo, M.C.; Belvedere, L.V.
1996-02-01
The statistical mechanical system associated with the two-dimensional supersymmetric Liouville theory is obtained through an infrared-finite perturbation expansion. Considering the system confined in a finite volume and in the presence of a uniform neutralizing background, we show that the grand-partition function of this system describes a one-component gas, in which the Boltzmann factor is weighted by an integration over the Grassmann variables. This weight function introduces the dimensional reduction phenomenon. After performing the thermodynamic limit, the resulting supersymmetric quantum theory is translationally invariant. © 1996 The American Physical Society.
Random graph coloring: statistical physics approach.
van Mourik, J; Saad, D
2002-11-01
The problem of vertex coloring in random graphs is studied using methods of statistical physics and probability. Our analytical results are compared to those obtained by exact enumeration and Monte Carlo simulations. We critically discuss the merits and shortcomings of the various methods, and interpret the results obtained. We present an exact analytical expression for the two-coloring problem as well as general replica symmetric approximated solutions for the thermodynamics of the graph coloring problem with p colors and K-body edges. PMID:12513569
Statistical mechanical approach to human language
NASA Astrophysics Data System (ADS)
Kosmidis, Kosmas; Kalampokis, Alkiviadis; Argyrakis, Panos
2006-07-01
We use the formulation of equilibrium statistical mechanics in order to study some important characteristics of language. Using a simple expression for the Hamiltonian of a language system, which is directly implied by the Zipf law, we are able to explain several characteristic features of human language that seem completely unrelated, such as the universality of the Zipf exponent, the vocabulary size of children, the reduced communication abilities of people suffering from schizophrenia, etc. While several explanations are necessarily only qualitative at this stage, we have, nevertheless, been able to derive a formula for the vocabulary size of children as a function of age, which agrees rather well with experimental data.
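The Zipf exponent the Hamiltonian is built around can be estimated from ranked word frequencies by a log-log fit. This is a generic sketch, not the authors' code, and the function name is my own:

```python
import numpy as np

def zipf_exponent(frequencies):
    """Fit the exponent s in the Zipf law f(r) ~ r**(-s) from word
    frequencies, using a least-squares fit in log-log coordinates."""
    f = np.sort(np.asarray(frequencies, dtype=float))[::-1]  # rank order
    ranks = np.arange(1, len(f) + 1)
    return -np.polyfit(np.log(ranks), np.log(f), 1)[0]
```

For natural-language corpora the fitted exponent is typically close to 1, which is the universality the abstract refers to.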
A Hierarchical Statistic Methodology for Advanced Memory System Evaluation
Sun, X.-J.; He, D.; Cameron, K.W.; Luo, Y.
1999-04-12
Advances in technology have resulted in a widening of the gap between computing speed and memory access time. Data access time has become increasingly important for computer system design. Various hierarchical memory architectures have been developed. The performance of these advanced memory systems, however, varies with applications and problem sizes. How to reach an optimal cost/performance design still eludes researchers. In this study, the authors introduce an evaluation methodology for advanced memory systems. This methodology is based on statistical factorial analysis and performance scalability analysis. It is twofold: it first determines the impact of memory systems and application programs on overall performance; it then identifies the bottleneck in a memory hierarchy and provides cost/performance comparisons via scalability analysis. Different memory systems can be compared in terms of mean performance or scalability over a range of codes and problem sizes. Experimental testing has been performed extensively on the Department of Energy's Accelerated Strategic Computing Initiative (ASCI) machines and benchmarks available at Los Alamos National Laboratory to validate this newly proposed methodology. Experimental and analytical results show that this methodology is simple and effective. It is a practical tool for memory system evaluation and design. Its extension to general architectural evaluation and parallel computer systems is possible and should be further explored.
A Statistical Approach for Ambiguous Sequence Mappings
Technology Transfer Automated Retrieval System (TEKTRAN)
When attempting to map RNA sequences to a reference genome, high percentages of short sequence reads are often assigned to multiple genomic locations. One approach to handling these “ambiguous mappings” has been to discard them. This results in a loss of data, which can sometimes be as much as 45% o...
Statistical Physics Approaches to RNA Editing
NASA Astrophysics Data System (ADS)
Bundschuh, Ralf
2012-02-01
The central dogma of molecular biology states that DNA is transcribed base by base into RNA, which is in turn translated into proteins. However, some organisms edit their RNA before translation by inserting, deleting, or substituting individual or short stretches of bases. In many instances the mechanisms by which an organism recognizes the positions at which to edit, or by which it performs the actual editing, are unknown. One model system that stands out by its very high editing rate, with on average one out of 25 bases being edited, is the Myxomycetes, a class of slime molds. In this talk we will show how computational methods and concepts from statistical physics can be used to analyze DNA and protein sequence data to predict editing sites in these slime molds and to guide experiments that identified previously unknown types of editing as well as the complete set of editing events in the slime mold Physarum polycephalum.
Advanced Safeguards Approaches for New Reprocessing Facilities
Durst, Philip C.; Therios, Ike; Bean, Robert; Dougan, A.; Boyer, Brian; Wallace, Richard; Ehinger, Michael H.; Kovacic, Don N.; Tolk, K.
2007-06-24
U.S. efforts to promote the international expansion of nuclear energy through the Global Nuclear Energy Partnership (GNEP) will result in a dramatic expansion of nuclear fuel cycle facilities in the United States. New demonstration facilities, such as the Advanced Fuel Cycle Facility (AFCF), the Advanced Burner Reactor (ABR), and the Consolidated Fuel Treatment Center (CFTC) will use advanced nuclear and chemical process technologies that must incorporate increased proliferation resistance to enhance nuclear safeguards. The ASA-100 Project, “Advanced Safeguards Approaches for New Nuclear Fuel Cycle Facilities,” commissioned by the NA-243 Office of NNSA, has been tasked with reviewing and developing advanced safeguards approaches for these demonstration facilities. Because one goal of GNEP is developing and sharing proliferation-resistant nuclear technology and services with partner nations, the safeguards approaches considered are consistent with international safeguards as currently implemented by the International Atomic Energy Agency (IAEA). This first report reviews possible safeguards approaches for the new fuel reprocessing processes to be deployed at the AFCF and CFTC facilities. Similar analyses addressing the ABR and transuranic (TRU) fuel fabrication lines at AFCF and CFTC will be presented in subsequent reports.
An Integrated, Statistical Molecular Approach to the Physical Chemistry Curriculum
ERIC Educational Resources Information Center
Cartier, Stephen F.
2009-01-01
As an alternative to the "thermodynamics first" or "quantum first" approaches to the physical chemistry curriculum, the statistical definition of entropy and the Boltzmann distribution are introduced in the first days of the course and the entire two-semester curriculum is then developed from these concepts. Once the tools of statistical mechanics…
Statistical Approach To Determination Of Texture In SAR
NASA Technical Reports Server (NTRS)
Rignot, Eric J.; Kwok, Ronald
1993-01-01
Paper presents statistical approach to analysis of texture in synthetic-aperture-radar (SAR) images. Objective: to extract intrinsic spatial variability of distributed target from overall spatial variability of SAR image.
Chemical Approaches for Advanced Optical Imaging
NASA Astrophysics Data System (ADS)
Chen, Zhixing
Advances in optical microscopy have been constantly expanding our knowledge of biological systems. The achievements therein are a result of close collaborations between physicists/engineers who build the imaging instruments and chemists/biochemists who design the corresponding probe molecules. In this work I present a number of chemical approaches for the development of advanced optical imaging methods. Chapter 1 provides an overview of the recent advances of novel imaging approaches taking advantage of chemical tag technologies. Chapter 2 describes the second-generation covalent trimethoprim-tag as a viable tool for live cell protein-specific labeling and imaging. In Chapter 3 we present a fluorescence lifetime imaging approach to map protein-specific micro-environment in live cells using TMP-Cy3 as a chemical probe. In Chapter 4, we present a method harnessing photo-activatable fluorophores to extend the fundamental depth limit in multi-photon microscopy. Chapter 5 describes the development of isotopically edited alkyne palette for multi-color live cell vibrational imaging of cellular small molecules. These studies exemplify the impact of modern chemical approaches in the development of advanced optical microscopies.
An approach to dyspnea in advanced disease.
Gallagher, Romayne
2003-01-01
INTRODUCTION: To describe an approach to assessment and treatment of dyspnea. SOURCES OF INFORMATION: New level I evidence can guide management of dyspnea in advanced illness. Assessment and use of adjuvant medications and oxygen relies on level II and III evidence. MAIN MESSAGE: Opioids are first-line therapy for managing dyspnea in advanced illness. They are safe and effective in reducing shortness of breath. Neuroleptics are useful adjuvant medications. Evidence does not support use of oxygen for every patient experiencing dyspnea; it should be tried for patients who do not benefit from first-line medications and nonmedicinal therapies. CONCLUSION: Opioids relieve dyspnea and are indicated as first-line treatment for dyspnea arising from advanced disease of any cause. PMID:14708926
NASA Astrophysics Data System (ADS)
Mountcastle, Donald B.; Bucy, Brandon R.; Thompson, John R.
2007-11-01
Equilibrium properties of macroscopic systems are highly predictable as n, the number of particles, approaches and exceeds Avogadro's number; theories of statistical physics depend on these results. Typical pedagogical devices used in statistical physics textbooks to introduce entropy (S) and multiplicity (ω) (where S = k ln(ω)) include flipping coins and/or other equivalent binary events, repeated n times. Prior to instruction, our statistical mechanics students usually gave reasonable answers about the probabilities, but not the relative uncertainties, of the predicted outcomes of such events. However, they reliably predicted that the uncertainty in a measured continuous quantity (e.g., the amount of rainfall) does decrease as the number of measurements increases. Typical textbook presentations assume that students understand that the relative uncertainty of binary outcomes will similarly decrease as the number of events increases. This is at odds with our findings, even though most of our students had previously completed mathematics courses in statistics, as well as an advanced electronics laboratory course that included statistical analysis of distributions of dart scores as n increased.
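The point about binary outcomes can be made concrete with the binomial distribution: the absolute spread sqrt(np(1-p)) grows with n, but the relative uncertainty shrinks as 1/sqrt(n). A minimal illustration:

```python
import math

def relative_uncertainty(n, p=0.5):
    """Relative uncertainty of the count of successes in n independent
    binary events (binomial): std/mean = sqrt(n*p*(1-p)) / (n*p),
    which falls off as 1/sqrt(n)."""
    return math.sqrt(n * p * (1 - p)) / (n * p)
```

For fair coins this gives a 10% relative spread at n = 100 and 1% at n = 10,000, the same 1/sqrt(n) behavior students readily accept for repeated continuous measurements.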
Automated statistical approach to Langley evaluation for a solar radiometer.
Kuester, Michele A; Thome, Kurtis J; Reagan, John A
2003-08-20
We present a statistical approach to Langley evaluation (SALE) leading to an improved method of calibration of an automated solar radiometer. Software was developed with the SALE method to first determine whether a day is a good calibration day and then to automatically calculate an intercept value for the solar radiometer. Results from manual processing of calibration data sets agree with those of the automated method to within the errors of each approach. PMID:12952339
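The Langley method at the heart of this calibration reduces to a straight-line fit: under the Beer-Lambert law, ln V = ln V0 - tau * m, so regressing log signal on airmass m and extrapolating to m = 0 yields the top-of-atmosphere intercept V0. A generic sketch of that fit (not the SALE day-screening logic, and the names are mine):

```python
import numpy as np

def langley_intercept(airmass, signal):
    """Langley plot: fit ln V = ln V0 - tau*m by least squares and return
    the extrapolated top-of-atmosphere signal V0 and the optical depth tau."""
    slope, intercept = np.polyfit(airmass, np.log(signal), 1)
    return np.exp(intercept), -slope
```

The SALE procedure adds automated statistical tests on the residuals of fits like this one to decide whether a given day's data are good enough for calibration.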
Advances in Testing the Statistical Significance of Mediation Effects
ERIC Educational Resources Information Center
Mallinckrodt, Brent; Abraham, W. Todd; Wei, Meifen; Russell, Daniel W.
2006-01-01
P. A. Frazier, A. P. Tix, and K. E. Barron (2004) highlighted a normal theory method popularized by R. M. Baron and D. A. Kenny (1986) for testing the statistical significance of indirect effects (i.e., mediator variables) in multiple regression contexts. However, simulation studies suggest that this method lacks statistical power relative to some…
Reconciling Statistical and Systems Science Approaches to Public Health
ERIC Educational Resources Information Center
Ip, Edward H.; Rahmandad, Hazhir; Shoham, David A.; Hammond, Ross; Huang, Terry T. -K.; Wang, Youfa; Mabry, Patricia L.
2013-01-01
Although systems science has emerged as a set of innovative approaches to study complex phenomena, many topically focused researchers including clinicians and scientists working in public health are somewhat befuddled by this methodology that at times appears to be radically different from analytic methods, such as statistical modeling, to which…
A Standardization Approach to Adjusting Pretest Item Statistics.
ERIC Educational Resources Information Center
Chang, Shun-Wen; Hanson, Bradley A.; Harris, Deborah J.
This study presents and evaluates a method of standardization that may be used by test practitioners to standardize classical item statistics when sample sizes are small. The effectiveness of this standardization approach was compared through simulation with the one-parameter logistic (1PL) and three parameter logistic (3PL) models based on the…
New Results in the Quantum Statistical Approach to Parton Distributions
NASA Astrophysics Data System (ADS)
Soffer, Jacques; Bourrely, Claude; Buccella, Franco
2015-02-01
We describe the quantum statistical approach to parton distributions, which allows one to obtain simultaneously the unpolarized distributions and the helicity distributions. We present some recent results, in particular related to the nucleon spin structure in QCD. Forthcoming measurements will provide challenging tests of the validity of this novel physical framework.
A κ-generalized statistical mechanics approach to income analysis
NASA Astrophysics Data System (ADS)
Clementi, F.; Gallegati, M.; Kaniadakis, G.
2009-02-01
This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low-middle income region up to the high-income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit the data on personal income for the United States remarkably well, and the analysis of inequality in terms of its parameters proves very powerful.
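The κ-generalized framework rests on the Kaniadakis κ-exponential, which behaves like the ordinary exponential for small |κx| but has power-law tails. A minimal sketch of that function (the income model's specific parameterization is not reproduced here):

```python
import math

def kappa_exp(x, kappa):
    """Kaniadakis kappa-exponential: (sqrt(1 + k^2 x^2) + k x)**(1/k).
    Reduces to exp(x) as kappa -> 0; for large negative arguments it decays
    as a power law, which is what produces a Pareto-like tail."""
    if kappa == 0:
        return math.exp(x)
    return (math.sqrt(1 + kappa**2 * x**2) + kappa * x) ** (1 / kappa)
```

The interpolation between an exponential bulk and a power-law tail is exactly the property the abstract credits with covering the whole spectrum of incomes.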
A Statistical Approach to Autocorrelation Detection of Low Frequency Earthquakes
NASA Astrophysics Data System (ADS)
Aguiar, A. C.; Beroza, G. C.
2012-12-01
We have analyzed tremor data during the April 2006 tremor episode in the Nankai Trough in SW Japan using the autocorrelation approach of Brown et al. (2008), which detects low frequency earthquakes (LFEs) based on pair-wise matching. We have found that the statistical behavior of the autocorrelations differs from station to station, and for this reason we have based our LFE detection method on the autocorrelation of each station individually. Analyzing one station at a time ensures that the detection threshold depends only on the station being analyzed. Once detections are found at each station individually, using a low detection threshold based on a Gaussian distribution of the correlation coefficients, the results are compared across stations, and an event is declared a detection if it appears at a statistically significant number of stations, following multinomial statistics. We have compared the detections from our single-station method to the detections found by Shelly et al. (2007) for the 2006 April 16 events and find a significant number of matching detections as well as many new detections that were not found using templates from known LFEs. We are working towards developing a sound statistical basis for event detection. This approach should improve our ability to detect LFEs within weak tremor signals where they are not already identified, and should be applicable to earthquake swarms and sequences in general.
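The coincidence requirement can be illustrated with a simplified binomial version of the station test (the abstract cites multinomial statistics; here `p_single` is a hypothetical per-station false-alarm probability and the independence assumption is mine):

```python
import math

def coincidence_p_value(n_stations, n_detecting, p_single):
    """Probability of >= n_detecting chance coincidences among n_stations,
    each firing independently with false-alarm rate p_single (binomial tail).
    A small value means the multi-station detection is unlikely to be noise."""
    return sum(
        math.comb(n_stations, k) * p_single**k * (1 - p_single) ** (n_stations - k)
        for k in range(n_detecting, n_stations + 1)
    )
```

Requiring this tail probability to fall below a chosen significance level is one way to decide how many stations must agree before declaring an LFE detection.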
Statistics of topography : multifractal approach to describe planetary topography
NASA Astrophysics Data System (ADS)
Landais, Francois; Schmidt, Frédéric; Lovejoy, Shaun
2016-04-01
In the last decades, a huge amount of topographic data has been obtained by several techniques (laser and radar altimetry, DTM…) for different bodies in the solar system. In each case, topographic fields exhibit an extremely high variability with details at every scale, from millimeters to thousands of kilometers. In our study, we investigate the statistical properties of topography. Our statistical approach is motivated by the well-known scaling behavior of topography that has been widely studied in the past. Indeed, scaling laws are strongly present in geophysical fields and can be studied using the fractal formalism. More precisely, we expect multifractal behavior in global topographic fields. This behavior reflects the high variability and intermittency observed in topographic fields, which cannot be generated by simple scaling models. In the multifractal formalism, each statistical moment exhibits a different scaling law characterized by a function called the moment scaling function. Previous studies conducted at regional scale demonstrated that topography presents multifractal statistics (Gagnon et al., 2006, NPG). We have obtained similar results on Mars (Landais et al., 2015) and more recently on different bodies in the solar system including the Moon, Venus and Mercury. We present the results of different multifractal approaches performed on global and regional bases and compare the fractal parameters from one body to another.
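A standard way to estimate the moment scaling function ξ(q) from a topographic profile is through structure functions: S_q(l) = ⟨|h(x+l) − h(x)|^q⟩ ~ l^ξ(q), with a separate log-log fit per moment order q. A generic sketch, not the authors' pipeline:

```python
import numpy as np

def moment_scaling(profile, qs=(1, 2, 3), lags=(1, 2, 4, 8, 16)):
    """Estimate the moment scaling exponents xi(q) of a 1D profile from
    log-log fits of structure functions S_q(l) = <|h(x+l) - h(x)|^q>.
    For a monofractal, xi(q) is linear in q; curvature signals multifractality."""
    h = np.asarray(profile, dtype=float)
    exponents = {}
    for q in qs:
        s = [np.mean(np.abs(h[l:] - h[:-l]) ** q) for l in lags]
        exponents[q] = np.polyfit(np.log(lags), np.log(s), 1)[0]
    return exponents
```

Whether ξ(q) bends away from a straight line in q is the basic diagnostic separating multifractal topography from simple scaling models.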
Statistical Approach to Quality Control of Large Thermodynamic Databases
NASA Astrophysics Data System (ADS)
Nyman, Henrik; Talonen, Tarja; Roine, Antti; Hupa, Mikko; Corander, Jukka
2012-10-01
In chemistry and engineering, thermodynamic databases are widely used to obtain the basic properties of pure substances or mixtures. Large and reliable databases are the basis of all thermodynamic modeling of complex chemical processes or systems. However, the effort needed in the establishment, maintenance, and management of a database increases exponentially along with the size and scope of the database. Therefore, we developed a statistical modeling approach to assist an expert in the evaluation and management process, which can pinpoint various types of erroneous records in a database. We have applied this method to investigate the enthalpy, entropy, and heat capacity characteristics in a large commercial database for approximately 25,000 chemical species. Our highly successful results show that a statistical approach is a valuable tool (1) for the management of such databases and (2) for creating enthalpy, entropy, and heat capacity estimates for species for which thermochemical data are not available.
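One simple way to pinpoint erroneous records of the kind described is a robust residual screen; the sketch below assumes a smooth property-versus-temperature trend and uses synthetic data (the paper's actual statistical model is not specified in the abstract).

```python
import numpy as np

def flag_outliers(x, y, z_crit=3.5):
    # Robust screen: fit a linear trend, then flag records whose
    # residual exceeds z_crit robust z-scores (median/MAD), mimicking
    # a statistical pre-screen an expert could then review.
    coef = np.polyfit(x, y, 1)
    resid = y - np.polyval(coef, x)
    med = np.median(resid)
    mad = np.median(np.abs(resid - med))
    z = 0.6745 * (resid - med) / mad
    return np.abs(z) > z_crit

rng = np.random.default_rng(2)
T = np.linspace(300, 1500, 200)                      # temperature grid, K
Cp = 25.0 + 0.01 * T + rng.normal(0, 0.2, T.size)    # synthetic heat capacity
Cp[50] += 15.0                                       # one corrupted record
mask = flag_outliers(T, Cp)
```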
Primordial statistical anisotropies: the effective field theory approach
NASA Astrophysics Data System (ADS)
Akbar Abolhasani, Ali; Akhshik, Mohammad; Emami, Razieh; Firouzjahi, Hassan
2016-03-01
In this work we present the effective field theory of primordial statistical anisotropies generated during anisotropic inflation involving a background U(1) gauge field. Besides the usual Goldstone boson associated with the breaking of time diffeomorphism, we have two additional Goldstone bosons associated with the breaking of spatial diffeomorphisms. We further identify these two new Goldstone bosons with the expected two transverse degrees of freedom of the U(1) gauge field fluctuations. Upon defining the appropriate unitary gauge, we present the most general quadratic action which respects the remnant symmetry in the unitary gauge. The interactions between the various Goldstone bosons lead to statistical anisotropy in the curvature perturbation power spectrum. Calculating the general results for power spectrum anisotropy, we recover the previously known results in specific models of anisotropic inflation. In addition, we present novel results for statistical anisotropy in models with non-trivial sound speed for inflaton fluctuations. We also identify the interaction which leads to birefringence-like effects in the anisotropic power spectrum, in which the speed of gauge field fluctuations depends on the direction of mode propagation and the two polarizations of the gauge field fluctuations contribute differently to statistical anisotropy. As another interesting application, our EFT approach naturally captures interactions generating parity-violating statistical anisotropies.
A new statistical approach to climate change detection and attribution
NASA Astrophysics Data System (ADS)
Ribes, Aurélien; Zwiers, Francis W.; Azaïs, Jean-Marc; Naveau, Philippe
2016-04-01
We propose here a new statistical approach to climate change detection and attribution that is based on additive decomposition and simple hypothesis testing. Most current statistical methods for detection and attribution rely on linear regression models where the observations are regressed onto expected response patterns to different external forcings. These methods do not use physical information provided by climate models regarding the expected response magnitudes to constrain the estimated responses to the forcings. Climate modelling uncertainty is difficult to take into account with regression-based methods and is almost never treated explicitly. As an alternative to this approach, our statistical model is based only on the additivity assumption; the proposed method does not regress observations onto expected response patterns. We introduce estimation and testing procedures based on likelihood maximization, and show that climate modelling uncertainty can easily be accounted for. Some discussion is provided on how to practically estimate the climate modelling uncertainty based on an ensemble of opportunity. Our approach is based on the "models are statistically indistinguishable from the truth" paradigm, where the difference between any given model and the truth has the same distribution as the difference between any pair of models, but other choices might also be considered. The properties of this approach are illustrated and discussed based on synthetic data. Lastly, the method is applied to the linear trend in global mean temperature over the period 1951-2010. Consistent with the last IPCC assessment report, we find that most of the observed warming over this period (+0.65 K) is attributable to anthropogenic forcings (+0.67 ± 0.12 K, 90 % confidence range), with a very limited contribution from natural forcings (-0.01 ± 0.02 K).
New Statistical Approaches to RHESSI Solar Flare Imaging
NASA Astrophysics Data System (ADS)
Schwartz, Richard A.; Benvenuto, F.; Massone, A.; Piana, M.; Sorrentino, A.
2012-05-01
We present two statistical approaches to image reconstruction from RHESSI measurements. The first approach implements maximum likelihood by means of an expectation-maximization algorithm resembling the Lucy-Richardson method. The second approach is genuinely Bayesian in that it introduces a prior probability distribution encoding information known a priori about the flaring source. The posterior distribution is computed by means of an importance sampling Monte Carlo technique. Further, this approach will be extended to a filtering method in which the posterior distribution at a specific energy or time interval is used as a prior for the next interval. Finally, we will also study the possibility of adapting this method to multi-scale reconstruction, exploiting the different resolution powers provided by the nine RHESSI collimators.
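A minimal sketch of the first approach, an expectation-maximization iteration of the Lucy-Richardson type, on a toy 1-D blurring problem; the response matrix and source below are invented for illustration, not RHESSI quantities.

```python
import numpy as np

def richardson_lucy(d, P, n_iter=5000):
    # Expectation-maximization for Poisson data, the Richardson-Lucy
    # scheme: f <- f * [P^T (d / (P f))] / (P^T 1). Iterates stay
    # nonnegative and increase the Poisson likelihood.
    f = np.ones(P.shape[1])
    norm = P.T @ np.ones(P.shape[0])
    for _ in range(n_iter):
        f = f * (P.T @ (d / (P @ f))) / norm
    return f

# Toy problem: blur a two-spike source with a known response, recover it.
P = np.array([[0.6, 0.3, 0.1, 0.0],
              [0.3, 0.4, 0.3, 0.0],
              [0.1, 0.3, 0.4, 0.2],
              [0.0, 0.1, 0.3, 0.6]])
truth = np.array([5.0, 0.0, 0.0, 3.0])
d = P @ truth                      # noiseless "measurements"
est = richardson_lucy(d, P)
```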
Advanced Approach of Multiagent Based Buoy Communication
Gricius, Gediminas; Drungilas, Darius; Andziulis, Arunas; Dzemydiene, Dale; Voznak, Miroslav; Kurmis, Mindaugas; Jakovlev, Sergej
2015-01-01
Usually, a hydrometeorological information system is faced with large data flows, but the data levels are often excessive, depending on the observed region of the water. The paper presents advanced buoy communication technologies based on multiagent interaction and data exchange between several monitoring system nodes. The proposed management of buoy communication is based on a clustering algorithm, which enables the performance of the hydrometeorological information system to be enhanced. The experiment is based on the design and analysis of an inexpensive but reliable Baltic Sea autonomous monitoring network (buoys), which would be able to continuously monitor and collect temperature, waviness, and other required data. The proposed approach of multiagent-based buoy communication enables all the data from the coastal-based station to be monitored with limited transmission speed by setting different tasks for the agent-based buoy system according to the clustering information. PMID:26345197
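The paper's clustering algorithm is not specified in the abstract; as a generic illustration of clustering buoy readings so that transmission tasks can be assigned per cluster rather than per buoy, here is a minimal k-means sketch on synthetic (temperature, wave height) data.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    # Minimal k-means: alternate assigning points to the nearest
    # center and recomputing centers as cluster means.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

rng = np.random.default_rng(3)
# Synthetic (temperature °C, wave height m) readings from two regimes
X = np.vstack([rng.normal([8.0, 0.5], 0.1, (50, 2)),
               rng.normal([14.0, 2.0], 0.1, (50, 2))])
labels, centers = kmeans(X, 2)
```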
Defining statistical perceptions with an empirical Bayesian approach
NASA Astrophysics Data System (ADS)
Tajima, Satohiro
2013-04-01
Extracting statistical structures (including textures or contrasts) from a natural stimulus is a central challenge in both biological and engineering contexts. This study interprets the process of statistical recognition in terms of hyperparameter estimations and free-energy minimization procedures with an empirical Bayesian approach. This mathematical interpretation resulted in a framework for relating physiological insights in animal sensory systems to the functional properties of recognizing stimulus statistics. We applied the present theoretical framework to two typical models of natural images that are encoded by a population of simulated retinal neurons, and demonstrated that the resulting cognitive performances could be quantified with the Fisher information measure. The current enterprise yielded predictions about the properties of human texture perception, suggesting that the perceptual resolution of image statistics depends on visual field angles, internal noise, and neuronal information processing pathways, such as the magnocellular, parvocellular, and koniocellular systems. Furthermore, the two conceptually similar natural-image models were found to yield qualitatively different predictions, striking a note of warning against confusing the two models when describing a natural image.
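The Fisher information measure used above to quantify cognitive performance has a simple closed form in the Gaussian case, which the sketch below illustrates; the sample size and noise level are arbitrary, not values from the study.

```python
def fisher_information_gaussian(sigma, n):
    # Fisher information for the mean of n iid Gaussian samples:
    # I = n / sigma^2. The Cramer-Rao bound says no unbiased
    # estimator of that statistic can have variance below 1 / I.
    return n / sigma**2

# Perceptual resolution of an image statistic scales with sqrt(I):
# more samples (neurons) or less internal noise sharpen discrimination.
I = fisher_information_gaussian(sigma=2.0, n=100)
crlb = 1.0 / I   # minimal achievable estimator variance
```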
Advances in assessing geomorphic plausibility in statistical susceptibility modelling
NASA Astrophysics Data System (ADS)
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas
2014-05-01
The quality, reliability and applicability of landslide susceptibility maps are regularly deduced directly by interpreting quantitative model performance measures. These quantitative estimates are usually calculated for an independent test sample of a landslide inventory. Numerous studies demonstrate that totally unbiased landslide inventories are rarely available. We assume that such biases are also inherent in the test sample used to quantitatively validate the models. Therefore we suppose that the explanatory power of statistical performance measures is limited by the quality of the inventory used to calculate these statistics. To investigate this assumption, we generated and validated 16 statistical susceptibility models by using two landslide inventories of differing quality for the Rhenodanubian Flysch zone of Lower Austria (1,354 km²). The ALS-based (Airborne Laser Scan) inventory (n=6,218) was mapped purposely for susceptibility modelling from a high-resolution hillshade and exhibits a high positional accuracy. The less accurate building ground register (BGR; n=681) provided by the Geological Survey of Lower Austria represents reported damaging events and shows a substantially lower completeness. Both inventories exhibit differing systematic biases regarding land cover. For instance, due to human impact on the visibility of geomorphic structures (e.g. planation), few ALS landslides could be mapped on settlements and pastures (ALS-mapping bias). In contrast, damaging events were frequently reported for settlements and pastures (BGR-report bias). Susceptibility maps were calculated by applying four multivariate classification methods, namely generalized linear model, generalized additive model, random forest and support vector machine, separately for both inventories and two sets of explanatory variables (with and without land cover). Quantitative validation was performed by calculating the area under the receiver operating characteristic curve (AUROC).
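The AUROC used for quantitative validation can be computed directly from its rank-statistic definition; the sketch below uses synthetic susceptibility scores, not the study's data.

```python
import numpy as np

def auroc(scores, labels):
    # Rank-based AUROC: the probability that a randomly chosen
    # positive (landslide) cell scores higher than a randomly chosen
    # negative cell, equivalent to the Mann-Whitney U statistic.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(4)
neg = rng.normal(0.0, 1.0, 500)   # susceptibility scores, stable cells
pos = rng.normal(1.5, 1.0, 500)   # scores at mapped landslide cells
scores = np.concatenate([neg, pos])
labels = np.concatenate([np.zeros(500), np.ones(500)]).astype(int)
a = auroc(scores, labels)
```

Note that, as the abstract argues, this number is only as trustworthy as the inventory supplying the positive labels: a biased test sample biases the AUROC itself.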
A statistical approach to optimizing concrete mixture design.
Ahmad, Shamsad; Alghamdi, Saeid A
2014-01-01
A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405
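The workflow, a 3³ full factorial design followed by a polynomial regression fit, can be sketched as follows. The factor levels match the abstract, but the response-surface coefficients and noise below are invented for illustration only.

```python
import numpy as np
from itertools import product

# 3^3 full factorial design over the three factors from the abstract
wc = [0.38, 0.43, 0.48]          # water/cementitious materials ratio
cm = [350, 375, 400]             # cementitious content, kg/m^3
fa = [0.35, 0.40, 0.45]          # fine/total aggregate ratio
design = np.array(list(product(wc, cm, fa)))   # 27 trial mixtures

# Hypothetical response surface for compressive strength (MPa):
# strength falls with w/c and rises with content and fine fraction.
# Coefficients are invented purely to exercise the fitting step.
rng = np.random.default_rng(5)
y = (90 - 80 * design[:, 0] + 0.05 * design[:, 1]
     + 10 * design[:, 2] + rng.normal(0, 0.5, 27))

# First-order polynomial regression model, fit by least squares
X = np.column_stack([np.ones(27), design])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because the factorial design is balanced, the three factors are uncorrelated in `X` and their fitted coefficients can be read independently; a quadratic model would simply add squared and interaction columns.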
Statistical Methods Handbook for Advanced Gas Reactor Fuel Materials
J. J. Einerson
2005-05-01
Fuel materials such as kernels, coated particles, and compacts are being manufactured for experiments simulating service in the next generation of high temperature gas reactors. These must meet predefined acceptance specifications. Many tests are performed for quality assurance, and many of these correspond to criteria that must be met with specified confidence, based on random samples. This report describes the statistical methods to be used. The properties of the tests are discussed, including the risk of false acceptance, the risk of false rejection, and the assumption of normality. Methods for calculating sample sizes are also described.
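As one concrete example of a criterion that must be met with specified confidence from a random sample, the classical zero-failure acceptance-sampling calculation is sketched below; this is a standard result, not necessarily the report's exact method.

```python
import math

def zero_failure_sample_size(p_max, confidence):
    # Smallest n such that observing zero defects in n randomly
    # sampled particles demonstrates a defect fraction <= p_max at
    # the given confidence: require (1 - p_max)^n <= 1 - confidence.
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_max))

# e.g. demonstrating <= 1% defective coated particles at 95% confidence
n = zero_failure_sample_size(p_max=0.01, confidence=0.95)
```

Allowing a small number of observed failures, or relaxing the normality assumption for variables-type tests, changes the required n; the binomial tail then replaces the simple power above.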
An advanced approach to reactivity rating.
Kossoy, A; Benin, A; Akhmetshin, Yu
2005-02-14
Reactive hazards remain a significant safety challenge in the chemical industry despite continual attention devoted to this problem. The application of various criteria recommended by the guidelines for assessment of reactive hazards often yields unsafe results. The main origins of such failures are as follows: (a) reactivity is considered an inherent property of a compound; (b) some appropriate criteria are determined using overly simple methods that cannot reveal potential hazards properly. Four well-known hazard indicators are analyzed in the paper: time to a certain conversion limit (TCL); adiabatic time to maximum rate (TMR); adiabatic temperature rise; and the NFPA reactivity rating number, Nr. It was ascertained that they can be safely used for preliminary assessment of reactive hazards provided that: (a) the selected indicator is appropriate for the specific conditions of a process; (b) the indicators have been determined using pertinent methods. The applicability limits for every indicator were determined, and an advanced kinetics-based simulation approach, which allows reliable determination of the indicators, is proposed. The technique of applying this approach is illustrated by two practical examples. PMID:15721524
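As an illustration of one of these indicators, the classical zero-order estimate of the adiabatic time to maximum rate can be computed directly; the substance parameters below are hypothetical, and real assessments would use the kinetics-based simulation the paper advocates rather than this simplified formula.

```python
def tmr_adiabatic(T0, Ea, q0, cp):
    # Classical zero-order adiabatic estimate of the time to maximum
    # rate: TMR = cp * R * T0^2 / (q0 * Ea), where q0 is the specific
    # heat release rate (W/kg) at the starting temperature T0 (K),
    # Ea the activation energy (J/mol), cp the specific heat (J/kg/K).
    R = 8.314  # gas constant, J/(mol K)
    return cp * R * T0**2 / (q0 * Ea)

# Hypothetical substance, values chosen for illustration only
tmr = tmr_adiabatic(T0=400.0, Ea=1.0e5, q0=2.0, cp=2000.0)  # seconds
```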
Advanced Safeguards Approaches for New Fast Reactors
Durst, Philip C.; Therios, Ike; Bean, Robert; Dougan, A.; Boyer, Brian; Wallace, Rick L.; Ehinger, Michael H.; Kovacic, Don N.; Tolk, K.
2007-12-15
This third report in the series reviews possible safeguards approaches for new fast reactors in general, and the ABR in particular. Fast-neutron spectrum reactors have been used since the early 1960s on an experimental and developmental level, generally with fertile blanket fuels to “breed” nuclear fuel such as plutonium. Whether the reactor is designed to breed plutonium, or transmute and “burn” actinides, depends mainly on the design of the reactor neutron reflector and on whether the blanket fuel is “fertile” or suitable for transmutation. However, the safeguards issues are very similar, since they pertain mainly to the receipt, shipment and storage of fresh and spent plutonium and actinide-bearing “TRU” fuel. For these reasons, the design of existing fast reactors and details concerning how they have been safeguarded were studied in developing advanced safeguards approaches for the new fast reactors. In this regard, the design of the Experimental Breeder Reactor-II (EBR-II) at the Idaho National Laboratory (INL) was of interest, because it was designed as a collocated fast reactor with a pyrometallurgical reprocessing and fuel fabrication line, a design option being considered for the ABR. Similarly, the design of the Fast Flux Test Facility (FFTF) on the Hanford Site was studied, because it was a successful prototype fast reactor that ran for two decades to evaluate fuels and the design for commercial-scale fast reactors.
A Flexible Approach for the Statistical Visualization of Ensemble Data
Potter, K.; Wilson, A.; Bremer, P.; Williams, Dean N.; Pascucci, V.; Johnson, C.
2009-09-29
Scientists are increasingly moving towards ensemble data sets to explore relationships present in dynamic systems. Ensemble data sets combine spatio-temporal simulation results generated using multiple numerical models, sampled input conditions and perturbed parameters. While ensemble data sets are a powerful tool for mitigating uncertainty, they pose significant visualization and analysis challenges due to their complexity. We present a collection of overview and statistical displays linked through a high level of interactivity to provide a framework for gaining key scientific insight into the distribution of the simulation results as well as the uncertainty associated with the data. In contrast to methods that present large amounts of diverse information in a single display, we argue that combining multiple linked statistical displays yields a clearer presentation of the data and facilitates a greater level of visual data analysis. We demonstrate this approach using driving problems from climate modeling and meteorology and discuss generalizations to other fields.
Statistically Based Approach to Broadband Liner Design and Assessment
NASA Technical Reports Server (NTRS)
Nark, Douglas M. (Inventor); Jones, Michael G. (Inventor)
2016-01-01
A broadband liner design optimization includes utilizing in-duct attenuation predictions with a statistical fan source model to obtain optimum impedance spectra over a number of flow conditions for one or more liner locations in a bypass duct. The predicted optimum impedance information is then used with acoustic liner modeling tools to design liners having impedance spectra that most closely match the predicted optimum values. Design selection is based on an acceptance criterion that provides the ability to apply increasing weighting to specific frequencies and/or operating conditions. One or more broadband design approaches are utilized to produce a broadband liner that targets a full range of frequencies and operating conditions.
Statistical Approaches for the Study of Cognitive and Brain Aging
Chen, Huaihou; Zhao, Bingxin; Cao, Guanqun; Proges, Eric C.; O'Shea, Andrew; Woods, Adam J.; Cohen, Ronald A.
2016-01-01
Neuroimaging studies of cognitive and brain aging often yield massive datasets that create many analytic and statistical challenges. In this paper, we discuss and address several limitations in the existing work. (1) Linear models are often used to model the age effects on neuroimaging markers, which may be inadequate in capturing the potential nonlinear age effects. (2) Marginal correlations are often used in brain network analysis, which are not efficient in characterizing a complex brain network. (3) Due to the challenge of high-dimensionality, only a small subset of the regional neuroimaging markers is considered in a prediction model, which could miss important regional markers. To overcome those obstacles, we introduce several advanced statistical methods for analyzing data from cognitive and brain aging studies. Specifically, we introduce semiparametric models for modeling age effects, graphical models for brain network analysis, and penalized regression methods for selecting the most important markers in predicting cognitive outcomes. We illustrate these methods using the healthy aging data from the Active Brain Study. PMID:27486400
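A minimal sketch of the penalized regression idea mentioned above: lasso via coordinate descent, selecting a sparse subset of synthetic "regional markers" that predict an outcome. The data, true coefficients, and tuning parameter are illustrative assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    # Coordinate-descent lasso for (1/2n)||y - Xb||^2 + lam*||b||_1:
    # soft-thresholding drives most coefficients exactly to zero,
    # i.e. it selects a sparse subset of the candidate markers.
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r / n
            z = (X[:, j] @ X[:, j]) / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 20))            # 20 candidate regional markers
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.5, 200)
beta = lasso_cd(X, y, lam=0.1)            # only markers 0 and 3 survive
```

In practice the penalty `lam` is chosen by cross-validation, trading off sparsity against predictive accuracy for the cognitive outcome.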
STATISTICS OF DARK MATTER HALOS FROM THE EXCURSION SET APPROACH
Lapi, A.; Salucci, P.; Danese, L.
2013-08-01
We exploit the excursion set approach in integral formulation to derive novel, accurate analytic approximations of the unconditional and conditional first crossing distributions for random walks with uncorrelated steps and general shapes of the moving barrier; we find the corresponding approximations of the unconditional and conditional halo mass functions for cold dark matter (DM) power spectra to represent very well the outcomes of state-of-the-art cosmological N-body simulations. In addition, we apply these results to derive, and confront with simulations, other quantities of interest in halo statistics, including the rates of halo formation and creation, the average halo growth history, and the halo bias. Finally, we discuss how our approach and main results change when considering random walks with correlated instead of uncorrelated steps, and warm instead of cold DM power spectra.
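The unconditional first-crossing distribution for random walks with uncorrelated steps and a constant barrier, the quantity the analytic approximations target, can be estimated by direct Monte Carlo; the barrier height, step count, and walk count below are arbitrary illustration values.

```python
import numpy as np

def first_crossing(n_walks, n_steps, barrier, seed=0):
    # Monte Carlo first-crossing statistics for random walks with
    # uncorrelated Gaussian steps and a constant barrier: the
    # discrete building block of excursion-set halo statistics
    # (step index plays the role of the variance S of the density field).
    rng = np.random.default_rng(seed)
    paths = np.cumsum(rng.normal(size=(n_walks, n_steps)), axis=1)
    crossed = paths >= barrier
    # index of first up-crossing, or -1 if the walk never crosses
    return np.where(crossed.any(1), crossed.argmax(1), -1)

first = first_crossing(20000, 100, barrier=5.0)
frac = (first >= 0).mean()   # fraction of walks that ever cross
```

For a constant barrier this fraction approaches the classic reflection-principle value 2[1 − Φ(b/√S)]; a moving barrier simply replaces the constant `barrier` with an array over steps.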
The statistical multifragmentation model: Origins and recent advances
NASA Astrophysics Data System (ADS)
Donangelo, R.; Souza, S. R.
2016-07-01
We review the Statistical Multifragmentation Model (SMM), which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions, and heat capacity, among others. Some experimental measurements of the caloric curve confirmed the SMM predictions made over 10 years before, leading to a surge of interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic liquid-drop picture employed in the SMM. This led to a series of improvements in the SMM, in particular to a more careful choice of nuclear masses and energy densities, especially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of the SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of the system size and excitation energy to attain maximum yields.
An alternative approach to advancing resuscitation science.
Kern, Karl B; Valenzuela, Terence D; Clark, Lani L; Berg, Robert A; Hilwig, Ronald W; Berg, Marc D; Otto, Charles W; Newburn, Daniel; Ewy, Gordon A
2005-03-01
Stagnant survival rates in out-of-hospital cardiac arrest remain a great impetus for advancing resuscitation science. International resuscitation guidelines, with all their advantages for standardizing resuscitation therapeutic protocols, can be difficult to change. A formalized evidence-based process has been adopted by the International Liaison Committee on Resuscitation (ILCOR) for formulating such guidelines. Currently, randomized clinical trials are considered optimal evidence, and very few major changes in the Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care are made without them. An alternative approach is to give externally controlled clinical trials more weight in guideline formulation and resuscitation protocol adoption. In Tucson, Arizona (USA), the Fire Department cardiac arrest database has revealed a number of resuscitation issues. These include a poor bystander CPR rate, a lack of response to initial defibrillation after prolonged ventricular fibrillation, and substantial time without chest compressions during the resuscitation effort. A local change in our previous resuscitation protocols was instituted based upon this historical database information. PMID:15733752
A feature refinement approach for statistical interior CT reconstruction
NASA Astrophysics Data System (ADS)
Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong
2016-07-01
Interior tomography is clinically desired to reduce the radiation dose rendered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in the conventional total-variation (TV)-minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of projection data, the objective function is built under the criterion of penalized weighted least-squares with a TV penalty (PWLS-TV). In the implementation of the proposed method, an interior-projection-extrapolation-based FBP reconstruction is first used as the initial guess to mitigate truncation artifacts and also provide an extended field of view. Moreover, an interior feature refinement step, as an important processing operation, is performed after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo Micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than the other conventional methods in suppressing noise, reducing truncation and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation of interior tomography under truncated projection measurements. PMID:27362527
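A 1-D sketch of the PWLS-TV idea: penalized weighted least-squares with a smoothed TV penalty, minimized by plain gradient (steepest) descent. The weights, smoothing parameter, and test signal are illustrative, and the interior-CT specifics (projection model, feature-refinement descriptor) are omitted.

```python
import numpy as np

def tv_denoise_1d(d, w, lam, n_iter=500, step=0.1, eps=1e-2):
    # Minimize  sum_i w_i (f_i - d_i)^2 + lam * sum_i |f_{i+1} - f_i|
    # with the TV term smoothed as sqrt(df^2 + eps) so the objective
    # is differentiable; gradient descent with a fixed step.
    f = d.copy()
    for _ in range(n_iter):
        grad_fid = 2.0 * w * (f - d)          # weighted data-fidelity term
        df = np.diff(f)
        t = df / np.sqrt(df**2 + eps)          # d/d(df) of smoothed |df|
        grad_tv = np.zeros_like(f)
        grad_tv[:-1] -= t
        grad_tv[1:] += t
        f = f - step * (grad_fid + lam * grad_tv)
    return f

rng = np.random.default_rng(7)
truth = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise constant
d = truth + rng.normal(0, 0.2, 100)                   # noisy "data"
w = np.ones(100)      # uniform statistical weights for this sketch
f = tv_denoise_1d(d, w, lam=0.5)
```

In the actual method the fidelity term compares projections, the weights encode the projection-data noise statistics, and the feature refinement step re-injects fine structure after each such minimization pass.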
Statistical physics approach to earthquake occurrence and forecasting
NASA Astrophysics Data System (ADS)
de Arcangelis, Lucilla; Godano, Cataldo; Grasso, Jean Robert; Lippiello, Eugenio
2016-04-01
There is striking evidence that the dynamics of the Earth's crust is controlled by a wide variety of mutually dependent mechanisms acting at different spatial and temporal scales. The interplay of these mechanisms produces instabilities in the stress field, leading to abrupt energy releases, i.e., earthquakes. As a consequence, the evolution towards instability before a single event is very difficult to monitor. On the other hand, collective behavior in stress transfer and relaxation within the Earth's crust leads to emergent properties described by stable phenomenological laws for a population of many earthquakes in the size, time and space domains. This observation has stimulated a statistical mechanics approach to earthquake occurrence, applying ideas and methods such as scaling laws, universality, fractal dimension, and the renormalization group to characterize the physics of earthquakes. In this review we first present a description of the phenomenological laws of earthquake occurrence which represent the frame of reference for a variety of statistical mechanical models, ranging from the spring-block model to more complex fault models. Next, we discuss the problem of seismic forecasting in the general framework of stochastic processes, where seismic occurrence can be described as a branching process implementing space-time-energy correlations between earthquakes. In this context we show how correlations originate from dynamical scaling relations between time and energy, able to account for universality and provide a unifying description for the phenomenological power laws. Then we discuss how branching models can be implemented to forecast the temporal evolution of the earthquake occurrence probability and allow one to discriminate among different physical mechanisms responsible for earthquake triggering. In particular, the forecasting problem will be presented in a rigorous mathematical framework, discussing the relevance of the processes acting at different temporal scales for different
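Branching-model forecasts of the kind reviewed here build on the modified Omori law for aftershock rates, r(t) ∝ (t + c)^(−p); aftershock times can be drawn from it by inverse-transform sampling, as sketched below with arbitrary parameter values.

```python
import numpy as np

def sample_omori(n, c, p, t_max, seed=0):
    # Inverse-transform sampling of aftershock times on [0, t_max]
    # from the modified Omori rate r(t) ~ (t + c)^(-p); the p == 1
    # case needs the logarithmic form of the integrated rate.
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    if p == 1.0:
        return (c + t_max) ** u * c ** (1 - u) - c
    a = c ** (1 - p)
    b = (c + t_max) ** (1 - p)
    return (a + u * (b - a)) ** (1.0 / (1 - p)) - c

# Typical aftershock sequence shape: strong clustering at early times
times = sample_omori(10000, c=0.1, p=1.1, t_max=100.0, seed=3)
```

A full ETAS-style branching simulation would attach such a sequence to every event, with offspring counts scaled by the parent magnitude, and sum the resulting rates to forecast occurrence probability.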
Thermodynamics, reversibility and Jaynes' approach to statistical mechanics
NASA Astrophysics Data System (ADS)
Parker, Daniel N.
This dissertation contests David Albert's recent arguments that the proposition that the universe began in a particularly low entropy state (the "past hypothesis") is necessary and sufficient to ground the thermodynamic asymmetry against the reversibility objection, which states that the entropy of thermodynamic systems was previously larger than it is now. In turn, it argues that this undermines Albert's suggestion that the past hypothesis can underwrite other temporal asymmetries such as those of records and causation. This thesis thus concerns the broader philosophical problem of understanding the interrelationships among the various temporal asymmetries that we find in the world, such as those of thermodynamic phenomena, causation, human agency and inference. The position argued for is that the thermodynamic asymmetry is nothing more than an inferential asymmetry, reflecting a distinction between the inferences made towards the past and the future. As such, it cannot be used to derive a genuine physical asymmetry. At most, an inferential asymmetry can provide evidence for an asymmetry not itself forthcoming from the formalism of statistical mechanics. The approach offered here utilises an epistemic, information-theoretic interpretation of thermodynamics applied to individual "branch" systems in order to ground irreversible thermodynamic behaviour (Branch systems are thermodynamic systems quasi-isolated from their environments for short periods of time). I argue that such an interpretation solves the reversibility objection by treating thermodynamics as part of a more general theory of statistical inference supported by information theory and developed in the context of thermodynamics by E.T. Jaynes. It is maintained that by using an epistemic interpretation of probability (where the probabilities reflect one's knowledge about a thermodynamic system rather than a property of the system itself), the reversibility objection can be disarmed by severing the link
Multilayer Approach for Advanced Hybrid Lithium Battery.
Ming, Jun; Li, Mengliu; Kumar, Pushpendra; Li, Lain-Jong
2016-06-28
Conventional intercalated rechargeable batteries have shown their capacity limit, and the development of an alternative battery system with higher capacity is strongly needed for sustainable electric vehicles and hand-held devices. Herein, we introduce a feasible and scalable multilayer approach to fabricate a promising hybrid lithium battery with superior capacity and multiple voltage plateaus. A sulfur-rich electrode (90 wt % S) is covered by a dual layer of graphite/Li4Ti5O12, where the active materials S and Li4Ti5O12 can both take part in redox reactions and thus deliver a high capacity of 572 mAh gcathode(-1) (vs the total mass of the electrode) or 1866 mAh gs(-1) (vs the mass of sulfur) at 0.1C (with the definition of 1C = 1675 mA gs(-1)). The battery shows unique voltage plateaus at 2.35 and 2.1 V, contributed from S, and 1.55 V from Li4Ti5O12. A high rate capability of 566 mAh gcathode(-1) at 0.25C and 376 mAh gcathode(-1) at 1C, with durable cycling over 100 cycles, can be achieved. Operando Raman and electron microscopy analyses confirm that the graphite/Li4Ti5O12 layer slows the dissolution/migration of polysulfides, thereby giving rise to higher sulfur utilization and slower capacity decay. This advanced hybrid battery with a multilayer concept for marrying different voltage plateaus from various electrode materials opens a way of providing tunable capacity and multiple voltage platforms for energy device applications. PMID:27268064
Statistical approaches and software for clustering islet cell functional heterogeneity
Wills, Quin F.; Boothe, Tobias; Asadi, Ali; Ao, Ziliang; Warnock, Garth L.; Kieffer, Timothy J.
2016-01-01
Worldwide efforts are underway to replace or repair lost or dysfunctional pancreatic β-cells to cure diabetes. However, it is unclear what the final product of these efforts should be, as β-cells are thought to be heterogeneous. To enable the analysis of β-cell heterogeneity in an unbiased and quantitative way, we developed model-free and model-based statistical clustering approaches, and created new software called TraceCluster. Using an example data set, we illustrate the utility of these approaches by clustering dynamic intracellular Ca2+ responses to high glucose in ∼300 simultaneously imaged single islet cells. Using feature extraction from the Ca2+ traces on this reference data set, we identified 2 distinct populations of cells with β-like responses to glucose. To the best of our knowledge, this report represents the first unbiased cluster-based analysis of human β-cell functional heterogeneity based on simultaneous recordings. We hope that the approaches and tools described here will be helpful for those studying heterogeneity in primary islet cells, as well as excitable cells derived from embryonic stem cells or induced pluripotent cells. PMID:26909740
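TraceCluster itself is not reproduced here; as a minimal sketch of the general workflow the abstract describes (extract features from per-cell Ca2+ traces, then cluster them), the code below summarizes synthetic traces by two simple features and groups them with plain Lloyd's k-means. The trace shapes, feature choices, and parameters are all invented for illustration.

```python
import random

def extract_features(trace):
    """Two simple per-cell features: mean signal level and rise above baseline."""
    return (sum(trace) / len(trace), max(trace) - trace[0])

def kmeans2(points, iters=25):
    """Minimal Lloyd's k-means (k = 2) on 2-D feature points.
    Centers start at the points with the smallest and largest rise feature."""
    centers = [min(points, key=lambda p: p[1]), max(points, key=lambda p: p[1])]
    for _ in range(iters):
        clusters = ([], [])
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            clusters[d.index(min(d))].append(p)
        centers = [tuple(sum(v) / len(v) for v in zip(*cl)) if cl else centers[k]
                   for k, cl in enumerate(clusters)]
    return clusters

# Synthetic "imaging" data: 10 glucose-responsive cells (ramping signal), 10 flat cells.
rng = random.Random(1)
traces = [[1.0 + 0.05 * rng.random() + (3.0 if i < 10 else 0.0) * t / 30
           for t in range(30)] for i in range(20)]
clusters = kmeans2([extract_features(tr) for tr in traces])
```

With cleanly separated responders and non-responders, the two clusters recover the two populations exactly.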
Urban pavement surface temperature. Comparison of numerical and statistical approach
NASA Astrophysics Data System (ADS)
Marchetti, Mario; Khalifa, Abderrahmen; Bues, Michel; Bouilloud, Ludovic; Martin, Eric; Chancibaut, Katia
2015-04-01
The forecast of pavement surface temperature is very specific in the context of urban winter maintenance, to manage snow plowing and salting of roads. Such forecasts mainly rely on numerical models based on a description of the energy balance between the atmosphere, the buildings and the pavement, with a canyon configuration. Nevertheless, there is a specific need in the physical description and the numerical implementation of the traffic in the energy flux balance. This traffic was originally considered as a constant. Many changes were made in a numerical model to describe as accurately as possible the traffic effects on this urban energy balance, such as tire friction, the pavement-air exchange coefficient, and the net infrared flux balance. Some experiments based on infrared thermography and radiometry were then conducted to quantify the effect of traffic on urban pavement surfaces. Based on meteorological data, corresponding pavement temperature forecasts were calculated and compared with field measurements. Results indicated a good agreement between the forecasts from the numerical model based on this energy balance approach and the measurements. A complementary forecast approach based on principal component analysis (PCA) and partial least-squares regression (PLS) was also developed, with data from thermal mapping using infrared radiometry. The forecast of pavement surface temperature from air temperature was obtained in the specific case of the urban configuration, with traffic considered in the measurements used for the statistical analysis. A comparison between results from the numerical model based on the energy balance and from PCA/PLS was then conducted, indicating the advantages and limits of each approach.
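The PCA/PLS machinery in the study needs a multivariate thermal-mapping dataset; as a minimal stand-in for the statistical-forecast idea (predicting pavement surface temperature from air temperature), here is a one-predictor least-squares fit. The synthetic data, the 1.2 gain and 2.0 offset are illustrative assumptions, not values from the paper.

```python
import random

def linear_fit(x, y):
    """Ordinary least squares for y ≈ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Synthetic winter data: surface temperature tracks air temperature with an
# offset plus measurement noise (all values invented).
rng = random.Random(0)
air = [rng.uniform(-10.0, 5.0) for _ in range(200)]
surface = [1.2 * t + 2.0 + rng.gauss(0.0, 0.3) for t in air]
a, b = linear_fit(air, surface)
forecast = a * (-5.0) + b   # predicted surface temperature for -5 °C air
```

A real PLS model would replace the single predictor with the full set of route and meteorological variables.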
Masked Areas in Shear Peak Statistics: A Forward Modeling Approach
NASA Astrophysics Data System (ADS)
Bard, D.; Kratochvil, J. M.; Dawson, W.
2016-03-01
The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint on models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels, etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST- and DES-scale surveys. By reconstructing maps of aperture mass, the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure between the data and simulated volumes, due to cosmic variance.
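As a toy version of the counting step in peak statistics (not the paper's aperture-mass reconstruction), the sketch below counts local maxima above a threshold on a pixelized map while honouring a boolean mask; in a forward-modeling analysis the same mask would be applied to the simulated maps. The grid values and the mask are invented.

```python
def peak_count(grid, mask, threshold):
    """Count pixels above `threshold` that exceed all their unmasked 8-neighbours.
    Masked pixels can neither host peaks nor veto a neighbouring peak."""
    ny, nx = len(grid), len(grid[0])
    peaks = 0
    for i in range(ny):
        for j in range(nx):
            if mask[i][j] or grid[i][j] < threshold:
                continue
            nbrs = [grid[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di or dj) and 0 <= i + di < ny and 0 <= j + dj < nx
                    and not mask[i + di][j + dj]]
            peaks += all(grid[i][j] > v for v in nbrs)
    return peaks

# A 5x5 toy map with two "peaks"; one falls under a bright-star mask.
shear_map = [[0.0] * 5 for _ in range(5)]
shear_map[1][1], shear_map[3][3] = 5.0, 4.0
mask = [[False] * 5 for _ in range(5)]
mask[3][3] = True
```

Applying identical masks to data and simulations keeps the peak counts directly comparable.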
Statistical approach to the analysis of cell desynchronization data
NASA Astrophysics Data System (ADS)
Milotti, Edoardo; Del Fabbro, Alessio; Dalla Pellegrina, Chiara; Chignola, Roberto
2008-07-01
Experimental measurements on semi-synchronous tumor cell populations show that after a few cell cycles they desynchronize completely, and this desynchronization reflects the intercell variability of cell-cycle duration. It is important to identify the sources of randomness that desynchronize a population of cells living in a homogeneous environment: for example, being able to reduce randomness and induce synchronization would aid in targeting tumor cells with chemotherapy or radiotherapy. Here we describe a statistical approach to the analysis of the desynchronization measurements that is based on minimal modeling hypotheses, and can be derived from simple heuristics. We use the method to analyze existing desynchronization data and to draw conclusions on the randomness of cell growth and proliferation.
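The heuristic behind the analysis can be illustrated with a toy simulation (all parameter values assumed, not taken from the paper): if each cell's cycle duration fluctuates independently around a mean, an initially synchronized population spreads out, and the standard deviation of division times grows roughly as the square root of the number of completed cycles.

```python
import random
import statistics

def division_times(n_cells, n_cycles, mean_t, sd_t, seed=0):
    """Completion time of the n-th cycle for each cell of an initially
    synchronized population with independently fluctuating cycle durations."""
    rng = random.Random(seed)
    return [sum(max(0.0, rng.gauss(mean_t, sd_t)) for _ in range(n_cycles))
            for _ in range(n_cells)]

# Assumed values: 20 h mean cycle, 2 h cell-to-cell spread, 8 cycles.
times = division_times(n_cells=2000, n_cycles=8, mean_t=20.0, sd_t=2.0)
spread = statistics.pstdev(times)   # expected near 2.0 * sqrt(8) ≈ 5.66 h
```

After enough cycles the division times of different cells overlap completely, which is the desynchronization the measurements quantify.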
Rate-equation approach to atomic-laser light statistics
Chusseau, Laurent; Arnaud, Jacques; Philippe, Fabrice
2002-11-01
We consider three- and four-level atomic lasers that are either incoherently (unidirectionally) or coherently (bidirectionally) pumped, the single-mode cavity being resonant with the laser transition. The intracavity Fano factor and the photocurrent spectral density are evaluated on the basis of rate equations. According to that approach, fluctuations are caused by jumps in active and detecting atoms. The algebra is simple. Whenever a comparison is made, the expressions obtained coincide with the previous results. The conditions under which the output light exhibits sub-Poissonian statistics are considered in detail. Analytical results, based on linearization, are verified by comparison with Monte Carlo simulations. An essentially exhaustive investigation of sub-Poissonian light generation by three- and four-level lasers has been performed. Only special forms were reported earlier.
Statistical physics approach to quantifying differences in myelinated nerve fibers
Comin, César H.; Santos, João R.; Corradini, Dario; Morrison, Will; Curme, Chester; Rosene, Douglas L.; Gabrielli, Andrea; da F. Costa, Luciano; Stanley, H. Eugene
2014-01-01
We present a new method to quantify differences in myelinated nerve fibers. These differences range from morphologic characteristics of individual fibers to differences in macroscopic properties of collections of fibers. Our method uses statistical physics tools to improve on traditional measures, such as fiber size and packing density. As a case study, we analyze cross-sectional electron micrographs from the fornix of young and old rhesus monkeys using a semi-automatic detection algorithm to identify and characterize myelinated axons. We then apply a feature selection approach to identify the features that best distinguish between the young and old age groups, achieving a maximum accuracy of 94% when assigning samples to their age groups. This analysis shows that the best discrimination is obtained using the combination of two features: the fraction of occupied axon area and the effective local density. The latter is a modified calculation of axon density, which reflects how closely axons are packed. Our feature analysis approach can be applied to characterize differences that result from biological processes such as aging, damage from trauma or disease or developmental differences, as well as differences between anatomical regions such as the fornix and the cingulum bundle or corpus callosum. PMID:24676146
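The detection pipeline and features in the study are domain-specific, but the scoring loop behind wrapper-style feature selection (evaluate each candidate feature subset with a simple classifier, keep the best-scoring subset) can be sketched generically. The synthetic data, the nearest-centroid classifier, and the two features below are stand-ins, not the authors' method.

```python
import itertools
import random

def centroid_accuracy(X, y, feats):
    """Training-set accuracy of a nearest-centroid classifier restricted to `feats`."""
    rows = {0: [], 1: []}
    for r, label in zip(X, y):
        rows[label].append([r[f] for f in feats])
    cent = {g: [sum(col) / len(col) for col in zip(*rs)] for g, rs in rows.items()}
    hits = 0
    for r, label in zip(X, y):
        v = [r[f] for f in feats]
        pred = min(cent, key=lambda g: sum((a - b) ** 2 for a, b in zip(v, cent[g])))
        hits += pred == label
    return hits / len(y)

# Synthetic samples: feature 0 separates "young" from "old", feature 1 is noise.
rng = random.Random(3)
X = [[rng.gauss(0.0, 0.2), rng.gauss(0.0, 1.0)] for _ in range(30)] + \
    [[rng.gauss(1.0, 0.2), rng.gauss(0.0, 1.0)] for _ in range(30)]
y = [0] * 30 + [1] * 30
subsets = [s for k in (1, 2) for s in itertools.combinations(range(2), k)]
best = max(subsets, key=lambda s: centroid_accuracy(X, y, s))
```

The exhaustive subset search selects the informative feature, mirroring how the paper arrives at its best two-feature combination.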
Statistical physics approach to quantifying differences in myelinated nerve fibers
NASA Astrophysics Data System (ADS)
Comin, César H.; Santos, João R.; Corradini, Dario; Morrison, Will; Curme, Chester; Rosene, Douglas L.; Gabrielli, Andrea; da F. Costa, Luciano; Stanley, H. Eugene
2014-03-01
We present a new method to quantify differences in myelinated nerve fibers. These differences range from morphologic characteristics of individual fibers to differences in macroscopic properties of collections of fibers. Our method uses statistical physics tools to improve on traditional measures, such as fiber size and packing density. As a case study, we analyze cross-sectional electron micrographs from the fornix of young and old rhesus monkeys using a semi-automatic detection algorithm to identify and characterize myelinated axons. We then apply a feature selection approach to identify the features that best distinguish between the young and old age groups, achieving a maximum accuracy of 94% when assigning samples to their age groups. This analysis shows that the best discrimination is obtained using the combination of two features: the fraction of occupied axon area and the effective local density. The latter is a modified calculation of axon density, which reflects how closely axons are packed. Our feature analysis approach can be applied to characterize differences that result from biological processes such as aging, damage from trauma or disease or developmental differences, as well as differences between anatomical regions such as the fornix and the cingulum bundle or corpus callosum.
A statistical approach to nuclear fuel design and performance
NASA Astrophysics Data System (ADS)
Cunning, Travis Andrew
As CANDU fuel failures can have significant economic and operational consequences for the Canadian nuclear power industry, it is essential that factors impacting fuel performance are adequately understood. Current industrial practice relies on deterministic safety analysis and the highly conservative "limit of operating envelope" approach, where all parameters are assumed to be at their limits simultaneously. This results in a conservative prediction of event consequences, with little consideration given to the high quality and precision of current manufacturing processes. This study employs a novel approach to the prediction of CANDU fuel reliability. Probability distributions are fitted to actual fuel manufacturing datasets provided by Cameco Fuel Manufacturing, Inc. They are used to form input for two industry-standard fuel performance codes: ELESTRES for the steady-state case and ELOCA for the transient case, a hypothesized 80% reactor outlet header break loss-of-coolant accident. Using a Monte Carlo technique for input generation, 10^5 independent trials are conducted and probability distributions are fitted to key model output quantities. Comparing model output against recognized industrial acceptance criteria, no fuel failures are predicted for either case. Output distributions are well removed from failure limit values, implying that margin exists in current fuel manufacturing and design. To validate the results and attempt to reduce the simulation burden of the methodology, two dimension-reduction methods are assessed. Using just 36 trials, both methods are able to produce output distributions that agree strongly with those obtained via the brute-force Monte Carlo method, often to a relative discrepancy of less than 0.3% when predicting the first statistical moment, and a relative discrepancy of less than 5% when predicting the second statistical moment. In terms of global sensitivity, pellet density proves to have the greatest impact on fuel performance.
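The ELESTRES/ELOCA codes cannot be reproduced here, but the sampling scaffold of the methodology (fit input distributions, draw independent trials, fit the output distribution, compare against an acceptance limit) can be sketched with a toy response model. All distributions, coefficients, and the limit value below are invented placeholders, not values from the study.

```python
import random
import statistics

def run_trials(n_trials, seed=42):
    """Monte Carlo loop: sample manufacturing inputs, evaluate a toy response."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_trials):
        density = rng.gauss(10.60, 0.05)     # pellet density, g/cm^3 (assumed spread)
        clearance = rng.gauss(0.080, 0.010)  # diametral clearance, mm (assumed spread)
        out.append(density - 2.0 * clearance)  # toy "performance" response
    return out

out = run_trials(100_000)
mu, sigma = statistics.fmean(out), statistics.pstdev(out)
limit = 11.0                       # invented acceptance criterion
margin = (limit - mu) / sigma      # design margin in standard deviations
```

When the fitted output distribution sits many standard deviations below the limit, no failures are predicted, which is the form of the study's conclusion.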
Statistical approach to meteoroid shape estimation based on recovered meteorites
NASA Astrophysics Data System (ADS)
Vinnikov, V.; Gritsevich, M.; Turchak, L.
2014-07-01
Each meteorite sample can provide data on the chemical and physical properties of interplanetary matter. The set of recovered fragments within one meteorite fall can give additional information on the history of its parent asteroid. A reliably estimated meteoroid shape is a valuable input parameter for the atmospheric entry scenario, since the pre-entry mass, terminal meteorite mass, and fireball luminosity are proportional to the pre-entry shape factor of the meteoroid to the power of 3 [1]. We present a statistical approach to the estimation of meteoroid pre-entry shape [2], applied to the detailed data on recovered meteorite fragments. This is a development of our recent study on the fragment mass distribution functions for the Košice meteorite fall [3]. The idea of the shape estimation technique is based on experiments showing that brittle fracturing produces multiple fragments of sizes smaller than or equal to the smallest dimension of the body [2]. Such shattering has fractal properties similar to many other natural phenomena [4]. Thus, this self-similarity for scaling mass sequences can be described by power-law statistical expressions [5]. The finite mass and the number of fragments N are represented via an exponential cutoff at the maximum fragment mass m_U. The undersampling of tiny unrecoverable fragments is handled via an additional constraint on the minimum fragment mass m_L. The complementary cumulative distribution function has the form F(m) = ((N-j)/m_j) (m/m_j)^(-β_0) exp(-(m-m_j)/m_U). The resulting parameters sought (the scaling exponent β_0 and the mass limits) are computed to fit the empirical fragment mass distribution: S(β_0, j, m_U) = Σ_{i=j}^{N} [F(m_i) - (N-j)/m_j]^2, with m_j = m_L. The scaling exponent correlates with the dimensionless shape parameter d [2]: 0.13d^2 - 0.21d + 1.1 - β = 0, which, in turn, is expressed via the ratio of the linear dimensions a, b, c of the shattering body [2]: d = 1 + 2(ab+ac+bc)/(a^2+b^2+c^2). We apply the
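To make the chain of reasoning concrete, here is a hedged sketch: a maximum-likelihood (Hill) estimate of the scaling exponent from fragment masses, ignoring the exponential cutoff for simplicity, followed by the quoted quadratic relation between β and the shape parameter d. The synthetic Pareto-distributed masses are illustrative, not Košice data, and the exponent value 1.2 is an assumption.

```python
import math
import random

def scaling_exponent(masses, m_l):
    """Hill maximum-likelihood estimate of beta_0 for a pure power-law tail m >= m_L."""
    tail = [m for m in masses if m >= m_l]
    return len(tail) / sum(math.log(m / m_l) for m in tail)

def shape_parameter(beta):
    """Positive root d of 0.13 d^2 - 0.21 d + 1.1 - beta = 0."""
    disc = 0.21 ** 2 - 4 * 0.13 * (1.1 - beta)
    return (0.21 + math.sqrt(disc)) / (2 * 0.13)

# Synthetic fragment masses drawn from a Pareto law with beta_0 = 1.2, m_L = 1 g.
rng = random.Random(7)
masses = [1.0 * rng.random() ** (-1.0 / 1.2) for _ in range(5000)]
beta_hat = scaling_exponent(masses, 1.0)
d_hat = shape_parameter(beta_hat)
```

For β = 1.2 the quadratic gives exactly d = 2, which via d = 1 + 2(ab+ac+bc)/(a²+b²+c²) constrains the ratio of the body's linear dimensions.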
New Statistical Approach to the Analysis of Hierarchical Data
NASA Astrophysics Data System (ADS)
Neuman, S. P.; Guadagnini, A.; Riva, M.
2014-12-01
Many variables possess a hierarchical structure reflected in how their increments vary in space and/or time. Quite commonly the increments (a) fluctuate in a highly irregular manner; (b) possess symmetric, non-Gaussian frequency distributions characterized by heavy tails that often decay with separation distance or lag; (c) exhibit nonlinear power-law scaling of sample structure functions in a midrange of lags, with breakdown in such scaling at small and large lags; (d) show extended power-law scaling (ESS) at all lags; and (e) display nonlinear scaling of power-law exponent with order of sample structure function. Some interpret this to imply that the variables are multifractal, which explains neither breakdowns in power-law scaling nor ESS. We offer an alternative interpretation consistent with all above phenomena. It views data as samples from stationary, anisotropic sub-Gaussian random fields subordinated to truncated fractional Brownian motion (tfBm) or truncated fractional Gaussian noise (tfGn). The fields are scaled Gaussian mixtures with random variances. Truncation of fBm and fGn entails filtering out components below data measurement or resolution scale and above domain scale. Our novel interpretation of the data allows us to obtain maximum likelihood estimates of all parameters characterizing the underlying truncated sub-Gaussian fields. These parameters in turn make it possible to downscale or upscale all statistical moments to situations entailing smaller or larger measurement or resolution and sampling scales, respectively. They also allow one to perform conditional or unconditional Monte Carlo simulations of random field realizations corresponding to these scales. Aspects of our approach are illustrated on field and laboratory measured porous and fractured rock permeabilities, as well as soil texture characteristics and neural network estimates of unsaturated hydraulic parameters in a deep vadose zone near Phoenix, Arizona. We also use our approach
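The tfBm/tfGn likelihood machinery is beyond a snippet, but the diagnostic the abstract starts from, sample structure functions of increments and their power-law scaling over a midrange of lags, is easy to sketch. Ordinary Brownian motion is used here as a stand-in signal; its second-order structure function scales linearly with lag, so the fitted exponent should be close to 1.

```python
import math
import random

def structure_function(x, lag, q=2):
    """Sample structure function S_q(lag): mean of |x[i+lag] - x[i]|**q."""
    return sum(abs(x[i + lag] - x[i]) ** q for i in range(len(x) - lag)) \
        / (len(x) - lag)

# Stand-in signal: a Gaussian random walk (Brownian motion), S_2(lag) ∝ lag.
rng = random.Random(11)
x, s = [], 0.0
for _ in range(20000):
    s += rng.gauss(0.0, 1.0)
    x.append(s)
slope = math.log(structure_function(x, 16) / structure_function(x, 1)) / math.log(16)
```

Repeating the fit for several orders q and checking whether the exponents depend nonlinearly on q is how the multifractal-versus-sub-Gaussian question in the abstract is posed.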
Statistical physics and physiology: monofractal and multifractal approaches
NASA Technical Reports Server (NTRS)
Stanley, H. E.; Amaral, L. A.; Goldberger, A. L.; Havlin, S.; Peng, C. K.
1999-01-01
Even under healthy, basal conditions, physiologic systems show erratic fluctuations resembling those found in dynamical systems driven away from a single equilibrium state. Do such "nonequilibrium" fluctuations simply reflect the fact that physiologic systems are being constantly perturbed by external and intrinsic noise? Or do these fluctuations actually contain useful, "hidden" information about the underlying nonequilibrium control mechanisms? We report some recent attempts to understand the dynamics of complex physiologic fluctuations by adapting and extending concepts and methods developed very recently in statistical physics. Specifically, we focus on interbeat interval variability as an important quantity to help elucidate possibly non-homeostatic physiologic variability because (i) the heart rate is under direct neuroautonomic control, (ii) interbeat interval variability is readily measured by noninvasive means, and (iii) analysis of these heart rate dynamics may provide important practical diagnostic and prognostic information not obtainable with current approaches. The analytic tools we discuss may be used on a wider range of physiologic signals. We first review recent progress using two analysis methods--detrended fluctuation analysis and wavelets--sufficient for quantifying monofractal structures. We then describe recent work that quantifies multifractal features of interbeat interval series, and the discovery that the multifractal structure of healthy subjects is different from that of diseased subjects.
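Of the two methods named, detrended fluctuation analysis is compact enough to sketch in full. The implementation below is plain DFA-1 (integrate, detrend linearly in fixed windows, take the RMS residual); for uncorrelated noise the fluctuation function grows as n^0.5, while long-range correlated heartbeat series yield other exponents. The input here is synthetic white noise, not physiological data.

```python
import math
import random

def dfa_fluctuation(x, n):
    """DFA-1 fluctuation F(n): RMS residual of the integrated series about a
    per-window least-squares line, over non-overlapping windows of length n."""
    mean = sum(x) / len(x)
    y, s = [], 0.0
    for v in x:                      # integrate the mean-subtracted series
        s += v - mean
        y.append(s)
    t = list(range(n))
    mt = (n - 1) / 2.0
    vt = sum((ti - mt) ** 2 for ti in t)
    n_win, sq = len(y) // n, 0.0
    for w in range(n_win):
        seg = y[w * n:(w + 1) * n]
        ms = sum(seg) / n
        a = sum((ti - mt) * (si - ms) for ti, si in zip(t, seg)) / vt
        sq += sum((si - (ms + a * (ti - mt))) ** 2 for ti, si in zip(t, seg))
    return math.sqrt(sq / (n_win * n))

rng = random.Random(5)
noise = [rng.gauss(0.0, 1.0) for _ in range(16384)]
alpha = math.log(dfa_fluctuation(noise, 64) / dfa_fluctuation(noise, 8)) / math.log(8)
```

In practice F(n) is computed across many window sizes and alpha is read off a log-log fit; healthy interbeat series give exponents well away from the 0.5 of uncorrelated noise.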
A descriptive statistical approach to the Korean pharmacopuncture therapy.
Kim, Jungdae; Kang, Dae-In
2010-09-01
This paper reviews trends in research related to Korean pharmacopuncture therapy. Specifically, basic and clinical research in pharmacopuncture within the last decade is summarized by introducing categorical variables for classification. These variables are also analyzed for association. This literature review is based on articles published from February 1997 to December 2008 in a Korean journal, the Journal of the Korean Institute of Herbal Acupuncture, which was renamed the Journal of the Korean Pharmacopuncture Institute in 2007. Among the total of 379 papers published in the journal during this period, 164 papers were selected for their direct relevance to pharmacopuncture research and were categorized according to three variables: medicinal materials, acupuncture points and disease. The most frequently studied medicinal materials were bee-venom pharmacopuncture (42%), followed by meridian-field pharmacopuncture (24%), single-compound pharmacopuncture (24%), and eight-principle pharmacopuncture (10%). The frequency distributions of the acupuncture points and meridians for the injection of medicinal materials are presented. The most frequently used meridian and acupuncture point were the Bladder meridian and ST36, respectively. Contingency tables are also displayed to analyze the relationship between the categorized variables. Chi-squared analysis showed a significant association between the type of pharmacopuncture and disease. The trend in research reports on Korean pharmacopuncture therapy was reviewed and analyzed using a descriptive statistical approach to evaluate the therapeutic value of this technique for future research. PMID:20869014
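The association test used here is the ordinary Pearson chi-squared test on a contingency table. A self-contained sketch, with an invented 2x2 table rather than the paper's counts, looks like this:

```python
def chi_squared(table):
    """Pearson chi-squared statistic and degrees of freedom for an r x c table."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    total = sum(row_tot)
    stat = sum((obs - row_tot[i] * col_tot[j] / total) ** 2
               / (row_tot[i] * col_tot[j] / total)
               for i, r in enumerate(table) for j, obs in enumerate(r))
    return stat, (len(table) - 1) * (len(table[0]) - 1)

# Hypothetical counts: pharmacopuncture type (rows) versus disease group (columns).
stat, dof = chi_squared([[30, 10], [10, 30]])
```

For this invented table the statistic is 20.0 with 1 degree of freedom, well above the 5% critical value of 3.84, i.e. a significant association.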
Statistical physics and physiology: monofractal and multifractal approaches.
Stanley, H E; Amaral, L A; Goldberger, A L; Havlin, S; Ivanov PCh; Peng, C K
1999-08-01
Even under healthy, basal conditions, physiologic systems show erratic fluctuations resembling those found in dynamical systems driven away from a single equilibrium state. Do such "nonequilibrium" fluctuations simply reflect the fact that physiologic systems are being constantly perturbed by external and intrinsic noise? Or do these fluctuations actually contain useful, "hidden" information about the underlying nonequilibrium control mechanisms? We report some recent attempts to understand the dynamics of complex physiologic fluctuations by adapting and extending concepts and methods developed very recently in statistical physics. Specifically, we focus on interbeat interval variability as an important quantity to help elucidate possibly non-homeostatic physiologic variability because (i) the heart rate is under direct neuroautonomic control, (ii) interbeat interval variability is readily measured by noninvasive means, and (iii) analysis of these heart rate dynamics may provide important practical diagnostic and prognostic information not obtainable with current approaches. The analytic tools we discuss may be used on a wider range of physiologic signals. We first review recent progress using two analysis methods--detrended fluctuation analysis and wavelets--sufficient for quantifying monofractal structures. We then describe recent work that quantifies multifractal features of interbeat interval series, and the discovery that the multifractal structure of healthy subjects is different from that of diseased subjects. PMID:11543220
Understanding Vrikshasana using body mounted sensors: A statistical approach
Yelluru, Suhas Niranjan; Shanbhag, Ranjith Ravindra; Omkar, SN
2016-01-01
Aim: A scheme for understanding how the human body organizes postural movements while performing Vrikshasana is developed in the format of this paper. Settings and Design: The structural characteristics of the body and the geometry of the muscular actions are incorporated into a graphical representation of the human movement mechanics in the frontal plane. A series of neural organizational hypotheses enables us to understand the mechanics behind the hip and ankle strategy: (1) Body sway in the mediolateral direction; and (2) influence of hip and ankle to correct instabilities caused in the body while performing Vrikshasana. Materials and Methods: A methodological study on 10 participants was performed by mounting four inertial measurement units on the surface of the trapezius, thoracolumbar fascia, vastus lateralis, and gastrocnemius muscles. The kinematic accelerations of three mutually exclusive trials were recorded for a period of 30 s. Results: The results of every trial were processed using two statistical signal-processing approaches, namely variance and cross-correlation. Conclusions obtained from both analyses were in favor of the initial hypothesis. Conclusions: This study enabled us to understand the role of hip abductors and adductors, and ankle extensions and flexions in correcting the posture while performing Vrikshasana. PMID:26865765
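The two signal-processing measures named, variance and cross-correlation, are simple to state. Below is a hedged sketch with synthetic sway traces standing in for the actual muscle-mounted accelerometer recordings; a correlation near 1 would indicate that the two body segments sway together.

```python
import math
import statistics

def pearson_r(a, b):
    """Zero-lag normalized cross-correlation between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

# Synthetic 30 s of mediolateral sway sampled at 50 Hz: the "ankle" trace follows
# the "hip" trace with a fixed gain, so the correlation is exactly 1 (invented data).
hip = [math.sin(2 * math.pi * 0.4 * k / 50.0) for k in range(1500)]
ankle = [0.6 * v for v in hip]
sway_var = statistics.pvariance(hip)   # variance quantifies sway magnitude
r = pearson_r(hip, ankle)
```

On real recordings, variance per sensor and pairwise correlations between sensors together indicate which joints are driving the corrective strategy.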
ERIC Educational Resources Information Center
Hassan, Mahamood M.; Schwartz, Bill N.
2014-01-01
This paper discusses a student research project that is part of an advanced cost accounting class. The project emphasizes active learning, integrates cost accounting with macroeconomics and statistics by "learning by doing" using real world data. Students analyze sales data for a publicly listed company by focusing on the company's…
ERIC Educational Resources Information Center
Averitt, Sallie D.
This instructor guide, which was developed for use in a manufacturing firm's advanced technical preparation program, contains the materials required to present a learning module that is designed to prepare trainees for the program's statistical process control module by improving their basic math skills and instructing them in basic calculator…
ERIC Educational Resources Information Center
Billings, Paul H.
This instructional guide, one of a series developed by the Technical Education Advancement Modules (TEAM) project, is a 6-hour introductory module on statistical process control (SPC), designed to develop competencies in the following skill areas: (1) identification of the three classes of SPC use; (2) understanding a process and how it works; (3)…
Statistical approach to color rendition properties of solid state light sources
NASA Astrophysics Data System (ADS)
Žukauskas, Artūras; Vaicekauskas, Rimantas; Tuzikas, Arūnas; Vitta, Pranciškus; Shur, Michael
2011-10-01
Versatile spectral power distribution of solid-state light sources offers vast possibilities in color rendition engineering. The optimization of such sources requires the development and psychophysical validation of an advanced metric for assessing their color quality. Here we report on the application and validation of the recently introduced statistical approach to color quality of illumination. This new metric uses the computational grouping of a large number of test color samples depending on the magnitude and direction of color-shift vectors with respect to just-perceptible differences of chromaticity and luminance. This approach introduces single-format statistical color rendition indices, such as the Color Fidelity Index, Color Saturation Index and Color Dulling Index, which are the percentages of test color samples with particular behavior of the color-shift vectors. The new metric has been used for the classification of practical phosphor-conversion white light-emitting diodes (LEDs) and polychromatic LED clusters into several distinct categories, such as high-fidelity, color-saturating, and color-dulling light sources. We also report on the development of a tetrachromatic light source with dynamically tailored color rendition properties and using this source for the psychophysical validation of the statistical metric and finding subjective preferences for the color quality of lighting.
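The statistical indices described reduce to percentages of test color samples in each category of color-shift behaviour. A simplified sketch using only a scalar chroma shift per sample (the actual metric works with full color-shift vectors compared against just-perceptible differences) might look like this; the threshold and the shift values are invented.

```python
def rendition_indices(chroma_shifts, jnd=1.0):
    """Percentages of samples rendered faithfully (|shift| <= jnd), with
    increased chroma (saturating), or with decreased chroma (dulling).
    `jnd` is a placeholder just-perceptible-difference threshold."""
    n = len(chroma_shifts)
    fidelity = 100.0 * sum(abs(s) <= jnd for s in chroma_shifts) / n
    saturating = 100.0 * sum(s > jnd for s in chroma_shifts) / n
    dulling = 100.0 * sum(s < -jnd for s in chroma_shifts) / n
    return fidelity, saturating, dulling

# Invented chroma shifts for five test color samples under some LED cluster.
fid, sat, dul = rendition_indices([0.5, -0.2, 2.0, 1.5, -3.0])
```

A source with a high saturating percentage would be classed as color-saturating, one with a high dulling percentage as color-dulling.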
Linear induction accelerator approach for advanced radiography
Caporaso, G.J.
1997-05-01
Recent advances in induction accelerator technology make it possible to envision a single accelerator that can serve as an intense, precision, multiple-pulse x-ray source for advanced radiography. Through the use of solid-state modulator technology, repetition rates on the order of 1 MHz can be achieved with beam pulse lengths ranging from 200 ns to 2 μs. By using fast kickers, these pulses may be sectioned into pieces which are directed to different beam lines so as to interrogate the object under study from multiple lines of sight. The ultimate aim is to perform a time-dependent tomographic reconstruction of a dynamic object. The technology to accomplish these objectives will be presented, along with a brief discussion of the experimental plans to verify it.
Resistive switching phenomena: A review of statistical physics approaches
Lee, Jae Sung; Lee, Shinbuhm; Noh, Tae Won
2015-08-31
Resistive switching (RS) phenomena are reversible changes in the metastable resistance state induced by external electric fields. Since their discovery ~50 years ago, RS phenomena have attracted great attention due to their potential application in next-generation electrical devices. Considerable research has been performed to understand the physical mechanisms of RS and explore the feasibility and limits of such devices. There have also been several reviews on RS that attempt to explain the microscopic origins of how regions that were originally insulators can change into conductors. However, little attention has been paid to the most important factor in determining resistance: how conducting local regions are interconnected. Here, we provide an overview of the underlying physics behind connectivity changes in highly conductive regions under an electric field. We first classify RS phenomena according to their characteristic current–voltage curves: unipolar, bipolar, and threshold switchings. Second, we outline the microscopic origins of RS in oxides, focusing on the roles of oxygen vacancies: the effect of concentration, the mechanisms of channel formation and rupture, and the driving forces of oxygen vacancies. Third, we review RS studies from the perspective of statistical physics to understand connectivity change in RS phenomena. We discuss percolation model approaches and the theory for the scaling behaviors of numerous transport properties observed in RS. Fourth, we review various switching-type conversion phenomena in RS: bipolar-unipolar, memory-threshold, figure-of-eight, and counter-figure-of-eight conversions. Finally, we review several related technological issues, such as improvement in high resistance fluctuations, sneak-path problems, and multilevel switching problems.
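The percolation viewpoint reduces the resistance question to connectivity: the device is in a low-resistance state when conducting regions (for example, oxygen-vacancy-rich sites) form a spanning cluster between the electrodes. A minimal flood-fill check on an invented site grid, not any specific percolation model from the literature, can make this concrete:

```python
def percolates(grid):
    """True if conducting sites (truthy cells) connect the top row to the
    bottom row through 4-neighbour adjacency: a spanning filament."""
    ny, nx = len(grid), len(grid[0])
    frontier = [(0, j) for j in range(nx) if grid[0][j]]
    seen = set(frontier)
    while frontier:
        i, j = frontier.pop()
        if i == ny - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and grid[ni][nj] \
                    and (ni, nj) not in seen:
                seen.add((ni, nj))
                frontier.append((ni, nj))
    return False

filament = [[0, 1, 0],
            [0, 1, 0],
            [0, 1, 0]]   # SET-like state: a channel spans the oxide
ruptured = [[0, 1, 0],
            [0, 0, 0],
            [0, 1, 0]]   # RESET-like state: the channel is locally ruptured
```

Occupying sites at random with probability p and repeating this check is the standard way percolation models estimate switching thresholds and scaling behavior.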
ERIC Educational Resources Information Center
Petocz, Agnes; Newbery, Glenn
2010-01-01
Statistics education in psychology often falls disappointingly short of its goals. The increasing use of qualitative approaches in statistics education research has extended and enriched our understanding of statistical cognition processes, and thus facilitated improvements in statistical education and practices. Yet conceptual analysis, a…
Symmetries and the approach to statistical equilibrium in isotropic turbulence
NASA Astrophysics Data System (ADS)
Clark, Timothy T.; Zemach, Charles
1998-11-01
The relaxation in time of an arbitrary isotropic turbulent state to a state of statistical equilibrium is identified as a transition to a state which is invariant under a symmetry group. We deduce the allowed self-similar forms and time-decay laws for equilibrium states by applying Lie-group methods (a) to a family of scaling symmetries, for the limit of high Reynolds number, as well as (b) to a unique scaling symmetry, for nonzero viscosity or nonzero hyperviscosity. This explains why a diverse collection of turbulence models, going back half a century, arrived at the same time-decay laws, either through derivations embedded in the mechanics of a particular model, or through numerical computation. Because the models treat the same dynamical variables having the same physical dimensions, they are subject to the same scaling invariances and hence to the same time-decay laws, independent of the eccentricities of their different formulations. We show in turn, by physical argument, by an explicitly solvable analytical model, and by numerical computation in more sophisticated models, that the physical mechanism which drives (this is distinct from the mathematical circumstance which allows) the relaxation to equilibrium is the cascade of turbulence energy toward higher wave numbers, with the rate of cascade approaching zero in the low wave-number limit and approaching infinity in the high wave-number limit. Only the low-wave-number properties of the initial state can influence the equilibrium state. This supplies the physical basis, beyond simple dimensional analysis, for quantitative estimates of relaxation times. These relaxation times are estimated to be as large as hundreds or more times the initial dominant-eddy cycle times, and are determined by the large-eddy cycle times. This mode of analysis, applied to a viscous turbulent system in a wind tunnel with typical initial laboratory parameters, shows that the time necessary to reach the final stage of decay is
Ice Shelf Modeling: A Cross-Polar Bayesian Statistical Approach
NASA Astrophysics Data System (ADS)
Kirchner, N.; Furrer, R.; Jakobsson, M.; Zwally, H. J.
2010-12-01
Ice streams interlink glacial terrestrial and marine environments: embedded in a grounded inland ice such as the Antarctic Ice Sheet or the paleo ice sheets covering extensive parts of the Eurasian and Amerasian Arctic respectively, ice streams are major drainage agents facilitating the discharge of substantial portions of continental ice into the ocean. At their seaward side, ice streams can either extend onto the ocean as floating ice tongues (such as the Drygalski Ice Tongue/East Antarctica), or feed large ice shelves (as is the case for e.g. the Siple Coast and the Ross Ice Shelf/West Antarctica). The flow behavior of ice streams has been recognized to be intimately linked with configurational changes in their attached ice shelves; in particular, ice shelf disintegration is associated with rapid ice stream retreat and increased mass discharge from the continental ice mass, contributing eventually to sea level rise. Investigations of ice stream retreat mechanisms are however incomplete if based on terrestrial records only: rather, the dynamics of ice shelves (and, eventually, the impact of the ocean on the latter) must be accounted for. However, since floating ice shelves leave hardly any traces behind when melting, uncertainty regarding the spatio-temporal distribution and evolution of ice shelves in times prior to instrumented and recorded observation is high, calling thus for a statistical modeling approach. Complementing ongoing large-scale numerical modeling efforts (Pollard & DeConto, 2009), we model the configuration of ice shelves by using a Bayesian Hierarchical Modeling (BHM) approach. We adopt a cross-polar perspective accounting for the fact that currently, ice shelves exist mainly along the coastline of Antarctica (and are virtually non-existent in the Arctic), while Arctic Ocean ice shelves repeatedly impacted the Arctic ocean basin during former glacial periods. Modeled Arctic ocean ice shelf configurations are compared with geological spatial
Statistical Learning of Phonetic Categories: Insights from a Computational Approach
ERIC Educational Resources Information Center
McMurray, Bob; Aslin, Richard N.; Toscano, Joseph C.
2009-01-01
Recent evidence (Maye, Werker & Gerken, 2002) suggests that statistical learning may be an important mechanism for the acquisition of phonetic categories in the infant's native language. We examined the sufficiency of this hypothesis and its implications for development by implementing a statistical learning mechanism in a computational model…
Artificial Intelligence Approach to Support Statistical Quality Control Teaching
ERIC Educational Resources Information Center
Reis, Marcelo Menezes; Paladini, Edson Pacheco; Khator, Suresh; Sommer, Willy Arno
2006-01-01
Statistical quality control--SQC (consisting of Statistical Process Control, Process Capability Studies, Acceptance Sampling and Design of Experiments) is a very important tool to obtain, maintain and improve the Quality level of goods and services produced by an organization. Despite its importance, and the fact that it is taught in technical and…
Advancing Instructional Communication: Integrating a Biosocial Approach
ERIC Educational Resources Information Center
Horan, Sean M.; Afifi, Tamara D.
2014-01-01
Celebrating 100 years of the National Communication Association necessitates that, as we commemorate our past, we also look toward our future. As part of a larger conversation about the future of instructional communication, this essay reinvestigates the importance of integrating biosocial approaches into instructional communication research. In…
Approaches for advancing scientific understanding of macrosystems
Levy, Ofir; Ball, Becky A.; Bond-Lamberty, Ben; Cheruvelil, Kendra S.; Finley, Andrew O.; Lottig, Noah R.; Surangi W. Punyasena; Xiao, Jingfeng; Zhou, Jizhong; Buckley, Lauren B.; Filstrup, Christopher T.; Keitt, Tim H.; Kellner, James R.; Knapp, Alan K.; Richardson, Andrew D.; Tcheng, David; Toomey, Michael; Vargas, Rodrigo; Voordeckers, James W.; Wagner, Tyler; Williams, John W.
2014-01-01
The emergence of macrosystems ecology (MSE), which focuses on regional- to continental-scale ecological patterns and processes, builds upon a history of long-term and broad-scale studies in ecology. Scientists face the difficulty of integrating the many elements that make up macrosystems, which consist of hierarchical processes at interacting spatial and temporal scales. Researchers must also identify the most relevant scales and variables to be considered, the required data resources, and the appropriate study design to provide the proper inferences. The large volumes of multi-thematic data often associated with macrosystem studies typically require validation, standardization, and assimilation. Finally, analytical approaches need to describe how cross-scale and hierarchical dynamics and interactions relate to macroscale phenomena. Here, we elaborate on some key methodological challenges of MSE research and discuss existing and novel approaches to meet them.
Advanced drug delivery approaches against periodontitis.
Joshi, Deeksha; Garg, Tarun; Goyal, Amit K; Rath, Goutam
2016-01-01
Periodontitis is an inflammatory disease of the gums involving degeneration of the periodontal ligaments, creation of periodontal pockets and resorption of alveolar bone, resulting in the disruption of the support structure of the teeth. According to the WHO, 10-15% of the global population suffers from severe periodontitis. The disease results from the growth of a diverse microflora (especially anaerobes) in the pockets, with release of toxins and enzymes and stimulation of the body's immune response. Various local and systemic approaches have been used for effective treatment of periodontitis. Currently, the controlled local drug delivery approach is favored over the systemic approach because it focuses on improving therapeutic outcomes through site-specific delivery, low dose requirements, bypass of first-pass metabolism, reduction in gastrointestinal side effects and decreased dosing frequency. Overall it provides a safe and effective mode of treatment, which enhances patient compliance. Complete eradication of the organisms from the sites has not been achieved by surgical and mechanical treatments alone. A number of polymer-based delivery systems like fibers, films, chips, strips, microparticles, nanoparticles and nanofibers made from a variety of natural and synthetic materials have therefore been successfully tested to deliver a variety of drugs. These systems are biocompatible and biodegradable, completely fill the pockets, and have strong retention on the target site due to excellent mucoadhesion properties. The review summarizes various available and recently developed targeted delivery devices for the treatment of periodontitis. PMID:25005586
Statistical Approach To Extraction Of Texture In SAR
NASA Technical Reports Server (NTRS)
Rignot, Eric J.; Kwok, Ronald
1992-01-01
Improved statistical method of extraction of textural features in synthetic-aperture-radar (SAR) images takes account of effects of scheme used to sample raw SAR data, system noise, resolution of radar equipment, and speckle. Treatment of speckle incorporated into overall statistical treatment of speckle, system noise, and natural variations in texture. One computes speckle auto-correlation function from system transfer function that expresses effect of radar aperture and incorporates range and azimuth resolutions.
Advances in myelofibrosis: a clinical case approach.
Mascarenhas, John O; Orazi, Attilio; Bhalla, Kapil N; Champlin, Richard E; Harrison, Claire; Hoffman, Ronald
2013-10-01
Primary myelofibrosis is a member of the myeloproliferative neoplasms, a diverse group of bone marrow malignancies. Symptoms of myelofibrosis, particularly those associated with splenomegaly (abdominal distention and pain, early satiety, dyspnea, and diarrhea) and constitutional symptoms, represent a substantial burden to patients. Most patients eventually die from the disease, with a median survival ranging from approximately 5-7 years. Mutations in Janus kinase 2 (JAK2), a kinase that is essential for the normal development of erythrocytes, granulocytes, and platelets, notably the V617F mutation, have been identified in approximately 50% of patients with myelofibrosis. The approval of a JAK2 inhibitor in 2011 has improved the outlook of many patients with myelofibrosis and has changed the treatment landscape. This article focuses on some of the important issues in current myelofibrosis management, including differentiation of myelofibrosis from essential thrombocythemia and polycythemia vera, updated data on the results of JAK2 inhibitor therapy, the role of epigenetic mechanisms in myelofibrosis pathogenesis, investigational therapies for myelofibrosis, and advances in hematopoietic stem cell transplant. Three myelofibrosis cases are included to underscore the issues in diagnosing and treating this complex disease. PMID:24091929
Advancing Profiling Sensors with a Wireless Approach
Galvis, Alex; Russomanno, David J.
2012-01-01
The notion of a profiling sensor was first realized by a Near-Infrared (N-IR) retro-reflective prototype consisting of a vertical column of wired sparse detectors. This paper extends that prior work and presents a wireless version of a profiling sensor as a collection of sensor nodes. The sensor incorporates wireless sensing elements, a distributed data collection and aggregation scheme, and an enhanced classification technique. In this novel approach, a base station pre-processes the data collected from the sensor nodes and performs data re-alignment. A back-propagation neural network was also developed for the wireless version of the N-IR profiling sensor that classifies objects into the broad categories of human, animal or vehicle with an accuracy of approximately 94%. These enhancements improve deployment options as compared with the first generation of wired profiling sensors, possibly increasing the application scenarios for such sensors, including intelligent fence applications. PMID:23443371
Accuracy Evaluation of a Mobile Mapping System with Advanced Statistical Methods
NASA Astrophysics Data System (ADS)
Toschi, I.; Rodríguez-Gonzálvez, P.; Remondino, F.; Minto, S.; Orlandini, S.; Fuller, A.
2015-02-01
This paper discusses a methodology to evaluate the precision and accuracy of a commercial Mobile Mapping System (MMS) with advanced statistical methods. So far, the metric potential of this emerging mapping technology has been studied in only a few papers, which generally assume that errors follow a normal distribution. In fact, this hypothesis should be carefully verified in advance, in order to test how well classic Gaussian statistics can adapt to datasets that are usually affected by asymmetrical gross errors. The workflow adopted in this study relies on a Gaussian assessment, followed by an outlier filtering process. Finally, non-parametric statistical models are applied in order to achieve a robust estimation of the error dispersion. Among the different MMSs available on the market, the latest solution provided by RIEGL is tested here, i.e. the VMX-450 Mobile Laser Scanning System. The test area is the historic city centre of Trento (Italy), selected in order to assess the system performance in a challenging, historic urban scenario. Reference measures are derived from photogrammetric and Terrestrial Laser Scanning (TLS) surveys. All datasets show a large lack of symmetry, leading to the conclusion that the standard normal parameters are not adequate to assess this type of data. The use of non-normal statistics thus gives a more appropriate description of the data and yields results that meet the quoted a priori errors.
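The Gaussian-versus-robust contrast described above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper; the function names and data are ours, and the NMAD (normalized median absolute deviation) is a standard non-parametric stand-in for the standard deviation.

```python
import statistics

def gaussian_summary(errors):
    """Classic Gaussian estimators: mean and sample standard deviation."""
    return statistics.mean(errors), statistics.stdev(errors)

def robust_summary(errors):
    """Non-parametric estimators: median and NMAD.

    NMAD = 1.4826 * median(|e - median(e)|); the scale factor makes it
    comparable to the standard deviation when the data are normal.
    """
    med = statistics.median(errors)
    nmad = 1.4826 * statistics.median(abs(e - med) for e in errors)
    return med, nmad

# A dataset with one asymmetrical gross error (0.5 m among cm-level errors):
errors = [0.010, 0.012, 0.015, 0.018, 0.020, 0.500]
mean, std = gaussian_summary(errors)
med, nmad = robust_summary(errors)
```

On such data the mean and standard deviation are dragged upward by the single outlier, while the median and NMAD keep describing the bulk of the distribution, which is the behavior the non-normal assessment exploits.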
Europe's Neogene and Quaternary lake gastropod diversity - a statistical approach
NASA Astrophysics Data System (ADS)
Neubauer, Thomas A.; Georgopoulou, Elisavet; Harzhauser, Mathias; Mandic, Oleg; Kroh, Andreas
2014-05-01
During the Neogene, Europe's geodynamic history gave rise to several long-lived lakes with conspicuous endemic radiations. However, such lacustrine systems are rare, today as in the past, compared to the enormous numbers of "normal" lakes. Most extant European lakes are products of the Ice Ages and, owing to their (geologically) temporary nature, are largely confined to the Pleistocene-Holocene. As glacial lakes are also geographically restricted to glacial regions (and their catchment areas), their preservation potential is fairly low. Deposits of streams, springs, and groundwater, which today are inhabited by species-rich gastropod assemblages, are likewise rarely preserved. Thus, the pre-Quaternary lacustrine record is biased towards long-lived systems, such as the Late Miocene Lake Pannon, the Early to Middle Miocene Dinaride Lake System, the Middle Miocene Lake Steinheim, and several others. All these systems have been studied for more than 150 years concerning their mollusk inventories, and the taxonomic literature is formidable. However, apart from a few general overviews, precise studies on the γ-diversities of the post-Oligocene European lake systems and the shifting biodiversity in European freshwater systems through space and time are entirely missing. Even for the modern faunas, literature on large-scale freshwater gastropod diversity in extant lakes is scarce and lacks a statistical approach. Our preliminary data suggest fundamental differences between modern and pre-Pleistocene freshwater biogeography in central Europe. A rather homogeneous central European Pleistocene and Holocene lake fauna is contrasted by considerable provincialism during the early Middle Miocene. Aside from the ancient Dessaretes lakes of the Balkan Peninsula, Holocene lake faunas are dominated by planorbids and lymnaeids in species numbers. This composition differs considerably from many Miocene and Pliocene lake faunas, which comprise pyrgulid-, hydrobiid-, viviparid-, melanopsid
ERIC Educational Resources Information Center
Johnson, H. Dean; Dasgupta, Nairanjana; Zhang, Hao; Evans, Marc A.
2009-01-01
The use of the Internet as a teaching tool continues to grow in popularity at colleges and universities. We consider, from the students' perspective, the use of an Internet approach compared to a lecture and lab-based approach for teaching an introductory course in statistical methods. We conducted a survey of introductory statistics students.…
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
A BAYESIAN STATISTICAL APPROACH FOR THE EVALUATION OF CMAQ
This research focuses on the application of spatial statistical techniques for the evaluation of the Community Multiscale Air Quality (CMAQ) model. The upcoming release version of the CMAQ model was run for the calendar year 2001 and is in the process of being evaluated by EPA an...
An Experimental Approach to Teaching and Learning Elementary Statistical Mechanics
ERIC Educational Resources Information Center
Ellis, Frank B.; Ellis, David C.
2008-01-01
Introductory statistical mechanics is studied for a simple two-state system using an inexpensive and easily built apparatus. A large variety of demonstrations, suitable for students in high school and introductory university chemistry courses, are possible. This article details demonstrations for exothermic and endothermic reactions, the dynamic…
A BAYESIAN STATISTICAL APPROACH FOR THE EVALUATION OF CMAQ
Bayesian statistical methods are used to evaluate Community Multiscale Air Quality (CMAQ) model simulations of sulfate aerosol over a section of the eastern US for 4-week periods in summer and winter 2001. The observed data come from two U.S. Environmental Protection Agency data ...
Teaching MBA Statistics Online: A Pedagogically Sound Process Approach
ERIC Educational Resources Information Center
Grandzol, John R.
2004-01-01
Delivering MBA statistics in the online environment presents significant challenges to educators and students alike because of varying student preparedness levels, complexity of content, difficulty in assessing learning outcomes, and faculty availability and technological expertise. In this article, the author suggests a process model that…
Analysis of Coastal Dunes: A Remote Sensing and Statistical Approach.
ERIC Educational Resources Information Center
Jones, J. Richard
1985-01-01
Remote sensing analysis and statistical methods were used to analyze the coastal dunes of Plum Island, Massachusetts. The research methodology used provides an example of a student project for remote sensing, geomorphology, or spatial analysis courses at the university level. (RM)
Statistical and Microscopic Approach to Gas Phase Chemical Kinetics.
ERIC Educational Resources Information Center
Perez, J. M.; Quereda, R.
1983-01-01
Describes an advanced undergraduate laboratory exercise examining the dependence of the rate constants and the instantaneous concentrations on the nature and energy content in a gas-phase complex reaction. A computer program (with instructions and computation flow charts) used with the exercise is available from the author. (Author/JN)
Statistical approaches to pharmacodynamic modeling: motivations, methods, and misperceptions.
Mick, R; Ratain, M J
1993-01-01
We have attempted to outline the fundamental statistical aspects of pharmacodynamic modeling. Unexpected yet substantial variability in effect in a group of similarly treated patients is the key motivation for pharmacodynamic investigations. Pharmacokinetic and/or pharmacodynamic factors may influence this variability. Residual variability in effect that persists after accounting for drug exposure indicates that further statistical modeling with pharmacodynamic factors is warranted. Factors that significantly predict interpatient variability in effect may then be employed to individualize the drug dose. In this paper we have emphasized the need to understand the properties of the effect measure and explanatory variables in terms of scale, distribution, and statistical relationship. The assumptions that underlie many types of statistical models have been discussed. The role of residual analysis has been stressed as a useful method to verify assumptions. We have described transformations and alternative regression methods that are employed when these assumptions are found to be in violation. Sequential selection procedures for the construction of multivariate models have been presented. The importance of assessing model performance has been underscored, most notably in terms of bias and precision. In summary, pharmacodynamic analyses are now commonly performed and reported in the oncologic literature. The content and format of these analyses have been variable. The goals of such analyses are to identify and describe pharmacodynamic relationships and, in many cases, to propose a statistical model. However, the appropriateness and performance of the proposed model are often difficult to judge. Table 1 displays suggestions (in a checklist format) for structuring the presentation of pharmacodynamic analyses, which reflect the topics reviewed in this paper. PMID:8269582
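The workflow outlined above (fit effect against exposure, then inspect what is left over) can be sketched minimally. A hypothetical illustration, not the authors' code; the variable names (`exposure`, `effect`) and data are invented:

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

def residuals(x, y, a, b):
    """Effect not explained by exposure; if these residuals still vary
    systematically, further pharmacodynamic covariates are warranted."""
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Hypothetical exposure (e.g. AUC) and observed effect for six patients:
exposure = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
effect = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]
a, b = fit_line(exposure, effect)
res = residuals(exposure, effect, a, b)
```

Plotting the residuals against candidate covariates (or simply against the fitted values) is the residual analysis the paper stresses for verifying model assumptions.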
Students' Attitudes toward Statistics across the Disciplines: A Mixed-Methods Approach
ERIC Educational Resources Information Center
Griffith, James D.; Adams, Lea T.; Gu, Lucy L.; Hart, Christian L.; Nichols-Whitehead, Penney
2012-01-01
Students' attitudes toward statistics were investigated using a mixed-methods approach including a discovery-oriented qualitative methodology among 684 undergraduate students across business, criminal justice, and psychology majors where at least one course in statistics was required. Students were asked about their attitudes toward statistics and…
Class G cement in Brazil - A statistical approach
Rosa, F.C.; Coelho, O. Jr.; Parente, F.J. )
1993-09-01
Since 1975, Petrobras has worked with Brazilian Portland cement manufacturers to develop high-quality Class G cements. The Petrobras R and D Center has analyzed each batch of Class G cement manufactured by prequalified producers to API Spec. 10 standards and to Brazilian Assoc. of Technical Standards (ABNT) NBR 9831 standards. As a consequence, the Drilling Dept. at Petrobras now is supplied by three approved Class G cement factories strategically located in Brazil. This paper statistically analyzes test results on the basis of physical parameters of these Class G cements over 3 years. Statistical indices are reported to evaluate dispersion of the physical properties to obtain a reliability index for each Class G cement.
Statistical approach to linewidth control in a logic fab
NASA Astrophysics Data System (ADS)
Pitter, Michael; Doleschel, Bernhard; Eibl, Ludwig; Steinkirchner, Erwin; Grassmann, Andreas
1999-04-01
We designed an adaptive linewidth controller specially tailored to the needs of a highly diversified logic fab. Simulations of different controller types fed with historic CD data show advantages of an SPC-based controller over a run-by-run controller. This result confirms the SPC assumption that, as long as a process is in statistical control, changing the process parameters will only increase the variability of the output.
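The two policies compared in the simulations can be caricatured as follows. A hedged sketch, not the fab's actual controller; the target, sigma, and k values are illustrative:

```python
def run_by_run(latest_cd, target):
    """Run-by-run policy: apply a corrective offset after every lot."""
    return target - latest_cd

def spc(latest_cd, target, sigma, k=3.0):
    """SPC-based policy: intervene only when the measurement falls
    outside the +/- k*sigma control limits. While the process is in
    statistical control, adjusting it only adds output variability."""
    if abs(latest_cd - target) > k * sigma:
        return target - latest_cd
    return 0.0
```

The run-by-run policy chases every measurement, including pure noise; the SPC policy leaves an in-control process alone, which is the behavior the simulations favored.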
A statistical mechanics approach to autopoietic immune networks
NASA Astrophysics Data System (ADS)
Barra, Adriano; Agliari, Elena
2010-07-01
In this work we aim to bridge theoretical immunology and disordered statistical mechanics. We introduce a model for the behavior of B-cells which naturally merges the clonal selection theory and the autopoietic network theory as a whole. From the analysis of its features we recover several basic phenomena such as low-dose tolerance, dynamical memory of antigens and self/non-self discrimination.
W± bosons production in the quantum statistical parton distributions approach
NASA Astrophysics Data System (ADS)
Bourrely, Claude; Buccella, Franco; Soffer, Jacques
2013-10-01
We consider W± gauge boson production in connection with recent results from BNL-RHIC and FNAL-Tevatron and interesting predictions from the statistical parton distributions. These concern relevant aspects of the structure of the nucleon sea and the high-x region of the valence quark distributions. We also give predictions in view of future proton-neutron collision experiments at BNL-RHIC.
Extreme event statistics of daily rainfall: dynamical systems approach
NASA Astrophysics Data System (ADS)
Cigdem Yalcin, G.; Rabassa, Pau; Beck, Christian
2016-04-01
We analyse the probability densities of daily rainfall amounts at a variety of locations on Earth. The observed distributions of the amount of rainfall fit well to a q-exponential distribution with exponent q ≈ 1.3. We discuss possible reasons for the emergence of this power law. In contrast, the waiting time distribution between rainy days is observed to follow a near-exponential distribution. A careful investigation shows that a q-exponential with q ≈ 1.05 yields the best fit of the data. A Poisson process whose rate fluctuates slightly in a superstatistical way is discussed as a possible model for this. We discuss the extreme value statistics for extreme daily rainfall, which can potentially lead to flooding. This is described by Fréchet distributions, as the corresponding distributions of the amount of daily rainfall decay with a power law. Looking at the extreme event statistics of waiting times between rainy days (leading to droughts for very long dry periods), we obtain from the observed near-exponential decay of waiting times extreme event statistics close to Gumbel distributions. We discuss superstatistical dynamical systems as simple models in this context.
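The q-exponential that the daily-rainfall amounts follow can be written down directly. A minimal numerical sketch (our function name, no fitting): for q > 1 the tail decays as a power law, and the q → 1 limit recovers the ordinary exponential.

```python
import math

def q_exponential(x, q, lam=1.0):
    """e_q(-lam*x): [1 + (q-1)*lam*x]**(-1/(q-1)) for q != 1,
    exp(-lam*x) in the limit q -> 1. Power-law tail ~ x**(-1/(q-1))."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(-lam * x)
    base = 1.0 + (q - 1.0) * lam * x
    if base <= 0.0:
        return 0.0  # outside the support (possible for q < 1)
    return base ** (-1.0 / (q - 1.0))
```

With q ≈ 1.3 the tail is substantially heavier than the exponential, which is consistent with large daily amounts occurring far more often than an exponential model would predict.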
A Statistical Approach to Characterizing the Reliability of Systems Utilizing HBT Devices
NASA Technical Reports Server (NTRS)
Chen, Yuan; Wang, Qing; Kayali, Sammy
2004-01-01
This paper presents a statistical approach to characterizing the reliability of systems with HBT devices. The proposed approach utilizes the statistical reliability information of the individual HBT devices, along with an analysis of the critical paths of the system, to provide more accurate and more comprehensive reliability information about HBT systems than the conventional worst-case method.
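The device-to-system step can be caricatured with a series-path model. A hedged sketch under simplifying assumptions (independent devices; a path works only if every device on it works); this is not the authors' model, and the numbers are invented:

```python
def path_reliability(device_reliabilities):
    """Series combination along one critical path: the path works
    only if every device on it works (independence assumed)."""
    r = 1.0
    for ri in device_reliabilities:
        r *= ri
    return r

def system_bound(critical_paths):
    """Upper bound on system reliability: the system cannot be more
    reliable than its weakest identified critical path."""
    return min(path_reliability(p) for p in critical_paths)

# Two hypothetical critical paths with per-device reliabilities:
bound = system_bound([[0.99, 0.99], [0.9]])
```

Feeding statistical (distributional) device reliabilities through such paths, rather than a single worst-case number, is the spirit of the approach the abstract describes.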
NASA Astrophysics Data System (ADS)
Tsallis, Constantino
2006-03-01
Boltzmann-Gibbs (BG) statistical mechanics has, for well over a century, been successfully used for many nonlinear dynamical systems which, in one way or another, exhibit strong chaos. A typical case is a classical many-body short-range-interacting Hamiltonian system (e.g., the Lennard-Jones model for a real gas at moderately high temperature). Its Lyapunov spectrum (which characterizes the sensitivity to initial conditions) includes positive values. This leads to ergodicity, the stationary state being thermal equilibrium, hence standard applicability of the BG theory is verified. The situation appears to be of a different nature for various phenomena occurring in living organisms. Indeed, such systems exhibit a complexity which does not really accommodate this standard dynamical behavior. Life appears to emerge and evolve in a kind of delicate situation, at the frontier between large order (low adaptability and long memory; typically characterized by regular dynamics, hence only nonpositive Lyapunov exponents) and large disorder (high adaptability and short memory; typically characterized by strong chaos, hence at least one positive Lyapunov exponent). Along this frontier, the maximal relevant Lyapunov exponents are either zero or close to it, characterizing what is currently referred to as weak chaos. This type of situation is shared by a great variety of similar complex phenomena in economics, linguistics, and elsewhere, to cite but a few. BG statistical mechanics is built upon the entropy S_BG = -k ∑_i p_i ln p_i. A generalization of this form, S_q = k(1 - ∑_i p_i^q)/(q - 1) (with S_1 = S_BG), was proposed in 1988 as a basis for formulating what is nowadays called nonextensive statistical mechanics. This theory appears to be particularly adapted to nonlinear dynamical systems exhibiting, precisely, weak chaos. Here, we briefly review the theory, its dynamical foundation, its applications in a variety of disciplines (with special emphasis on living systems), and its connections with
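The two entropies in the abstract translate directly into code. A minimal numerical sketch (our function names), checking that S_q recovers the BG entropy in the limit q → 1:

```python
import math

def entropy_bg(p, k=1.0):
    """Boltzmann-Gibbs entropy S_BG = -k * sum_i p_i * ln p_i."""
    return -k * sum(pi * math.log(pi) for pi in p if pi > 0.0)

def entropy_q(p, q, k=1.0):
    """Tsallis entropy S_q = k * (1 - sum_i p_i**q) / (q - 1),
    with S_1 = S_BG recovered in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return entropy_bg(p, k)
    return k * (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)
```

For a uniform distribution over N states, S_BG = k ln N, while S_q = k (N^(1-q) - 1)/(1 - q), which approaches k ln N as q → 1.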
Statistical Thermodynamic Approach to Vibrational Solitary Waves in Acetanilide
NASA Astrophysics Data System (ADS)
Vasconcellos, Áurea R.; Mesquita, Marcus V.; Luzzi, Roberto
1998-03-01
We analyze the behavior of the macroscopic thermodynamic state of polymers, centering on acetanilide. The nonlinear equations of evolution for the populations and the statistically averaged field amplitudes of CO-stretching modes are derived. The existence of excitations of the solitary wave type is evidenced. The infrared spectrum is calculated and compared with the experimental data of Careri et al. [Phys. Rev. Lett. 51, 104 (1983)], resulting in a good agreement. We also consider the situation of a nonthermally highly excited sample, predicting the occurrence of a large increase in the lifetime of the solitary wave excitation.
A Statistical Approach to Establishing Subsystem Environmental Test Specifications
NASA Technical Reports Server (NTRS)
Keegan, W. B.
1974-01-01
Results are presented of a research task to evaluate structural responses at various subsystem mounting locations during spacecraft level test exposures to the environments of mechanical shock, acoustic noise, and random vibration. This statistical evaluation is presented in the form of recommended subsystem test specifications for these three environments as normalized to a reference set of spacecraft test levels and are thus suitable for extrapolation to a set of different spacecraft test levels. The recommendations are dependent upon a subsystem's mounting location in a spacecraft, and information is presented on how to determine this mounting zone for a given subsystem.
Recent Advances in Targeted Drug Delivery Approaches Using Dendritic Polymers
Bugno, Jason; Hsu, Hao-Jui; Hong, Seungpyo
2014-01-01
Since they were first synthesized over 30 years ago, dendrimers have seen rapid translation into various biomedical applications. A number of reports have not only demonstrated their clinical utility, but also revealed novel design approaches and strategies based on the elucidation of underlying mechanisms governing their biological interactions. This review focuses on presenting the latest advances in dendrimer design, discussing the current mechanistic understandings, and highlighting recent developments and targeted approaches using dendrimers in drug/gene delivery. PMID:26221937
Geo-Statistical Approach to Estimating Asteroid Exploration Parameters
NASA Technical Reports Server (NTRS)
Lincoln, William; Smith, Jeffrey H.; Weisbin, Charles
2011-01-01
NASA's vision for space exploration calls for a human visit to a near earth asteroid (NEA). Potential human operations at an asteroid include exploring a number of sites and analyzing and collecting multiple surface samples at each site. In this paper two approaches to formulation and scheduling of human exploration activities are compared given uncertain information regarding the asteroid prior to visit. In the first approach a probability model was applied to determine best estimates of mission duration and exploration activities consistent with exploration goals and existing prior data about the expected aggregate terrain information. These estimates were compared to a second approach or baseline plan where activities were constrained to fit within an assumed mission duration. The results compare the number of sites visited, number of samples analyzed per site, and the probability of achieving mission goals related to surface characterization for both cases.
Demarcating Advanced Learning Approaches from Methodological and Technological Perspectives
ERIC Educational Resources Information Center
Horvath, Imre; Peck, David; Verlinden, Jouke
2009-01-01
In the field of design and engineering education, the fast and expansive evolution of information and communication technologies is steadily converting traditional learning approaches into more advanced ones. Facilitated by Broadband (high bandwidth) personal computers, distance learning has developed into web-hosted electronic learning. The…
New Therapeutic Approaches for Advanced Gastrointestinal Stromal Tumors (GISTs)
Somaiah, Neeta
2010-01-01
The management of advanced GIST is increasingly complex due to imatinib-refractory disease. Primary resistance to imatinib is uncommon, and most patients progress after the development of additional genetic changes. This article reviews management strategies including surgical approaches, local modalities for progressive liver metastases, as well as novel therapeutic agents. PMID:19248977
Advance Approach to Concept and Design Studies for Space Missions
NASA Technical Reports Server (NTRS)
Deutsch, M.; Nichols, J.
1999-01-01
Recent automated and advanced techniques developed at JPL have created a streamlined and fast-track approach to initial mission conceptualization and system architecture design, answering the need for rapid turnaround of trade studies for potential proposers, as well as mission and instrument study groups.
A statistical modeling approach for detecting generalized synchronization
Schumacher, Johannes; Haslinger, Robert; Pipa, Gordon
2012-01-01
Detecting nonlinear correlations between time series presents a hard problem for data analysis. We present a generative statistical modeling method for detecting nonlinear generalized synchronization. Truncated Volterra series are used to approximate functional interactions. The Volterra kernels are modeled as linear combinations of basis splines, whose coefficients are estimated via l1 and l2 regularized maximum likelihood regression. The regularization manages the high number of kernel coefficients and allows feature selection strategies yielding sparse models. The method's performance is evaluated on different coupled chaotic systems in various synchronization regimes and analytical results for detecting m:n phase synchrony are presented. Experimental applicability is demonstrated by detecting nonlinear interactions between neuronal local field potentials recorded in different parts of macaque visual cortex. PMID:23004851
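The kernel-regression idea in this abstract can be illustrated in miniature. The sketch below is not the authors' spline-based implementation: it approximates a nonlinear coupling between two synthetic series with low-order polynomial (Volterra-like) features and a closed-form l2 (ridge) regularized least-squares fit; all values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Driver series x and a nonlinearly coupled response y (synthetic stand-ins
# for two interacting time series).
x = rng.standard_normal(500)
y = 0.8 * x - 0.5 * x**2 + 0.1 * x**3 + 0.05 * rng.standard_normal(500)

# Low-order "Volterra-like" polynomial features of the driver.
X = np.column_stack([np.ones_like(x), x, x**2, x**3])

# l2-regularized (ridge) least squares in closed form:
#   w = (X^T X + lam * I)^{-1} X^T y
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

pred = X @ w
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

An l1 penalty, as used in the paper for feature selection, has no closed form and would instead be fitted iteratively (e.g. by coordinate descent), driving irrelevant kernel coefficients exactly to zero.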
Bayesian statistical approach to binary asteroid orbit determination
NASA Astrophysics Data System (ADS)
Kovalenko, Irina D.; Stoica, Radu S.; Emelyanov, N. V.; Doressoundiram, A.; Hestroffer, D.
2016-01-01
The problem of binary asteroid orbit determination is of particular interest, since knowledge of the orbit is the best way to derive the mass of the system. Orbit determination from observed points is a classic problem of celestial mechanics. However, in the case of binary asteroids, particularly with a small number of observations, the solution is not straightforward to derive. In the case of resolved binaries the problem consists of determining the relative orbit from observed relative positions of a secondary asteroid with respect to the primary. In this work, the problem is investigated as a statistical inverse problem. Within this context, we propose a method based on Bayesian modelling together with a global optimisation procedure based on the simulated annealing algorithm.
Nonextensive statistical mechanics approach to electron trapping in degenerate plasmas
NASA Astrophysics Data System (ADS)
Mebrouk, Khireddine; Gougam, Leila Ait; Tribeche, Mouloud
2016-06-01
The electron trapping in a weakly nondegenerate plasma is reformulated and re-examined by incorporating the nonextensive entropy prescription. Using the q-deformed Fermi-Dirac distribution function, including quantum as well as nonextensive statistical effects, we derive a new generalized electron density with a new contribution proportional to the electron temperature T, which may dominate the usual thermal correction (∼T²) at very low temperatures. To make the physics behind the effect of this new contribution more transparent, we analyze the modifications arising in the propagation of ion-acoustic solitary waves. Interestingly, we find that due to the nonextensive correction, our plasma model allows the possibility of the existence of quantum ion-acoustic solitons with velocity higher than the Fermi ion-sound velocity. Moreover, as the nonextensive parameter q increases, the critical temperature Tc beyond which coexistence of compressive and rarefactive solitons sets in is shifted towards higher values.
Statistical mechanics approach to lock-key supramolecular chemistry interactions.
Odriozola, Gerardo; Lozada-Cassou, Marcelo
2013-03-01
In the supramolecular chemistry field, intuitive concepts such as molecular complementarity and molecular recognition are used to explain the mechanism of lock-key associations. However, these concepts lack a precise definition, and consequently this mechanism is not well defined and understood. Here we address the physical basis of this mechanism through formal statistical mechanics and Monte Carlo simulation, and compare our results with recent experimental data for charged and uncharged lock-key colloids. We find that, given the size range of the molecules involved in these associations, the entropy contribution, driven by the solvent, dominates the interaction over that of the enthalpy. A universal behavior for the uncharged lock-key association is found. Based on our results, we propose a supramolecular chemistry definition. PMID:23521272
A Statistical Approach To An Expert Diagnostic Ultrasonic System
NASA Astrophysics Data System (ADS)
Insana, Michael F.; Wagner, Robert F.; Garra, Brian S.; Shawker, Thomas H.
1986-06-01
The techniques of statistical pattern recognition are implemented to determine the best combination of tissue characterization parameters for maximizing the diagnostic accuracy of a given task. In this paper, we considered combinations of four ultrasonic tissue parameters to discriminate between normal liver and chronic hepatitis. The separation between normal and diseased samples was made by application of the Bayes test for minimum risk which minimizes the error rate for classifying tissue states while including the prior probability for the presence of disease and the cost of misclassification. Large differences in classification performance of various tissue parameter combinations were demonstrated by ROC analysis. The power of additional features to classify tissue states, even those derived from other imaging modalities, can be compared directly in this manner.
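The Bayes test for minimum risk described above can be sketched for two classes on a single feature. The Gaussian class likelihoods, prior probability of disease, and misclassification costs below are purely hypothetical stand-ins for the paper's ultrasonic tissue parameters:

```python
import numpy as np

def gauss(x, m, s):
    """Gaussian probability density."""
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def bayes_min_risk(x, prior_disease=0.3, c_miss=10.0, c_false_alarm=1.0):
    """Two-class Bayes test for minimum risk on one feature.

    Hypothetical class likelihoods: normal ~ N(0, 1), diseased ~ N(2, 1).
    Decide 'diseased' when the likelihood ratio exceeds the threshold
    c_false_alarm * P(normal) / (c_miss * P(diseased)).
    """
    lr = gauss(x, 2.0, 1.0) / gauss(x, 0.0, 1.0)
    threshold = c_false_alarm * (1.0 - prior_disease) / (c_miss * prior_disease)
    return lr > threshold
```

With these numbers the threshold is 0.7/3 ≈ 0.23, so the costly misses push the decision boundary well below the midpoint of the two class means.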
A statistical approach to the temporal development of orbital associations
NASA Astrophysics Data System (ADS)
Kastinen, D.; Kero, J.
2016-01-01
We have performed preliminary studies on the use of a Monte Carlo based statistical toolbox for small-body solar system dynamics to find trends in the temporal development of orbital associations. As part of this preliminary study four different similarity functions were implemented and applied to the 21P/Giacobini-Zinner meteoroid stream and the resulting simulated meteor showers. The simulations indicate that the temporal behavior of orbital element distributions in the meteoroid stream and the meteor shower differ on century-scale time scales. The configuration of the meteor shower remains compact for a long time and dissipates an order of magnitude more slowly than the stream. The main effect driving the shower dissipation is shown to be the addition of new trails to the stream.
Statistical Approaches to Aerosol Dynamics for Climate Simulation
Zhu, Wei
2014-09-02
In this work, we introduce two general non-parametric regression analysis methods for errors-in-variables (EIV) models: the compound regression and the constrained regression. It is shown that these approaches are equivalent to each other and to the general parametric structural modeling approach. The advantages of these methods lie in their intuitive geometric representations, their distribution-free nature, and their ability to offer a practical solution when the ratio of the error variances is unknown. Each includes the classic non-parametric regression methods of ordinary least squares, geometric mean regression, and orthogonal regression as special cases. Both methods can be readily generalized to multiple linear regression with two or more random regressors.
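The special cases named above (ordinary least squares, geometric mean regression, orthogonal regression) all admit simple closed-form slopes in the bivariate case. A minimal sketch on synthetic errors-in-variables data, assuming equal error variances for the orthogonal fit (the data and noise levels are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)                # latent "true" regressor
x = t + rng.normal(0.0, 0.5, 200)              # regressor observed with error
y = 2.0 * t + 1.0 + rng.normal(0.0, 0.5, 200)  # response, true slope 2

sx, sy = x.std(ddof=1), y.std(ddof=1)
r = np.corrcoef(x, y)[0, 1]

b_ols = r * sy / sx                 # ordinary least squares (attenuated by EIV)
b_gmr = np.sign(r) * sy / sx        # geometric mean regression
# Orthogonal (total least squares) regression, valid for equal error variances:
d = sy**2 - sx**2
b_orth = (d + np.sqrt(d**2 + 4.0 * (r * sx * sy) ** 2)) / (2.0 * r * sx * sy)
```

The OLS slope is biased toward zero when the regressor carries measurement error, which is exactly the attenuation the EIV methods above are designed to avoid.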
Inverse problems and computational cell metabolic models: a statistical approach
NASA Astrophysics Data System (ADS)
Calvetti, D.; Somersalo, E.
2008-07-01
In this article, we give an overview of the Bayesian modelling of metabolic systems at the cellular and subcellular level. The models are based on detailed description of key biochemical reactions occurring in tissue, which may in turn be compartmentalized into cytosol and mitochondria, and of transports between the compartments. The classical deterministic approach which models metabolic systems as dynamical systems with Michaelis-Menten kinetics, is replaced by a stochastic extension where the model parameters are interpreted as random variables with an appropriate probability density. The inverse problem of cell metabolism in this setting consists of estimating the density of the model parameters. After discussing some possible approaches to solving the problem, we address the issue of how to assess the reliability of the predictions of a stochastic model by proposing an output analysis in terms of model uncertainties. Visualization modalities for organizing the large amount of information provided by the Bayesian dynamic sensitivity analysis are also illustrated.
Application of statistical physics approaches to complex organizations
NASA Astrophysics Data System (ADS)
Matia, Kaushik
The first part of this thesis studies two different kinds of financial markets, namely the stock market and the commodity market. Stock price fluctuations display certain scale-free statistical features that are not unlike those found in strongly interacting physical systems. The possibility that new insights can be gained using concepts and methods developed to understand scale-free physical phenomena has stimulated considerable research activity in the physics community. In the first part of this thesis a comparative study of stocks and commodities is performed in terms of the probability density function and correlations of price fluctuations. It is found that the probability density of the stock price fluctuation has a power-law functional form with an exponent 3, which is similar across different markets around the world. We present an autoregressive model to explain the origin of this power-law form of the probability density function of the price fluctuation. The first part also presents the discovery of unique features of the Indian economy, which we find displays a scale-dependent probability density function. In the second part of this thesis we quantify the statistical properties of fluctuations of complex systems like business firms and world scientific publications. We analyze the class size of these systems, where units agglomerate to form classes. We find that the width of the probability density function of the growth rate decays with the class size as a power law with an exponent beta, which is universal in the sense that beta is independent of the system studied. We also identify two further scaling exponents, one connecting the unit size to the class size and the other connecting the number of units to the class size, where, for example, products are units and firms are classes. Finally we propose a generalized preferential attachment model to describe the class size distribution. This model is successful in explaining the growth rate and class
Advanced Safeguards Approaches for New TRU Fuel Fabrication Facilities
Durst, Philip C.; Ehinger, Michael H.; Boyer, Brian; Therios, Ike; Bean, Robert; Dougan, A.; Tolk, K.
2007-12-15
This second report in a series of three reviews possible safeguards approaches for the new transuranic (TRU) fuel fabrication processes to be deployed at AFCF – specifically, the ceramic TRU (MOX) fuel fabrication line and the metallic (pyroprocessing) line. The most common TRU fuel has been fuel composed of mixed plutonium and uranium dioxide, referred to as “MOX”. However, under the Advanced Fuel Cycle projects custom-made fuels with higher contents of neptunium, americium, and curium may also be produced to evaluate if these “minor actinides” can be effectively burned and transmuted through irradiation in the ABR. A third and final report in this series will evaluate and review the advanced safeguards approach options for the ABR. In reviewing and developing the advanced safeguards approach for the new TRU fuel fabrication processes envisioned for AFCF, the existing international (IAEA) safeguards approach at the Plutonium Fuel Production Facility (PFPF) and the conceptual approach planned for the new J-MOX facility in Japan have been considered as a starting point of reference. The pyro-metallurgical reprocessing and fuel fabrication process at EBR-II near Idaho Falls also provided insight for safeguarding the additional metallic pyroprocessing fuel fabrication line planned for AFCF.
Clusterization of water molecules as deduced from statistical mechanical approach
NASA Astrophysics Data System (ADS)
Krasnoholovets, Volodymyr
2004-12-01
Using the methods of statistical mechanics we have shown that a homogeneous water network is unstable and spontaneously disintegrates to the nonhomogeneous state (i.e. peculiar clusters), which can be treated as an ordinary state of liquid water. The major peculiarity of the concept is that it separates the paired potential into two independent components—the attractive potential and the repulsive one, which in turn should feature a very different dependence on the distance from the particle (a water molecule in the present case). We choose the interaction potential as a combination of the ionic crystal potential and the vibratory potential associated with the elastic properties of the water system as a whole. The number ℵ of water molecules that enters a cluster is calculated as a function of several parameters, such as the dielectric constant, the mass of a water molecule, the distance between nearest molecules, and the vibrations of nearest molecules in their nodes. The number of H2O molecules that comprise a cluster is estimated as about ℵ ≈ 900, which agrees with the available experimental data.
Glass viscosity calculation based on a global statistical modelling approach
Fluegel, Alex
2007-02-01
A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often over-estimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published High temperature glass melt property database for process modeling by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights in the mixed-alkali effect are provided.
[Statistical Process Control applied to viral genome screening: experimental approach].
Reifenberg, J M; Navarro, P; Coste, J
2001-10-01
During the National Multicentric Study concerning the introduction of NAT for HCV and HIV-1 viruses in blood donation screening, which was supervised by the Medical and Scientific departments of the French Blood Establishment (Etablissement français du sang, EFS), Transcription-Mediated Amplification (TMA) technology (Chiron/Gen Probe) was tested in the Molecular Biology Laboratory of Montpellier, EFS Pyrénées-Méditerranée. After a preliminary phase of qualification of the material and training of the technicians, routine screening of homologous blood and apheresis donations using this technology was applied for two months. In order to evaluate the different NAT systems, exhaustive daily operations and data were registered. Among these, the luminescence results, expressed as RLU, of the positive and negative calibrators and the associated internal controls were analysed using control charts, a Statistical Process Control method, which allows process drift to be displayed rapidly and the appearance of incidents to be anticipated. This study demonstrated the value of these quality control methods, mainly used for industrial purposes, for following and increasing the quality of any transfusion process. It also showed the difficulties of post-hoc investigation of uncontrolled sources of variation in a process that was experimental. Such tools are in total accordance with the new version of the ISO 9000 norms, which are particularly focused on the use of adapted indicators for process control, and could be extended to other transfusion activities, such as blood collection and component preparation. PMID:11729395
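The control-chart idea described above can be sketched with an individuals (Shewhart) chart: a center line at the historical mean and control limits at ±3 standard deviations, with later points flagged when they fall outside. The RLU values below are invented, and the sample standard deviation is used instead of the classical moving-range estimate for brevity:

```python
import numpy as np

def shewhart_limits(values, n_sigma=3.0):
    """Center line and lower/upper control limits for an individuals chart."""
    center = float(np.mean(values))
    sigma = float(np.std(values, ddof=1))
    return center - n_sigma * sigma, center, center + n_sigma * sigma

# Hypothetical daily RLU readings of a positive calibrator; the last value
# simulates a process drift that the chart should flag.
rlu = np.array([512.0, 498.0, 505.0, 520.0, 490.0, 515.0,
                508.0, 495.0, 502.0, 610.0])
lcl, center, ucl = shewhart_limits(rlu[:-1])   # limits from in-control history
out_of_control = (rlu < lcl) | (rlu > ucl)
```

In routine use, the limits are frozen from an in-control reference period, so a drifting calibrator shows up as points outside the limits before the assay fails outright.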
The interaction of physical properties of seawater via statistical approach
NASA Astrophysics Data System (ADS)
Hamzah, Firdaus Mohamad; Jaafar, Othman; Sabri, Samsul Rijal Mohd; Ismail, Mohd Tahir; Jaafar, Khamisah; Arbin, Norazman
2015-09-01
It is important to determine the relationships between physical parameters in marine ecology. Models and expert opinion are needed to explore the form of the relationship between two parameters, owing to the complexity of the ecosystems. These need justification with observed data over particular periods. Novel statistical techniques such as nonparametric regression are presented to investigate the ecological relationships. These are achieved by demonstrating the features of pH, salinity and conductivity in the Straits of Johor. The monthly data measurements from 2004 until 2013 at a chosen sampling location are examined. Testing for no-effect followed by linearity testing for the relationships between salinity and pH, conductivity and pH, and conductivity and salinity are carried out, with the ecological objectives of investigating the evidence of changes in each of the above physical parameters. The findings reveal the appropriateness of a smooth function to explain the variation of pH in response to changes in salinity, whilst the changes in conductivity with regard to different concentrations of salinity could be modelled parametrically. The analysis highlights the importance of both parametric and nonparametric models for assessing ecological response to environmental change in seawater.
Statistical approach to anatomical landmark extraction in AP radiographs
NASA Astrophysics Data System (ADS)
Bernard, Rok; Pernus, Franjo
2001-07-01
A novel method for the automated extraction of important geometrical parameters of the pelvis and hips from APR images is presented. The shape and intensity variations in APR images are encompassed by the statistical shape and appearance models built from a set of training images for each of the three anatomies, i.e., pelvis, right and left hip, separately. The identification of the pelvis and hips is defined as a flexible object recognition problem, which is solved by generating anatomically plausible object instances and matching them to the APR image. The criterion function minimizes the resulting match error and considers the object topology. The obtained flexible object defines the positions of anatomical landmarks, which are further used to calculate the hip joint contact stress. A leave-one-out test was used to evaluate the performance of the proposed method on a set of 26 APR images. The results show the method is able to properly treat image variations and can reliably and accurately identify anatomies in the image and extract the anatomical landmarks needed in the hip joint contact stress calculation.
Territorial developments based on graffiti: A statistical mechanics approach
NASA Astrophysics Data System (ADS)
Barbaro, Alethea B. T.; Chayes, Lincoln; D'Orsogna, Maria R.
2013-01-01
We study the well-known sociological phenomenon of gang aggregation and territory formation through an interacting agent system defined on a lattice. We introduce a two-gang Hamiltonian model where agents have red or blue affiliation but are otherwise indistinguishable. In this model, all interactions are indirect and occur only via graffiti markings, on-site as well as on nearest-neighbor locations. We also allow for gang proliferation and graffiti suppression. Within the context of this model, we show that gang clustering and territory formation may arise under specific parameter choices and that a phase transition may occur between well-mixed, possibly dilute configurations and well-separated, clustered ones. Using methods from statistical mechanics, we study the phase transition between these two qualitatively different scenarios. In the mean-field rendition of this model, we identify parameter regimes where the transition is first or second order. In all cases, we have found that the transitions are a consequence solely of the gang-to-graffiti couplings, implying that direct gang-to-gang interactions are not strictly necessary for gang territory formation; in particular, graffiti may be the sole driving force behind gang clustering. We further discuss possible sociological, as well as ecological, ramifications of our results.
Statistical approaches to short-term electricity forecasting
NASA Astrophysics Data System (ADS)
Kellova, Andrea
The study of short-term forecasting of electricity demand has played a key role in the economic optimization of the electric energy industry and is essential for power systems planning and operation. In electric energy markets, accurate short-term forecasting of electricity demand is necessary mainly for economic operations. Our focus is directed to the question of electricity demand forecasting in the Czech Republic. Firstly, we describe the current structure and organization of the Czech, as well as the European, electricity market. Secondly, we provide a comprehensive description of the most powerful external factors influencing electricity consumption. The choice of the most appropriate model is conditioned by these demand-determining factors. Thirdly, we build several types of multivariate forecasting models, both linear and nonlinear: linear regression models and artificial neural networks, respectively. Finally, we compare the forecasting power of both kinds of models using several statistical accuracy measures. Our results suggest that although electricity demand forecasting in the Czech Republic is, for the years considered, a nonlinear rather than a linear problem, simple linear models with nonlinear inputs can be adequate for practical purposes. This is confirmed by the values of the empirical loss function applied to the forecasting results.
Bayesian Statistical Approach To Binary Asteroid Orbit Determination
NASA Astrophysics Data System (ADS)
Dmitrievna Kovalenko, Irina; Stoica, Radu S.
2015-08-01
Orbit determination from observations is one of the classical problems in celestial mechanics. Deriving the trajectory of a binary asteroid with high precision is much more complicated than that of a single asteroid. Here we present a method of orbit determination based on a Markov chain Monte Carlo (MCMC) algorithm. This method can be used for preliminary orbit determination with a relatively small number of observations, or for adjustment of a previously determined orbit. The problem consists of determining a conditional a posteriori probability density given the observations. Applying Bayesian statistics, the a posteriori probability density of the binary asteroid's orbital parameters is proportional to the product of the a priori and likelihood probability densities. The likelihood function is related to the noise probability density and can be calculated from O-C deviations (Observed minus Calculated positions). The optionally used a priori probability density takes into account information about the population of discovered asteroids and is used to constrain the phase space of possible orbits. As the MCMC method, the Metropolis-Hastings algorithm has been applied, with a globally convergent coefficient added. The sequence of possible orbits is derived through the sampling of each orbital parameter and acceptance criteria. The method allows the phase space of every possible orbit to be characterized for each parameter. It can also be used to derive the single orbit with the highest probability density of orbital elements.
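A random-walk Metropolis-Hastings sampler of the kind described can be sketched in a few lines. The one-dimensional "orbital parameter" posterior below, a broad Gaussian prior times a Gaussian likelihood standing in for the O-C residual term, is purely illustrative:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps=20000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings sampler for a 1-D posterior density."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        proposal = x + step * rng.standard_normal()
        # Accept with probability min(1, post(proposal) / post(x)),
        # evaluated in log space for numerical stability.
        if np.log(rng.uniform()) < log_post(proposal) - log_post(x):
            x = proposal
        samples[i] = x
    return samples

# Toy posterior: prior N(0, 10^2) times a likelihood N(1.5, 0.3^2)
# standing in for the O-C (Observed minus Calculated) residual term.
log_post = lambda a: -0.5 * ((a - 1.5) / 0.3) ** 2 - 0.5 * (a / 10.0) ** 2
chain = metropolis_hastings(log_post, x0=0.0)
posterior_mean = chain[5000:].mean()           # discard burn-in
```

A real orbit fit samples all orbital elements jointly and evaluates the likelihood through an orbit propagator, but the accept/reject core is the same.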
Modeling Insurgent Dynamics Including Heterogeneity. A Statistical Physics Approach
NASA Astrophysics Data System (ADS)
Johnson, Neil F.; Manrique, Pedro; Hui, Pak Ming
2013-05-01
Despite the myriad complexities inherent in human conflict, a common pattern has been identified across a wide range of modern insurgencies and terrorist campaigns involving the severity of individual events, namely an approximate power law x^(-α) with exponent α ≈ 2.5. We recently proposed a simple toy model to explain this finding, built around the reported loose and transient nature of operational cells of insurgents or terrorists. Although it reproduces the 2.5 power law, this toy model assumes every actor is identical. Here we generalize this toy model to incorporate individual heterogeneity while retaining the model's analytic solvability. In the case of kinship or team rules guiding the cell dynamics, we find that this 2.5 analytic result persists; however, an interesting new phase transition emerges whereby the cell distribution undergoes a transition to a phase in which the individuals become isolated and hence all the cells have spontaneously disintegrated. Apart from extending our understanding of the empirical 2.5 result for insurgencies and terrorism, this work illustrates how other statistical physics models of human grouping might usefully be generalized in order to explore the effect of diverse human social, cultural or behavioral traits.
A Statistical Approach to Provide Individualized Privacy for Surveys
Esponda, Fernando; Huerta, Kael; Guerrero, Victor M.
2016-01-01
In this paper we propose an instrument for collecting sensitive data that allows for each participant to customize the amount of information that she is comfortable revealing. Current methods adopt a uniform approach where all subjects are afforded the same privacy guarantees; however, privacy is a highly subjective property with intermediate points between total disclosure and non-disclosure: each respondent has a different criterion regarding the sensitivity of a particular topic. The method we propose empowers respondents in this respect while still allowing for the discovery of interesting findings through the application of well-known inferential procedures. PMID:26824758
Statistical Approaches for Estimating Actinobacterial Diversity in Marine Sediments
Stach, James E. M.; Maldonado, Luis A.; Masson, Douglas G.; Ward, Alan C.; Goodfellow, Michael; Bull, Alan T.
2003-01-01
Bacterial diversity in a deep-sea sediment was investigated by constructing actinobacterium-specific 16S ribosomal DNA (rDNA) clone libraries from sediment sections taken 5 to 12, 15 to 18, and 43 to 46 cm below the sea floor at a depth of 3,814 m. Clones were placed into operational taxonomic unit (OTU) groups with ≥99% 16S rDNA sequence similarity; the cutoff value for an OTU was derived by comparing 16S rRNA homology with DNA-DNA reassociation values for members of the class Actinobacteria. Diversity statistics were used to determine how the level of dominance, species richness, and genetic diversity varied with sediment depth. The reciprocal of Simpson's index (1/D) indicated that the pattern of diversity shifted toward dominance from uniformity with increasing sediment depth. Nonparametric estimation of the species richness in the 5- to 12-, 15- to 18-, and 43- to 46-cm sediment sections revealed a trend of decreasing species number with depth, 1,406, 308, and 212 OTUs, respectively. Application of the LIBSHUFF program indicated that the 5- to 12-cm clone library was composed of OTUs significantly (P = 0.001) different from those of the 15- to 18- and 43- to 46-cm libraries. FST and phylogenetic grouping of taxa (P tests) were both significant (P < 0.00001 and P < 0.001, respectively), indicating that genetic diversity decreased with sediment depth and that each sediment community harbored unique phylogenetic lineages. It was also shown that even nonconservative OTU definitions result in severe underestimation of species richness; unique phylogenetic clades detected in one OTU group suggest that OTUs do not correspond to real ecological groups sensu Palys (T. Palys, L. K. Nakamura, and F. M. Cohan, Int. J. Syst. Bacteriol. 47:1145-1156, 1997). Mechanisms responsible for diversity and their implications are discussed. PMID:14532080
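The diversity statistics used above are straightforward to compute from OTU count vectors. The sketch below shows the reciprocal of Simpson's index and, as a representative nonparametric richness estimator (not necessarily the one the authors used), the Chao1 estimate; the count vectors are invented:

```python
import numpy as np

def inverse_simpson(counts):
    """Reciprocal of Simpson's index, 1/D = 1 / sum_i p_i^2."""
    p = np.asarray(counts, dtype=float) / np.sum(counts)
    return 1.0 / np.sum(p ** 2)

def chao1(counts):
    """Chao1 nonparametric richness estimate: S_obs + F1^2 / (2 * F2)."""
    counts = np.asarray(counts)
    s_obs = np.sum(counts > 0)
    f1 = np.sum(counts == 1)          # singleton OTUs
    f2 = np.sum(counts == 2)          # doubleton OTUs
    return s_obs + f1 ** 2 / (2.0 * max(f2, 1))

even = [10, 10, 10, 10]               # uniform community: 1/D = 4
skew = [37, 1, 1, 1]                  # dominance-shifted community: 1/D ≈ 1.17
```

A drop in 1/D with sediment depth, as reported above, reflects exactly this kind of shift from the even community toward the dominance-shifted one.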
Evaluation of current statistical approaches for predictive geomorphological mapping
NASA Astrophysics Data System (ADS)
Luoto, Miska; Hjort, Jan
2005-04-01
Predictive models are increasingly used in geomorphology, but systematic evaluations of novel statistical techniques are still limited. The aim of this study was to compare the accuracy of generalized linear models (GLM), generalized additive models (GAM), classification tree analysis (CTA), artificial neural networks (ANN) and multiple adaptive regression splines (MARS) in predictive geomorphological modelling. Five different distribution models for both non-sorted and sorted patterned ground were constructed on the basis of four terrain parameters and four soil variables. To evaluate the models, the original data set of 9997 squares of 1 ha in size was randomly divided into model training (70%, n=6998) and model evaluation sets (30%, n=2999). In general, active sorted patterned ground is clearly defined in upper fell areas with high slope angle and till soils. Active non-sorted patterned ground is more common in valleys with higher soil moisture and fine-scale concave topography. The predictive performance of each model was evaluated using the area under the receiver operating characteristic curve (AUC) and the Kappa value. The relatively high discrimination capacity of all models, AUC=0.85-0.88 and Kappa=0.49-0.56, implies that the models' predictions provide an acceptable index of sorted and non-sorted patterned ground occurrence. The best performance on the model calibration data for both data sets was achieved by CTA. However, when the predictive mapping ability was explored through the evaluation data set, the accuracy of CTA decreased markedly compared to the other modelling techniques. On the model evaluation data, MARS performed marginally best. Our results show that digital elevation model and soil data can be used to predict relatively robustly the fine-scale activity of patterned ground in a subarctic landscape. This indicates that predictive geomorphological modelling has the advantage of providing relevant and useful information on earth surface
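The two evaluation metrics used above, AUC and the Kappa value, can be computed without specialized libraries. A minimal sketch for the binary (presence/absence) case; the labels and scores below are hypothetical:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability that a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def cohens_kappa(labels, preds):
    """Agreement beyond chance between binary predictions and truth."""
    n = len(labels)
    po = sum(y == p for y, p in zip(labels, preds)) / n   # observed agreement
    p_yes = (sum(labels) / n) * (sum(preds) / n)          # chance "present"
    p_no = (1 - sum(labels) / n) * (1 - sum(preds) / n)   # chance "absent"
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)
```

For example, a classifier that ranks every presence above every absence gets AUC = 1.0, while kappa discounts the agreement expected by chance from the raw hit rate.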
Statistical physics approaches to quantifying sleep-stage transitions
NASA Astrophysics Data System (ADS)
Lo, Chung-Chuan
Sleep can be viewed as a sequence of transitions in a very complex neuronal system. Traditionally, studies of the dynamics of sleep control have focused on the circadian rhythm of sleep-wake transitions or on the ultradian rhythm of the sleep cycle. However, very little is known about the mechanisms responsible for the time structure or even the statistics of the rapid sleep-stage transitions that appear without periodicity. I study the time dynamics of sleep-wake transitions for different species, including humans, rats, and mice, and find that the wake and sleep episodes exhibit completely different behaviors: the durations of wake episodes are characterized by a scale-free power-law distribution, while the durations of sleep episodes have an exponential distribution with a characteristic time scale. The functional forms of the distributions of the sleep and wake durations hold for human subjects of different ages and for subjects with sleep apnea. They also hold for all the species I investigate. Surprisingly, all species have the same power-law exponent for the distribution of wake durations, but the exponential characteristic time of the distribution of sleep durations changes across species. I develop a stochastic model which accurately reproduces our empirical findings. The model suggests that the difference between the dynamics of the sleep and wake states arises from the constraints on the number of microstates in the sleep-wake system. I develop a measure of asymmetry in sleep-stage transitions using a transition probability matrix. I find that both normal and sleep apnea subjects are characterized by two types of asymmetric sleep-stage transition paths, and that the sleep apnea group exhibits less asymmetry in the sleep-stage transitions.
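The transition-probability-matrix analysis can be sketched as follows. The stage labels and the particular asymmetry score (mean absolute difference between P_ij and P_ji over stage pairs) are illustrative assumptions, not the dissertation's exact formula:

```python
STAGES = ["wake", "light", "deep", "rem"]

def transition_matrix(sequence):
    """Row-normalized transition probabilities between successive stages."""
    idx = {s: i for i, s in enumerate(STAGES)}
    counts = [[0] * len(STAGES) for _ in STAGES]
    for a, b in zip(sequence, sequence[1:]):
        counts[idx[a]][idx[b]] += 1
    return [[c / max(sum(row), 1) for c in row] for row in counts]

def asymmetry(P):
    """One simple asymmetry measure: mean |P_ij - P_ji| over distinct pairs.
    A fully time-reversible transition structure scores zero."""
    n = len(P)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(abs(P[i][j] - P[j][i]) for i, j in pairs) / len(pairs)
```

A strictly alternating wake/light record is symmetric (score 0), whereas a one-way path such as wake → light → deep → light → wake produces a nonzero score.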
A Nonequilibrium Statistical Thermodynamics Approach to Non-Gaussian Statistics in Space Plasmas.
NASA Astrophysics Data System (ADS)
Consolini, G.
2005-12-01
One of the most interesting aspects of magnetic field and plasma parameter fluctuations is the non-Gaussian shape of their Probability Distribution Functions (PDFs). This fact, along with the occurrence of scaling features, has been read as evidence of intermittency. In the past, several models have been proposed for the non-Gaussianity of the PDFs (Castaing et al., 1990; Frisch, 1996; Frisch & Sornette, 1997; Arimitsu & Arimitsu, 2000; Beck, 2000; Leubner & Vörös, 2005). Recently, starting from the idea of a randomized operational temperature, Beck & Cohen proposed superstatistics (Beck & Cohen, 2003; Beck, 2004) as the origin of non-Gaussian PDFs in nonequilibrium, long-range correlated systems. Here, the origin of non-Gaussian PDFs in space plasmas is discussed in the framework of composite thermodynamic systems, starting from the idea of a randomized operational temperature and using the concept of the Lévy transformation. This approach is motivated by recent theoretical and experimental evidence of multiscale magnetic and plasma structures in space plasmas (Chang, 1999; Chang et al., 2004). A novel shape of the small-scale PDFs is derived and compared with PDFs computed from magnetic field measurements in space plasmas.
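The superstatistics mechanism (a Gaussian whose inverse temperature itself fluctuates) can be demonstrated numerically. The gamma form for the inverse temperature and the parameter values below are assumptions for illustration, following the chi-squared superstatistics of Beck & Cohen; averaging over the fluctuating temperature produces a heavy-tailed, non-Gaussian marginal PDF:

```python
import random
import statistics

def superstatistical_sample(n, k=10.0, beta0=1.0, seed=0):
    """Draw x ~ N(0, 1/beta) with beta gamma-distributed (chi^2-type
    superstatistics, mean inverse temperature beta0). The marginal PDF
    is heavy-tailed (Tsallis-like) rather than Gaussian."""
    rng = random.Random(seed)
    xs = []
    for _ in range(n):
        beta = rng.gammavariate(k / 2.0, 2.0 * beta0 / k)  # shape, scale
        xs.append(rng.gauss(0.0, beta ** -0.5))
    return xs

def excess_kurtosis(xs):
    """Positive excess kurtosis signals a non-Gaussian, heavy-tailed PDF."""
    m = statistics.fmean(xs)
    var = statistics.fmean([(x - m) ** 2 for x in xs])
    return statistics.fmean([(x - m) ** 4 for x in xs]) / var ** 2 - 3.0
```

With k = 10 the marginal is Student-t-like with clearly positive excess kurtosis, whereas a fixed beta gives a Gaussian with excess kurtosis near zero.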
Improved Test Planning and Analysis Through the Use of Advanced Statistical Methods
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Maxwell, Katherine A.; Glass, David E.; Vaughn, Wallace L.; Barger, Weston; Cook, Mylan
2016-01-01
The goal of this work is, through computational simulations, to provide statistically-based evidence to convince the testing community that a distributed testing approach is superior to a clustered testing approach for most situations. For clustered testing, numerous, repeated test points are acquired at a limited number of test conditions. For distributed testing, only one or a few test points are requested at many different conditions. The statistical techniques of Analysis of Variance (ANOVA), Design of Experiments (DOE) and Response Surface Methods (RSM) are applied to enable distributed test planning, data analysis and test augmentation. The D-Optimal class of DOE is used to plan an optimally efficient single- and multi-factor test. The resulting simulated test data are analyzed via ANOVA and a parametric model is constructed using RSM. Finally, ANOVA can be used to plan a second round of testing to augment the existing data set with new data points. The use of these techniques is demonstrated through several illustrative examples. To date, many thousands of comparisons have been performed and the results strongly support the conclusion that the distributed testing approach outperforms the clustered testing approach.
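As an illustration of the ANOVA step applied to clustered testing (repeated points at a few conditions), a minimal one-way F statistic in pure Python; the groups below are hypothetical test data, not results from this work:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-condition mean square
    divided by within-condition mean square, over repeated test points."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three test conditions, three repeats each (hypothetical responses).
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
```

A large F relative to the F-distribution critical value indicates that the condition effect exceeds the replicate scatter, which is the decision the test-planning analysis relies on.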
ERIC Educational Resources Information Center
Perrett, Jamis J.
2012-01-01
This article demonstrates how textbooks differ in their description of the term "experimental unit". Advanced Placement Statistics teachers and students are often limited in their statistical knowledge by the information presented in their classroom textbook. Definitions and descriptions differ among textbooks as well as among different editions…
Statistical Physics Approaches to Respiratory Dynamics and Lung Structure
NASA Astrophysics Data System (ADS)
Suki, Bela
2004-03-01
The lung consists of a branching airway tree embedded in viscoelastic tissue and provides life-sustaining gas exchange to the body. In disease, its structure is damaged and its function is compromised. We review two recent studies of lung structure and dynamics and how they change in disease. 1) We introduced a new acoustic imaging approach to study airway structure. When airways in a collapsed lung are inflated, they pop open in avalanches. A single opening emits a sound package called a crackle, consisting of an initial spike (s) followed by ringing. The distribution n(s) of s follows a power law, and the exponent of n(s) can be used to calculate the diameter ratio d, defined as the ratio of the diameter of an airway to that of its parent, averaged over all bifurcations. To test this method, we measured crackles in dogs, rabbits, rats and mice by inflating collapsed isolated lungs with air or helium while recording crackles with a microphone. In each species, n(s) follows a power law with an exponent that depends on species, but not on gas, in agreement with theory. Values of d from crackles compare well with those calculated from morphometric data, suggesting that this approach is suitable for studying airway structure in disease. 2) Using novel experiments and computer models, we studied pulmonary emphysema, which is caused by cigarette smoking. In emphysema, the elastic protein fibers of the tissue are actively remodeled by lung cells due to the chemicals present in smoke. We measured the mechanical properties of tissue sheets from normal and emphysematous lungs and imaged their structure, which appears as a heterogeneous hexagonal network of fibers. We found evidence that during uniaxial stretching, the collagen and elastin fibers in emphysematous tissue can fail at a critical stress, generating holes of various sizes (h). We developed network models of the failure process. When the failure is governed by mechanical forces, the distribution n(h) of h is a power law which
Biorefinery approach for coconut oil valorisation: a statistical study.
Bouaid, Abderrahim; Martínez, Mercedes; Aracil, José
2010-06-01
The biorefinery approach, consisting of transesterification using methanol and potassium hydroxide as catalyst, has been used to assess coconut oil valorisation. Due to the fatty acid composition of coconut oil, low (LMWME) and high (HMWME) molecular weight fatty acid methyl esters were obtained. Methyl laurate (78.30 wt.%) is the major component of the low molecular weight fraction. The influence of variables such as temperature and catalyst concentration on the production of both fractions has been studied and optimized by means of factorial design and response surface methodology (RSM). Two separate optima were found: a catalyst concentration of 0.9% and 1% and an operating temperature of 42.5 degrees C and 57 degrees C for LMWME and HMWME, respectively, giving conversion rates of 77.54% and 25.41%. The valuable components of LMWME may be recovered for sale as biolubricants or biosolvents, while the remaining fraction could be used as biodiesel, matching the corresponding European Standard. PMID:20129777
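The factorial design / RSM workflow can be sketched as follows. The design points and the quadratic yield surface are synthetic assumptions (chosen so the fit is exact), but the mechanics mirror the approach above: fit a second-order polynomial to designed runs, then solve for the stationary point of the fitted surface:

```python
import numpy as np

# Face-centered central composite design in temperature T (deg C) and
# catalyst concentration C (wt%). Yields y are synthetic, generated from
# an assumed quadratic surface whose maximum sits at T = 45, C = 0.95.
T = np.array([35.0, 35.0, 50.0, 50.0, 35.0, 50.0, 42.5, 42.5, 42.5])
C = np.array([0.6, 1.2, 0.6, 1.2, 0.9, 0.9, 0.6, 1.2, 0.9])
y = 80.0 - 0.02 * (T - 45.0) ** 2 - 40.0 * (C - 0.95) ** 2

# Second-order response surface: y = b0 + b1 T + b2 C + b3 T^2 + b4 C^2 + b5 TC
X = np.column_stack([np.ones_like(T), T, C, T**2, C**2, T * C])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point: set both partial derivatives of the fitted surface to zero.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
T_opt, C_opt = np.linalg.solve(H, -np.array([b[1], b[2]]))
```

On real data the measured yields replace the synthetic ones, and a negative-definite Hessian H confirms the stationary point is a maximum.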
Jensen-Feynman approach to the statistics of interacting electrons
Pain, Jean-Christophe; Gilleron, Franck; Faussurier, Gerald
2009-08-15
Faussurier et al. [Phys. Rev. E 65, 016403 (2001)] proposed to use a variational principle relying on the Jensen-Feynman (or Gibbs-Bogoliubov) inequality in order to optimize the treatment of two-particle interactions in the calculation of canonical partition functions. The method consists of a decomposition into a reference electron system and a first-order correction. The procedure proves very efficient for evaluating the free energy and the orbital populations. In this work, we present numerical applications of the method and propose to extend it using a reference energy which includes the interaction between two electrons inside a given orbital. This is possible thanks to our efficient recursion relation for the calculation of partition functions. We also show that a linear reference energy is usually sufficient to achieve good precision, and that the most promising way to improve the approach of Faussurier et al. is to apply Jensen's inequality to a more convenient convex function.
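The Jensen-Feynman (Gibbs-Bogoliubov) inequality underlying the method bounds the true free energy by that of the reference system plus a first-order correction:

```latex
\[
  F \;\le\; F_0 + \langle H - H_0 \rangle_0 ,
\]
```

where $H_0$ is the Hamiltonian of the exactly solvable reference electron system, $F_0$ its free energy, and $\langle \cdot \rangle_0$ the canonical average taken in the reference ensemble. The reference parameters are then varied to make the right-hand side, and hence the bound, as tight as possible.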
Predicting major element mineral/melt equilibria - A statistical approach
NASA Technical Reports Server (NTRS)
Hostetler, C. J.; Drake, M. J.
1980-01-01
Empirical equations have been developed for calculating the mole fractions of NaO0.5, MgO, AlO1.5, SiO2, KO0.5, CaO, TiO2, and FeO in a solid phase of initially unknown identity given only the composition of the coexisting silicate melt. The approach involves a linear multivariate regression analysis in which solid composition is expressed as a Taylor series expansion of the liquid compositions. An internally consistent precision of approximately 0.94 is obtained, that is, the nature of the liquidus phase in the input data set can be correctly predicted for approximately 94% of the entries. The composition of the liquidus phase may be calculated to better than 5 mol % absolute. An important feature of this 'generalized solid' model is its reversibility; that is, the dependent and independent variables in the linear multivariate regression may be inverted to permit prediction of the composition of a silicate liquid produced by equilibrium partial melting of a polymineralic source assemblage.
Whole-genome CNV analysis: advances in computational approaches
Pirooznia, Mehdi; Goes, Fernando S.; Zandi, Peter P.
2015-01-01
Accumulating evidence indicates that DNA copy number variation (CNV) is likely to make a significant contribution to human diversity and also play an important role in disease susceptibility. Recent advances in genome sequencing technologies have enabled the characterization of a variety of genomic features, including CNVs. This has led to the development of several bioinformatics approaches to detect CNVs from next-generation sequencing data. Here, we review recent advances in CNV detection from whole genome sequencing. We discuss the informatics approaches and current computational tools that have been developed as well as their strengths and limitations. This review will assist researchers and analysts in choosing the most suitable tools for CNV analysis as well as provide suggestions for new directions in future development. PMID:25918519
P37: Locally advanced thymoma-robotic approach
Asaf, Belal B.; Kumar, Arvind
2015-01-01
Background The conventional approach to locally advanced thymoma has been via a sternotomy. VATS and robotic thymectomies have been described but are typically reserved for patients with myasthenia gravis only or for small, encapsulated thymic tumors. There have been few reports of minimally invasive resection of locally advanced thymomas. Our objective is to present a case in which a large, locally advanced thymoma was resected en bloc with the pericardium employing a robotic-assisted thoracoscopic approach. Methods This report describes an asymptomatic 29-year-old female found to have an 11 cm anterior mediastinal mass on CT scan. A right-sided, 4-port robotic approach was utilized with the camera port in the 5th intercostal space anterior axillary line and two accessory ports for robotic arms 1 and 2 in the 3rd intercostal space anterior axillary line and 8th intercostal space anterior axillary line. A 5 mm port was used between the camera and 2nd robotic arm for assistance. On exploration the mass was found to be adherent to the pericardium and was resected en bloc via anterior pericardiectomy. Her post-operative course was uncomplicated, and she was discharged home on postoperative day 1. Results Final pathology revealed an 11 cm × 7.5 cm × 3.0 cm WHO class B2 thymoma invading the pericardium, TNM stage T3N0M0, with negative margins. The patient was subsequently sent to receive 5,040 cGy of adjuvant radiation, and follow-up CT scan 6 months postoperatively showed no evidence of disease. Conclusions Very little data exist demonstrating the efficacy of resecting locally advanced thymomas utilizing a minimally invasive approach. Our case demonstrates that a robotic-assisted thoracoscopic approach is feasible for performing thymectomy for locally advanced thymomas. This may help limit the morbidity of a trans-sternal approach while achieving comparable oncologic results. However, further studies are needed to evaluate its efficacy and long term
Advanced statistical process control: controlling sub-0.18-μm lithography and other processes
NASA Astrophysics Data System (ADS)
Zeidler, Amit; Veenstra, Klaas-Jelle; Zavecz, Terrence E.
2001-08-01
access of the analysis to include the external variables involved in CMP, deposition, etc. We then applied yield analysis methods to identify the significant lithography-external process variables from the history of lots, subsequently adding the identified process variables to the signatures database and to the PPC calculations. With these improvements, the authors anticipate a 50% improvement of the process window. This improvement results in a significant reduction of rework and improved yield, depending on process demands and equipment configuration. A statistical theory that explains the PPC is then presented. This theory can be used to simulate a general PPC application. In conclusion, the PPC concept is not limited to lithography or semiconductors. In fact, it is applicable to any production process that is signature biased (chemical industry, car industry, etc.). Requirements for PPC are large-scale data collection, a controllable process for which tuning every lot is not prohibitively expensive, and the ability to employ feedback calculations. PPC is a major change in the process management approach and will therefore first be employed where the need is high and the return on investment is very fast. The best industry to start with is semiconductors, and the most likely process area to start with is lithography.
Papa, Lesther A.; Litson, Kaylee; Lockhart, Ginger; Chassin, Laurie; Geiser, Christian
2015-01-01
Testing mediation models is critical for identifying potential variables that need to be targeted to effectively change one or more outcome variables. In addition, it is now common practice for clinicians to use multiple informant (MI) data in studies of statistical mediation. By coupling the use of MI data with statistical mediation analysis, clinical researchers can combine the benefits of both techniques. Integrating the information from MIs into a statistical mediation model creates various methodological and practical challenges. The authors review prior methodological approaches to MI mediation analysis in clinical research and propose a new latent variable approach that overcomes some limitations of prior approaches. An application of the new approach to mother, father, and child reports of impulsivity, frustration tolerance, and externalizing problems (N = 454) is presented. The results showed that frustration tolerance mediated the relationship between impulsivity and externalizing problems. The new approach allows for a more comprehensive and effective use of MI data when testing mediation models. PMID:26617536
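The product-of-coefficients logic behind statistical mediation can be sketched for a single informant (the paper's latent-variable multiple-informant model is more elaborate); variable names and the simulated data are illustrative assumptions:

```python
import numpy as np

def indirect_effect(x, m, y):
    """Classic mediation decomposition: a is the slope of M ~ X, b is the
    slope of M in Y ~ M + X; a*b estimates the indirect effect of X on Y
    transmitted through the mediator M."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]),
                        m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), m, x]),
                        y, rcond=None)[0][1]
    return a, b, a * b
```

For instance, with impulsivity as X, frustration tolerance as M, and externalizing problems as Y (roles taken from the abstract), a nonzero a*b is the quantity whose significance mediation tests assess, typically via bootstrap confidence intervals.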
Statistical methods and neural network approaches for classification of data from multiple sources
NASA Technical Reports Server (NTRS)
Benediktsson, Jon Atli; Swain, Philip H.
1990-01-01
Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. A problem with using conventional multivariate statistical approaches for classification of data of multiple types is in general that a multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability but most statistical classification methods do not have a mechanism for this. This research focuses on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Secondly, this research focuses on neural network models. The neural networks are distribution free since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically take care of the problem involving how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.
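The consensus-theoretic weighting of data sources can be sketched with a linear opinion pool; the reliability weights here are illustrative placeholders for the reliability measures the research investigates:

```python
def linear_opinion_pool(posteriors, reliabilities):
    """Consensus rule: combine per-source class posteriors with weights
    proportional to each source's reliability. Each element of
    `posteriors` is one source's probability vector over the classes."""
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    n_classes = len(posteriors[0])
    return [sum(w * p[c] for w, p in zip(weights, posteriors))
            for c in range(n_classes)]

# Two sources disagree about a two-class pixel; the more reliable
# source (weight 0.9 vs 0.3) dominates the consensus.
combined = linear_opinion_pool([[0.8, 0.2], [0.3, 0.7]], [0.9, 0.3])
```

The pooled vector remains a valid probability distribution, and classification proceeds by taking the class with the largest combined posterior.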
Dual-band, infrared buried mine detection using a statistical pattern recognition approach
Buhl, M.R.; Hernandez, J.E.; Clark, G.A.; Sengupta, S.K.
1993-08-01
The main objective of this work was to detect surrogate land mines, which were buried in clay and sand, using dual-band, infrared images. A statistical pattern recognition approach was used to achieve this objective. This approach is discussed and results of applying it to real images are given.
Advanced Stirling Convertor Dynamic Test Approach and Results
NASA Technical Reports Server (NTRS)
Meer, David W.; Hill, Dennis; Ursic, Joseph J.
2010-01-01
The U.S. Department of Energy (DOE), Lockheed Martin Corporation (LM), and NASA Glenn Research Center (GRC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. As part of the extended operation testing of this power system, the Advanced Stirling Convertors (ASC) at NASA GRC undergo a vibration test sequence intended to simulate the vibration history that an ASC would experience when used in an ASRG for a space mission. This sequence includes testing at workmanship and flight acceptance levels interspersed with periods of extended operation to simulate prefueling and post fueling. The final step in the test sequence utilizes additional testing at flight acceptance levels to simulate launch. To better replicate the acceleration profile seen by an ASC incorporated into an ASRG, the input spectra used in testing the convertors was modified based on dynamic testing of the ASRG Engineering Unit (ASRG EU) at LM. This paper outlines the overall test approach, summarizes the test results from the ASRG EU, describes the incorporation of those results into the test approach, and presents the results of applying the test approach to the ASC-1 #3 and #4 convertors. The test results include data from several accelerometers mounted on the convertors as well as the piston position and output power variables.
NASA Astrophysics Data System (ADS)
Stefani, Jerry A.; Poarch, Scott; Saxena, Sharad; Mozumder, P. K.
1994-09-01
An advanced multivariable off-line process control system, which combines traditional Statistical Process Control (SPC) with feedback control, has been applied to the CVD tungsten process on an Applied Materials Centura reactor. The goal of the model-based controller is to compensate for shifts in the process and maintain the wafer state responses on target. In the present application the controller employs measurements made on test wafers by off-line metrology tools to track the process behavior. This is accomplished by using model-based SPC, which compares the measurements with predictions obtained from empirically-derived process models. For CVD tungsten, a physically-based modeling approach was employed based on the kinetically-limited H2 reduction of WF6. On detecting a statistically significant shift in the process, the controller calculates adjustments to the settings to bring the process responses back on target. To achieve this, a few additional test wafers are processed at slightly different settings than the nominal. This local experiment allows the models to be updated to reflect the current process performance. The model updates are expressed as multiplicative or additive changes in the process inputs and a change in the model constant. This approach to model updating not only tracks the present process/equipment state, but also provides some diagnostic capability regarding the cause of the process shift. The updated models are used by an optimizer to compute new settings to bring the responses back to target. The optimizer is capable of incrementally entering controllables into the strategy, reflecting the degree to which the engineer desires to manipulate each setting. The capability of the controller to compensate for shifts in the CVD tungsten process has been demonstrated. Targets for film bulk resistivity and deposition rate were maintained while satisfying constraints on film stress and WF6 conversion efficiency.
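The control loop described above (compare off-line measurements with model predictions, flag statistically significant shifts, then compute a corrective setting change) can be caricatured in a few lines. The 3-sigma rule and the first-order gain inversion are generic SPC/feedback devices, not the exact controller used in this work:

```python
def detect_shift(measured, predicted, sigma, k=3.0):
    """Model-based SPC: flag a shift when the residual between the
    measurement and the model prediction exceeds k standard deviations."""
    return abs(measured - predicted) > k * sigma

def adjust_setting(target, response, gain):
    """First-order feedback: change the input setting by the response
    error divided by the local process gain (d response / d setting)."""
    return (target - response) / gain
```

In a run-to-run scheme, a flagged shift would first trigger a local experiment to re-estimate the model (and hence the gain) before the adjustment is applied.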
Unal, Cetin; Pasamehmetoglu, Kemal; Carmack, Jon
2010-01-01
Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model nuclear fuel systems is critical. In order to understand specific aspects of the nuclear fuel, fully coupled fuel simulation codes are required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding and predictive capability for simulating the phase and microstructural behavior of the nuclear fuel system materials and matrices. The purpose of this paper is to identify the modeling and simulation approach needed to deliver predictive tools for advanced fuels development. Coordination between experimental nuclear fuel design and development experts and computational fuel modeling and simulation experts is a critical aspect of the approach; it naturally leads to an integrated, goal-oriented, science-based R&D effort and strengthens both the experimental and computational work. The Advanced Fuels Campaign (AFC) and the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Integrated Performance and Safety Code (IPSC) are working together to determine experimental data and modeling needs. The primary objective of the NEAMS Fuels IPSC project is to deliver a coupled, three-dimensional, predictive computational platform for modeling the fabrication and both normal and abnormal operation of nuclear fuel pins and assemblies, applicable to both existing and future reactor fuel designs. The science-based program is pursuing the development of an integrated multi-scale, multi-physics modeling and simulation platform for nuclear fuels. This overview paper discusses the vision, goals, and approaches for developing and implementing this new capability.
NASA Astrophysics Data System (ADS)
Andronov, I. L.; Chinarova, L. L.; Kudashkina, L. S.; Marsakova, V. I.; Tkachenko, M. G.
2016-06-01
We have elaborated a set of new algorithms and programs for advanced time series analysis of (generally) multi-component, multi-channel observations with irregularly spaced times of observation, which is a common case for large photometric surveys. These methods for periodogram, scalegram, wavelet, and autocorrelation analysis, as well as for "running" or "sub-interval" local approximations, were previously reviewed in (2003ASPC..292..391A). For an approximation of the phase light curves of nearly-periodic pulsating stars, we use a Trigonometric Polynomial (TP) fit of the statistically optimal degree and an initial period improvement using differential corrections (1994OAP.....7...49A). For the determination of parameters of "characteristic points" (minima, maxima, crossings of some constant value, etc.) we use a set of methods reviewed in 2005ASPC..335...37A. Results of the analysis of the catalogs compiled using these programs are presented in 2014AASP....4....3A. For more complicated signals, we use "phenomenological approximations" with "special shapes" based on functions defined on sub-intervals rather than on the complete interval. E.g., for Algol-type stars we developed the NAV ("New Algol Variable") algorithm (2012Ap.....55..536A, 2012arXiv1212.6707A, 2015JASS...32..127A), which was compared to the common methods of Trigonometric Polynomial Fit (TP) or local Algebraic Polynomial (A) fit of a fixed or (alternately) statistically optimal degree. The method allows one to determine the minimal set of parameters required for the "General Catalogue of Variable Stars", as well as an extended set of phenomenological and astrophysical parameters which may be used for classification. In total, more than 1,900 variable stars have been studied by our group using these methods in the frame of the "Inter-Longitude Astronomy" campaign (2010OAP....23....8A) and the "Ukrainian Virtual Observatory" project (2012KPCB...28...85V).
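A Trigonometric Polynomial fit of a given degree for irregularly spaced observations reduces to ordinary least squares on cosine/sine basis columns. A minimal sketch; the period, degree, and simulated light curve below are synthetic assumptions:

```python
import numpy as np

def tp_fit(t, mag, period, degree):
    """Least-squares Trigonometric Polynomial fit
    m(t) = a0 + sum_k [a_k cos(2 pi k t / P) + b_k sin(2 pi k t / P)],
    valid for irregularly spaced observation times t."""
    cols = [np.ones_like(t)]
    for k in range(1, degree + 1):
        w = 2.0 * np.pi * k * t / period
        cols += [np.cos(w), np.sin(w)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, mag, rcond=None)
    return coef, X @ coef
```

In practice the degree is chosen by a statistical criterion (as in the abstract's "statistically optimal degree") and the period is refined iteratively, e.g. by differential corrections.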
ERIC Educational Resources Information Center
Potter, James Thomson, III
2012-01-01
Research into teaching practices and strategies has been performed separately in AP Statistics and in K-12 online learning (Garfield, 2002; Ferdig, DiPietro, Black & Dawson, 2009). This study seeks to combine the two and build on the need for more investigation into online teaching and learning in specific content (Ferdig et al., 2009; DiPietro,…
"I am Not a Statistic": Identities of African American Males in Advanced Science Courses
NASA Astrophysics Data System (ADS)
Johnson, Diane Wynn
The United States Bureau of Labor Statistics (2010) expects new industries to generate approximately 2.7 million jobs in science and technology by the year 2018, and there is concern as to whether there will be enough trained individuals to fill these positions. A tremendous resource remains untapped: African American students, especially African American males (National Science Foundation, 2009). Historically, African American males have been omitted from the so-called science pipeline. Fewer African American males pursue a science discipline due, in part, to limiting factors they experience in school and at home (Ogbu, 2004). This is a case study of African American males who are enrolled in advanced science courses at a predominantly African American (84%) urban high school. Guided by expectancy-value theory (EVT) of achievement-related results (Eccles, 2009; Eccles et al., 1983), twelve African American male students in two advanced science courses were observed in their science classrooms weekly, participated in an in-depth interview, developed a presentation to share with students enrolled in a tenth grade science course, responded to an open-ended identity questionnaire, and were surveyed about their perceptions of school. Additionally, the students' teachers were interviewed, as were seven of the students' parents. The interview data analyses highlighted the important role of supportive parents (key socializers) who had high expectations for their sons and who pushed them academically. The students clearly attributed their enrollment in advanced science courses to their high regard for their science teachers, which included positive relationships, hands-on learning in class, and an inviting and encouraging learning environment. Additionally, other family members and coaches played important roles in these young men's lives. Students' PowerPoint(c) presentations to younger high school students on why they should take advanced science courses highlighted these
Classification of human colonic tissues using FTIR spectra and advanced statistical techniques
NASA Astrophysics Data System (ADS)
Zwielly, A.; Argov, S.; Salman, A.; Bogomolny, E.; Mordechai, S.
2010-04-01
One of the major public health hazards is colon cancer. There is a great need to develop new methods for its early detection. If colon cancer is detected and treated early, a cure rate of more than 90% can be achieved. In this study we used FTIR microspectroscopy (MSP), which has shown good potential over the last 20 years in the fields of medical diagnostics and early detection of abnormal tissues. A large database of FTIR microscopic spectra was acquired from 230 human colonic biopsies. Five different subgroups were included in our database: normal and cancer tissues as well as three stages of benign colonic polyps, namely mild, moderate and severe polyps, which are precursors of carcinoma. In this study we applied advanced mathematical and statistical techniques, including principal component analysis (PCA) and linear discriminant analysis (LDA), to human colonic FTIR spectra in order to differentiate among these subgroups' tissues. Good classification accuracy between the normal, polyp and cancer groups was achieved, with approximately an 85% success rate. Our results show that there is great potential for developing FTIR microspectroscopy as a simple, reagent-free, viable tool for early detection of colon cancer, in particular the early stages of premalignancy among benign colonic polyps.
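The PCA step used before LDA can be sketched via the SVD of the mean-centered spectra. The matrix layout (rows = biopsies, columns = wavenumbers) and the toy data are assumptions for illustration:

```python
import numpy as np

def pca_scores(spectra, n_components):
    """Project mean-centered spectra onto their leading principal
    components (right singular vectors); the resulting low-dimensional
    scores are then fed to a classifier such as LDA."""
    centered = spectra - spectra.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T
```

Because FTIR spectra have many more wavenumber channels than samples, this dimensionality reduction is what makes the subsequent discriminant analysis well-posed.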
A Novel Statistical Approach for Brain MR Images Segmentation Based on Relaxation Times
Ferraioli, Giampaolo; Pascazio, Vito
2015-01-01
Brain tissue segmentation in Magnetic Resonance Imaging is useful for a wide range of applications. Classical approaches exploit the gray levels image and implement criteria for differentiating regions. Within this paper a novel approach for brain tissue joint segmentation and classification is presented. Starting from the estimation of proton density and relaxation times, we propose a novel method for identifying the optimal decision regions. The approach exploits the statistical distribution of the involved signals in the complex domain. The technique, compared to classical threshold based ones, is able to globally improve the classification rate. The effectiveness of the approach is evaluated on both simulated and real datasets. PMID:26798631
The Advanced Statistical Trajectory Regional Air Pollution (ASTRAP) model simulates long-term transport and deposition of oxides of sulfur and nitrogen. It is a potential screening tool for assessing long-term effects on regional visibility from sulfur emission sources. However, a rigorou...
ERIC Educational Resources Information Center
McCarthy, Christopher J.; Lambert, Richard G.; Crowe, Elizabeth W.; McCarthy, Colleen J.
2010-01-01
This study examined the relationship of teachers' perceptions of coping resources and demands to job satisfaction factors. Participants were 158 Advanced Placement Statistics high school teachers who completed measures of personal resources for stress prevention, classroom demands and resources, job satisfaction, and intention to leave the field…
ERIC Educational Resources Information Center
Averitt, Sallie D.
This instructor guide, which was developed for use in a manufacturing firm's advanced technical preparation program, contains the materials required to present a learning module that is designed to prepare trainees for the program's statistical process control module by improving their basic math skills in working with line graphs and teaching…
"I am Not a Statistic": Identities of African American Males in Advanced Science Courses
NASA Astrophysics Data System (ADS)
Johnson, Diane Wynn
The United States Bureau of Labor Statistics (2010) expects new industries to generate approximately 2.7 million jobs in science and technology by the year 2018, and there is concern as to whether there will be enough trained individuals to fill these positions. A tremendous resource remains untapped: African American students, especially African American males (National Science Foundation, 2009). Historically, African American males have been omitted from the so-called science pipeline. Fewer African American males pursue a science discipline due, in part, to limiting factors they experience in school and at home (Ogbu, 2004). This is a case study of African American males who were enrolled in advanced science courses at a predominantly African American (84%) urban high school. Guided by expectancy-value theory (EVT) of achievement-related choices (Eccles, 2009; Eccles et al., 1983), twelve African American male students in two advanced science courses were observed weekly in their science classrooms, participated in an in-depth interview, developed a presentation to share with students enrolled in a tenth-grade science course, responded to an open-ended identity questionnaire, and were surveyed about their perceptions of school. Additionally, the students' teachers and seven of the students' parents were interviewed. The interview data analyses highlighted the important role of supportive parents (key socializers) who had high expectations for their sons and who pushed them academically. The students clearly attributed their enrollment in advanced science courses to their high regard for their science teachers, which included positive relationships, hands-on learning in class, and an inviting and encouraging learning environment. Additionally, other family members and coaches played important roles in these young men's lives. Students' PowerPoint presentations to younger high school students on why they should take advanced science courses highlighted these
How large is the gluon polarization in the statistical parton distributions approach?
Soffer, Jacques; Bourrely, Claude; Buccella, Franco
2015-04-10
We review the theoretical foundations of the quantum statistical approach to parton distributions and we show that by using some recent experimental results from Deep Inelastic Scattering, we are able to improve the description of the data by means of a new determination of the parton distributions. We will see that a large gluon polarization emerges, giving a significant contribution to the proton spin.
A Statistical Filtering Approach for Gravity Recovery and Climate Experiment (GRACE) Gravity Data
NASA Technical Reports Server (NTRS)
Davis, J. L.; Tamisiea, M. E.; Elosegui, P.; Mitrovica, J. X.; Hill, E. M.
2008-01-01
We describe and analyze a statistical filtering approach for GRACE data that uses a parametrized model for the temporal evolution of the GRACE coefficients. After least-squares adjustment, a statistical test is performed to assess the significance of the estimated parameters. If the test is passed, the parameters are used by the filter in the reconstruction of the field; otherwise they are rejected. The test is performed, and the filter is formed, separately for annual components of the model and the trend. This new approach is distinct from Gaussian smoothing since it uses the data themselves to test for specific components of the time-varying gravity field. The statistical filter appears inherently to remove most of the "stripes" present in the GRACE fields, although destriping the fields prior to filtering seems to help the trend recovery. We demonstrate that the statistical filter produces reasonable maps for the annual components and trend. We furthermore assess the statistical filter for the annual components using ground-based GPS data in South America by assuming that the annual component of the gravity signal is associated only with groundwater storage. The un-destriped, statistically filtered field has a χ² value relative to the GPS data consistent with the best result from smoothing. In the space domain, the statistical filters are qualitatively similar to Gaussian smoothing. Unlike Gaussian smoothing, however, the statistical filter has significant sidelobes, including large negative sidelobes on the north-south axis, potentially revealing information on the errors, and the correlations among the errors, for the GRACE coefficients.
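The fit-then-test filtering step described above can be sketched for a single coefficient time series; the trend-plus-annual design matrix and the t-test threshold are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

def statistical_filter(t, y, alpha=0.05):
    """Fit trend + annual terms to one coefficient's time series and keep
    only the parameters that pass a significance test (illustrative sketch)."""
    from scipy import stats
    A = np.column_stack([np.ones_like(t), t,
                         np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    dof = len(t) - A.shape[1]
    sigma2 = np.sum((y - A @ beta) ** 2) / dof          # residual variance
    cov = sigma2 * np.linalg.inv(A.T @ A)               # parameter covariance
    tvals = beta / np.sqrt(np.diag(cov))
    keep = np.abs(tvals) > stats.t.ppf(1 - alpha / 2, dof)
    return np.where(keep, beta, 0.0)                    # reject insignificant terms

t = np.linspace(0, 5, 60)                               # years
rng = np.random.default_rng(1)
y = 2.0 * t + 3.0 * np.cos(2 * np.pi * t) + rng.normal(0, 0.3, t.size)
beta = statistical_filter(t, y)
print(beta)
```

Only the components supported by the data survive into the reconstruction, which is what distinguishes this from a fixed smoothing kernel.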
Simulating advanced life support systems to test integrated control approaches
NASA Astrophysics Data System (ADS)
Kortenkamp, D.; Bell, S.
Simulations allow for testing of life support control approaches before hardware is designed and built. Simulations also allow for the safe exploration of alternative control strategies during life support operation. As such, they are an important component of any life support research program and testbed. This paper describes a specific advanced life support simulation being created at NASA Johnson Space Center. It is a discrete-event simulation that is dynamic and stochastic. It simulates all major components of an advanced life support system, including crew (with variable ages, weights and genders), biomass production (with scalable plantings of ten different crops), water recovery, air revitalization, food processing, solid waste recycling and energy production. Each component is modeled as a producer of certain resources and a consumer of certain resources. The control system must monitor (via sensors) and control (via actuators) the flow of resources throughout the system to provide life support functionality. The simulation is written in an object-oriented paradigm that makes it portable, extensible and reconfigurable.
Reliability Demonstration Approach for Advanced Stirling Radioisotope Generator
NASA Technical Reports Server (NTRS)
Ha, Chuong; Zampino, Edward; Penswick, Barry; Spronz, Michael
2010-01-01
Developed for future space missions as a high-efficiency power system, the Advanced Stirling Radioisotope Generator (ASRG) has a design life requirement of 14 yr in space following a potential storage of 3 yr after fueling. In general, the demonstration of long-life dynamic systems remains difficult, in part due to the perception that the wearout of moving parts cannot be minimized and the associated failures are unpredictable. This paper shows that a combination of systematic analytical methods, extensive experience gained from technology development, and well-planned tests can be used to ensure a high level of reliability for the ASRG. With this approach, all potential risks from each life phase of the system are evaluated and their mitigation adequately addressed. This paper also provides a summary of important test results obtained to date for the ASRG and the planned effort for system-level extended operation.
A Novel Approach to Material Development for Advanced Reactor Systems
Was, G.S.; Atzmon, M.; Wang, L.
1999-12-22
OAK B188 A Novel Approach to Material Development for Advanced Reactor Systems. Year one of this project had three major goals. First, to specify, order and install a new high current ion source for more rapid and stable proton irradiation. Second, to assess the use of low temperature irradiation and chromium pre-enrichment in an effort to isolate a radiation damage microstructure in stainless steels without the effects of RIS. Third, to prepare for the irradiation of reactor pressure vessel steel and Zircaloy. In year 1 quarter 1, the project goal was to order the high current ion source and to procure and prepare samples of stainless steel for low temperature proton irradiation.
A Novel Approach to Material Development for Advanced Reactor Systems
Was, G.S.; Atzmon, M.; Wang, L.
2000-06-27
OAK B188 A Novel Approach to Material Development for Advanced Reactor Systems. Year one of this project had three major goals. First, to specify, order and install a new high current ion source for more rapid and stable proton irradiation. Second, to assess the use of low temperature irradiation and chromium pre-enrichment in an effort to isolate a radiation damage microstructure in stainless steel without the effects of RIS. Third, to initiate irradiation of reactor pressure vessel steel and Zircaloy. In year 1 quarter 3, the project goal was to complete irradiation of model alloys of RPV steels for a range of doses and begin sample characterization. We also planned to prepare samples for microstructure isolation in stainless steels, and to identify sources of Zircaloy for irradiation and characterization.
NASA Astrophysics Data System (ADS)
Plotnikov, M. Yu.; Shkarupa, E. V.
2015-11-01
Presently, the direct simulation Monte Carlo (DSMC) method is widely used for solving rarefied gas dynamics problems. As applied to steady-state problems, a feature of this method is the use of dependent sample values of random variables for the calculation of macroparameters of gas flows. A new combined approach to estimating the statistical error of the method is proposed that requires practically no additional computation and is applicable for any degree of probabilistic dependence among the sample values. Features of the proposed approach are analyzed theoretically and numerically. The approach is tested on the classical Fourier problem and the problem of supersonic flow of a rarefied gas through a permeable obstacle.
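The core difficulty named above — correlated samples inflate the error of a naive i.i.d. estimate — can be illustrated with the integrated autocorrelation time; this is a generic sketch of the idea, not the specific combined estimator proposed in the paper:

```python
import numpy as np

def correlated_mean_error(x, max_lag=50):
    """Standard error of the mean for a correlated sample, via the
    integrated autocorrelation time (generic sketch of the idea)."""
    x = np.asarray(x, float)
    n = x.size
    xc = x - x.mean()
    var = xc @ xc / n
    tau = 1.0
    for k in range(1, max_lag):
        rho = (xc[:-k] @ xc[k:]) / ((n - k) * var)
        if rho < 0.05:              # truncate once correlation is negligible
            break
        tau += 2.0 * rho
    n_eff = n / tau                 # effective number of independent samples
    return np.sqrt(var / n_eff)

# AR(1) series: strongly dependent samples, as in steady-state DSMC averaging.
rng = np.random.default_rng(2)
phi, n = 0.9, 20000
x = np.empty(n); x[0] = 0.0
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()
err = correlated_mean_error(x)
naive = x.std(ddof=1) / np.sqrt(n)
print(err, naive)   # err is several times larger than the naive i.i.d. error
```

Ignoring the dependence (using `naive`) would substantially understate the true statistical error of the time-averaged macroparameter.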
A Statistical Approach to Identifying Compact Objects in X-ray Binaries
NASA Astrophysics Data System (ADS)
Vrtilek, Saeqa D.
2013-04-01
A standard approach towards statistical inferences in astronomy has been the application of Principal Components Analysis (PCA) to reduce dimensionality. However, for non-linear distributions this is not always an effective approach. A non-linear technique called "diffusion maps" (Freeman et al. 2009; Richards et al. 2009; Lee & Waterman 2010), a robust eigenmode-based framework, allows retention of the full "connectivity" of the data points. Through this approach we define the highly non-linear geometry of X-ray binaries in a color-color-intensity diagram in an efficient and statistically sound manner, providing a broadly applicable means of distinguishing between black holes and neutron stars in Galactic X-ray binaries.
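A minimal sketch of the diffusion-map construction referenced above (Gaussian kernel, Markov normalization, leading eigenvectors); the toy two-arc data and the kernel bandwidth are illustrative assumptions, not the X-ray binary color-color-intensity data:

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2, t=1):
    """Minimal diffusion-map embedding: Gaussian kernel, row-stochastic
    normalization, and eigendecomposition of the transition matrix."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)       # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Drop the trivial constant eigenvector; scale by eigenvalues^t.
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1] ** t

# Toy nonlinear data: two concentric arcs that a linear PCA cannot separate.
rng = np.random.default_rng(3)
theta = rng.uniform(0, np.pi, 100)
inner = np.c_[np.cos(theta), np.sin(theta)] * 1.0
outer = np.c_[np.cos(theta), np.sin(theta)] * 3.0
X = np.vstack([inner, outer]) + rng.normal(0, 0.05, (200, 2))
emb = diffusion_map(X, eps=0.5)
print(emb.shape)
```

The first diffusion coordinate separates the two arcs because the kernel preserves the connectivity of each curve rather than Euclidean directions of maximum variance.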
Murari, A; Gelfusa, M; Peluso, E; Gaudio, P; Mazon, D; Hawkes, N; Point, G; Alper, B; Eich, T
2014-12-01
In a Tokamak the configuration of the magnetic fields remains the key element to improve performance and to maximise the scientific exploitation of the device. On the other hand, the quality of the reconstructed fields depends crucially on the measurements available. Traditionally, in the least-squares minimisation phase of the algorithms used to obtain the magnetic field topology, all the diagnostics are given the same weights, apart from a corrective factor taking into account the error bars. This assumption unduly penalises complex diagnostics, such as polarimetry, which have a limited number of highly significant measurements. A completely new method to choose the weights to be given to the internal measurements of the magnetic fields, for improved equilibrium reconstructions, is presented in this paper. The approach is based on various statistical indicators applied to the residuals, the differences between the actual measurements and their estimates from the reconstructed equilibrium. The potential of the method is exemplified using the measurements of the Faraday rotation derived from the JET polarimeter. The results indicate quite clearly that the weights have to be determined carefully, since an inappropriate choice can have significant repercussions on the quality of the magnetic reconstruction both in the edge and in the core. These results confirm the limitations of the assumption that all the diagnostics have to be given the same weight, irrespective of the number of measurements they provide and the region of the plasma they probe. PMID:25554293
Lung volume reduction for advanced emphysema: surgical and bronchoscopic approaches.
Tidwell, Sherry L; Westfall, Elizabeth; Dransfield, Mark T
2012-01-01
Chronic obstructive pulmonary disease is the third leading cause of death in the United States, affecting more than 24 million people. Inhaled bronchodilators are the mainstay of therapy; they improve symptoms and quality of life and reduce exacerbations. These, along with smoking cessation and long-term oxygen therapy for hypoxemic patients, are the only medical treatments definitively demonstrated to reduce mortality. Surgical approaches include lung transplantation and lung volume reduction; the latter has been shown to improve exercise tolerance, quality of life, and survival in highly selected patients with advanced emphysema. Lung volume reduction surgery produces clinical benefits, but the procedure carries a short-term risk of mortality and a more significant risk of cardiac and pulmonary perioperative complications. Interest has been growing in the use of noninvasive, bronchoscopic methods to address the pathological hyperinflation that drives the dyspnea and exercise intolerance characteristic of emphysema. In this review, the mechanism by which lung volume reduction improves pulmonary function is outlined, along with the risks and benefits of the traditional surgical approach. In addition, the emerging bronchoscopic techniques for lung volume reduction are introduced and recent clinical trials examining their efficacy are summarized. PMID:22189668
NASA Astrophysics Data System (ADS)
Bourrely, Claude; Buccella, Franco; Soffer, Jacques
2011-04-01
We consider the extension of the statistical parton distributions to include their transverse momentum dependence, by using two different methods, one is based on our quantum statistical approach, the other on a relativistic covariant method. We take into account the effects of the Melosh-Wigner rotation for the polarized distributions. The results obtained can be compared with recent semi-inclusive deep inelastic scattering (DIS) data on the cross section and double longitudinal-spin asymmetries from JLab. We also give some predictions for future experiments on electron-neutron scattering.
NASA Astrophysics Data System (ADS)
Ruggles, Adam J.
2015-11-01
This paper presents improved statistical insight regarding the self-similar scalar mixing process of atmospheric hydrogen jets and the downstream region of under-expanded hydrogen jets. Quantitative planar laser Rayleigh scattering imaging is used to probe both jets. The self-similarity of statistical moments up to the sixth order (beyond the literature-established second order) is documented in both cases. This is achieved using a novel self-similar normalization method that facilitated a degree of statistical convergence typically limited to continuous, point-based measurements. This demonstrates that image-based measurements of a limited number of samples can be used for self-similar scalar mixing studies. Both jets exhibit the same radial trends of these moments, demonstrating that advanced atmospheric self-similarity can be applied in the analysis of under-expanded jets. Self-similar histograms away from the centerline are shown to be the combination of two distributions. The first is attributed to turbulent mixing. The second, a symmetric Poisson-type distribution centered on zero mass fraction, progressively becomes the dominant and eventually sole distribution at the edge of the jet. This distribution is attributed to shot noise-affected pure air measurements, rather than a diffusive superlayer at the jet boundary. This conclusion is reached after a rigorous measurement uncertainty analysis and inspection of pure air data collected with each hydrogen data set. A threshold based upon the measurement noise analysis is used to separate the turbulent and pure air data, and thus estimate intermittency. Four-parameter beta distributions are used to accurately represent the moments of the turbulent distribution. This combination of measured intermittency and four-parameter beta distributions constitutes a new, simple approach to model scalar mixing. Comparisons between global moments from the data and moments calculated using the proposed model show excellent
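The threshold/intermittency/beta-fit procedure described above can be sketched as follows; the synthetic mass-fraction sample, the noise level, and the threshold value are illustrative assumptions (and a two-parameter standard beta on [0, 1] stands in for the paper's four-parameter form):

```python
import numpy as np

# Sketch of the proposed mixing model: split samples into "pure air"
# (below a noise-based threshold) and turbulent mixing data, estimate the
# intermittency factor, and fit a beta distribution to the turbulent part
# by the method of moments.
rng = np.random.default_rng(4)
turbulent = rng.beta(a=2.0, b=5.0, size=8000)      # mixed-fluid mass fractions
pure_air = rng.normal(0.0, 0.005, size=2000)       # shot-noise around zero
sample = np.concatenate([turbulent, pure_air])

threshold = 0.02                                   # from the noise analysis
mixing = sample[sample > threshold]
intermittency = mixing.size / sample.size          # fraction of turbulent data

m, v = mixing.mean(), mixing.var()
common = m * (1 - m) / v - 1                       # beta method-of-moments fit
a_hat, b_hat = m * common, (1 - m) * common
print(intermittency, a_hat, b_hat)
```

The recovered shape parameters land near the values used to generate the turbulent part, and the intermittency estimate matches the known mixing fraction.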
A Statistical Approach for Testing Cross-Phenotype Effects of Rare Variants.
Broadaway, K Alaine; Cutler, David J; Duncan, Richard; Moore, Jacob L; Ware, Erin B; Jhun, Min A; Bielak, Lawrence F; Zhao, Wei; Smith, Jennifer A; Peyser, Patricia A; Kardia, Sharon L R; Ghosh, Debashis; Epstein, Michael P
2016-03-01
Increasing empirical evidence suggests that many genetic variants influence multiple distinct phenotypes. When cross-phenotype effects exist, multivariate association methods that consider pleiotropy are often more powerful than univariate methods that model each phenotype separately. Although several statistical approaches exist for testing cross-phenotype effects for common variants, there is a lack of similar tests for gene-based analysis of rare variants. In order to fill this important gap, we introduce a statistical method for cross-phenotype analysis of rare variants using a nonparametric distance-covariance approach that compares similarity in multivariate phenotypes to similarity in rare-variant genotypes across a gene. The approach can accommodate both binary and continuous phenotypes and further can adjust for covariates. Our approach yields a closed-form test whose significance can be evaluated analytically, thereby improving computational efficiency and permitting application on a genome-wide scale. We use simulated data to demonstrate that our method, which we refer to as the Gene Association with Multiple Traits (GAMuT) test, provides increased power over competing approaches. We also illustrate our approach using exome-chip data from the Genetic Epidemiology Network of Arteriopathy. PMID:26942286
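The similarity-comparison idea behind a GAMuT-style test can be sketched with linear kernels and a permutation null; note this is a simplified stand-in (the published test uses projection/kernel matrices with an analytic, closed-form p-value rather than permutation), and the toy genotype/phenotype data are assumptions:

```python
import numpy as np

def center(S):
    """Double-center a similarity matrix (Gower centering)."""
    n = S.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ S @ H

def cross_phenotype_test(Y, G, n_perm=500, seed=0):
    """Compare similarity in multivariate phenotypes with similarity in
    rare-variant genotypes via trace(P @ K); permutation-based p-value."""
    rng = np.random.default_rng(seed)
    P = center(Y @ Y.T)                  # phenotype similarity kernel
    K = center(G @ G.T)                  # genotype similarity kernel
    stat = np.trace(P @ K)
    perm = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(Y.shape[0])
        perm[i] = np.trace(P[np.ix_(idx, idx)] @ K)
    return (np.sum(perm >= stat) + 1) / (n_perm + 1)

# Toy data: one causal rare variant affecting two correlated phenotypes.
rng = np.random.default_rng(5)
n = 150
G = rng.binomial(1, 0.1, size=(n, 10)).astype(float)   # variant carriers
Y = np.c_[1.5 * G[:, 0] + rng.normal(size=n),
          1.5 * G[:, 0] + rng.normal(size=n)]
p = cross_phenotype_test(Y, G)
print(p)
```

When a variant truly drives several phenotypes at once, the phenotype and genotype similarity structures align and the test statistic falls far outside the permutation null.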
MacKinnon, David P.; Pirlott, Angela G.
2016-01-01
Statistical mediation methods provide valuable information about underlying mediating psychological processes, but the ability to infer that the mediator variable causes the outcome variable is more complex than widely known. Researchers have recently emphasized how violating assumptions about confounder bias severely limits causal inference of the mediator to dependent variable relation. Our article describes and addresses these limitations by drawing on new statistical developments in causal mediation analysis. We first review the assumptions underlying causal inference and discuss three ways to examine the effects of confounder bias when assumptions are violated. We then describe four approaches to address the influence of confounding variables and enhance causal inference, including comprehensive structural equation models, instrumental variable methods, principal stratification, and inverse probability weighting. Our goal is to further the adoption of statistical methods to enhance causal inference in mediation studies. PMID:25063043
NASA Technical Reports Server (NTRS)
Benediktsson, Jon A.; Swain, Philip H.; Ersoy, Okan K.
1990-01-01
Neural network learning procedures and statistical classification methods are applied and compared empirically in the classification of multisource remote sensing and geographic data. Statistical multisource classification by means of a method based on Bayesian classification theory is also investigated and modified. The modifications permit control of the influence of the data sources involved in the classification process. Reliability measures are introduced to rank the quality of the data sources. The data sources are then weighted according to these rankings in the statistical multisource classification. Four data sources are used in experiments: Landsat MSS data and three forms of topographic data (elevation, slope, and aspect). Experimental results show that the two approaches have unique advantages and disadvantages in this classification application.
Pulsipher, B.A.; Kuhn, W.L.
1987-02-01
Current planning for the liquid high-level nuclear wastes existing in the US includes processing in a liquid-fed ceramic melter to incorporate them into a high-quality glass, and placement in a deep geologic repository. The nuclear waste vitrification process requires assurance of a quality product with little or no final inspection. Statistical process control (SPC) is a quantitative approach to one quality assurance aspect of vitrified nuclear waste. This method for monitoring and controlling a process in the presence of uncertainties provides a statistical basis for decisions concerning product quality improvement. Statistical process control is shown to be a feasible and beneficial tool to help waste glass producers demonstrate that the vitrification process can be controlled sufficiently to produce an acceptable product. This quantitative aspect of quality assurance could be an effective means of establishing confidence in claims of a quality product. 2 refs., 4 figs.
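The basic SPC mechanism referenced above can be sketched with a Shewhart x-bar chart; the simulated glass-property data, subgroup size, and known process sigma are illustrative assumptions, not the melter-specific charts of the report:

```python
import numpy as np

def xbar_limits(subgroup_means, process_sd, n):
    """Shewhart x-bar chart: 3-sigma control limits for subgroup means
    (generic SPC sketch with a known within-subgroup standard deviation)."""
    center = np.mean(subgroup_means)
    sigma_xbar = process_sd / np.sqrt(n)
    return center - 3 * sigma_xbar, center, center + 3 * sigma_xbar

# Simulated waste-glass property measured in subgroups of 5.
rng = np.random.default_rng(6)
data = rng.normal(10.0, 0.2, size=(30, 5))     # 30 in-control subgroups
means = data.mean(axis=1)
lcl, cl, ucl = xbar_limits(means, 0.2, 5)
out_of_control = (means < lcl) | (means > ucl)
print(cl, out_of_control.sum())
```

An in-control process rarely triggers the 3-sigma limits, so any flagged subgroup is strong evidence of a real process shift rather than routine variation.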
A Challenging Surgical Approach to Locally Advanced Primary Urethral Carcinoma
Lucarelli, Giuseppe; Spilotros, Marco; Vavallo, Antonio; Palazzo, Silvano; Miacola, Carlos; Forte, Saverio; Matera, Matteo; Campagna, Marcello; Colamonico, Ottavio; Schiralli, Francesco; Sebastiani, Francesco; Di Cosmo, Federica; Bettocchi, Carlo; Di Lorenzo, Giuseppe; Buonerba, Carlo; Vincenti, Leonardo; Ludovico, Giuseppe; Ditonno, Pasquale; Battaglia, Michele
2016-01-01
Primary urethral carcinoma (PUC) is a rare and aggressive cancer, often underdetected and consequently unsatisfactorily treated. We report a case of advanced PUC, surgically treated with combined approaches. A 47-year-old man underwent transurethral resection of a urethral lesion with histological evidence of a poorly differentiated squamous cancer of the bulbomembranous urethra. Computed tomography (CT) and bone scans excluded metastatic spread of the disease but showed involvement of both corpora cavernosa (cT3N0M0). A radical surgical approach was advised, but the patient refused this and opted for chemotherapy. After 17 months the patient was referred to our department due to the evidence of a fistula in the scrotal area. CT scan showed bilateral metastatic disease in the inguinal, external iliac, and obturator lymph nodes as well as the involvement of both corpora cavernosa. Additionally, a fistula originating from the right corpus cavernosum extended to the scrotal skin. At this stage, the patient accepted the surgical treatment, consisting of different phases. Phase I: Radical extraperitoneal cystoprostatectomy with iliac-obturator lymph nodes dissection. Phase II: Creation of a urinary diversion through a Bricker ileal conduit. Phase III: Repositioning of the patient in the lithotomy position for an overturned Y skin incision, total penectomy, fistula excision, and "en bloc" removal of surgical specimens including the bladder, through the perineal breach. Phase IV: Right inguinal lymphadenectomy. The procedure lasted 9-and-a-half hours, was complication-free, and intraoperative blood loss was 600 mL. The patient was discharged 8 days after surgery. Pathological examination documented a T4N2M0 tumor. The clinical situation was stable during the first 3 months postoperatively but then metastatic spread occurred, not responsive to adjuvant chemotherapy, which led to the patient's death 6 months after surgery. Patients with advanced stage tumors of
Organic and inorganic nitrogen dynamics in soil - advanced Ntrace approach
NASA Astrophysics Data System (ADS)
Andresen, Louise C.; Björsne, Anna-Karin; Bodé, Samuel; Klemedtsson, Leif; Boeckx, Pascal; Rütting, Tobias
2016-04-01
Depolymerization of soil organic nitrogen (SON) into monomers (e.g. amino acids) is currently thought to be the rate-limiting step for the terrestrial nitrogen (N) cycle. The production of free amino acids (AA) is followed by AA mineralization to ammonium, which is an important fraction of the total N mineralization. Accurate assessment of depolymerization and AA mineralization rates is important for a better understanding of the rate-limiting steps. Recent developments in the 15N pool dilution techniques, based on 15N labelling of AAs, allow quantifying gross rates of SON depolymerization and AA mineralization (Wanek et al., 2010; Andresen et al., 2015) in addition to gross N mineralization. However, it is well known that the 15N pool dilution approach has limitations; in particular, gross rates of consumption processes (e.g. AA mineralization) are overestimated. This has consequences for evaluating the rate-limiting step of the N cycle, as well as for estimating the nitrogen use efficiency (NUE). Here we present a novel 15N tracing approach, which combines 15N-AA labelling with an advanced version of the 15N tracing model Ntrace (Müller et al., 2007) explicitly accounting for AA turnover in soil. This approach (1) provides a more robust quantification of gross depolymerization and AA mineralization and (2) suggests a more realistic estimate for the microbial NUE of amino acids. Advantages of the new 15N tracing approach will be discussed and further improvements will be identified. References: Andresen, L.C., Bodé, S., Tietema, A., Boeckx, P., and Rütting, T.: Amino acid and N mineralization dynamics in heathland soil after long-term warming and repetitive drought, SOIL, 1, 341-349, 2015. Müller, C., Rütting, T., Kattge, J., Laughlin, R. J., and Stevens, R. J.: Estimation of parameters in complex 15N tracing models via Monte Carlo sampling, Soil Biology & Biochemistry, 39, 715-726, 2007. Wanek, W., Mooshammer, M., Blöchl, A., Hanreich, A., and Richter
NASA Astrophysics Data System (ADS)
Demura, A. V.; Kadomtsev, M. B.; Lisitsa, V. S.; Shurygin, V. A.
2015-06-01
A universal statistical approach for the calculation of radiative and collisional processes involving multielectron ions in plasmas is developed. It is based on an atomic structure representation similar to that used for a condensed medium. The distribution of the local atomic electron density determines the set of elementary excitations with the classical plasma frequency. The statistical method is tested by calculations of the total electron-impact single-ionization cross sections, ionization rates, and radiative losses of various ions. In the coronal limit the radiative losses of heavy plasma impurities with any type of multielectron ion are determined by the excitation of collective atomic oscillations due to collisions with plasma electrons. It is shown that for low plasma densities the scatter of the total radiative losses of tungsten ions within the universal statistical approach does not exceed that of the results of current complex numerical codes over a wide range of plasma temperatures. A general expression for the radiative losses in the intermediate regime between the limiting cases of coronal and Boltzmann population distributions is derived as well. The total electron-impact ionization cross sections and ionization rates for ions of various charge states, for a wide range of elements from Ar to U, are compared with experimental data and data from conventional complex codes, showing satisfactory agreement. As the universal statistical method operates in terms of collective excitations, it implicitly includes direct and indirect ionization processes.
Time series expression analyses using RNA-seq: a statistical approach.
Oh, Sunghee; Song, Seongho; Grabowski, Gregory; Zhao, Hongyu; Noonan, James P
2013-01-01
RNA-seq is becoming the de facto standard approach for transcriptome analysis, with ever-decreasing cost. It has considerable advantages over conventional technologies (microarrays) because it allows for direct identification and quantification of transcripts. Many time series RNA-seq datasets have been collected to study the dynamic regulation of transcripts. However, statistically rigorous and computationally efficient methods are needed to explore the time-dependent changes of gene expression in biological systems. These methods should explicitly account for the dependencies of expression patterns across time points. Here, we discuss several methods that can be applied to model time-course RNA-seq data, including statistical evolutionary trajectory index (SETI), autoregressive time-lagged regression (AR(1)), and hidden Markov model (HMM) approaches. We use three real datasets and simulation studies to demonstrate the utility of these dynamic methods in temporal analysis. PMID:23586021
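The AR(1) idea mentioned above — modeling each time point as depending on the previous one — can be sketched in a few lines. This is a generic illustration, not the authors' implementation; the function name `fit_ar1` and the synthetic series are invented here.

```python
import numpy as np

def fit_ar1(series):
    """Fit x_t = a + b * x_(t-1) + e_t by ordinary least squares and
    return (intercept, lag coefficient, residual variance)."""
    x_prev, x_curr = series[:-1], series[1:]
    X = np.column_stack([np.ones_like(x_prev), x_prev])
    coef, *_ = np.linalg.lstsq(X, x_curr, rcond=None)
    resid = x_curr - X @ coef
    return coef[0], coef[1], resid.var(ddof=2)

# Synthetic expression profile relaxing toward a baseline;
# the estimated lag coefficient b should be near the true 0.6
rng = np.random.default_rng(0)
x = [10.0]
for _ in range(199):
    x.append(2.0 + 0.6 * x[-1] + rng.normal(0.0, 0.1))
a, b, s2 = fit_ar1(np.array(x))
```

A lag coefficient significantly different from zero indicates temporal dependence that an independence-across-time-points test would ignore.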
Tomlinson, Alan; Hair, Mario; McFadyen, Angus
2013-10-01
Dry eye is a multifactorial disease that requires a broad spectrum of test measures for its diagnosis and for monitoring treatment. However, studies have typically reported improvements in individual measures with treatment. Alternative approaches involve multiple, combined outcomes assessed by different statistical analyses. To assess the effect of various statistical approaches to the use of single and combined test measures in dry eye, this review reanalyzed measures from two previous studies (osmolarity, evaporation, tear turnover rate, and lipid film quality). These analyses assessed the measures as single variables within groups, pre- and post-intervention with a lubricant supplement, by creating combinations of these variables, and by validating these combinations with the combined sample of data from all groups of dry eye subjects. The effectiveness of single measures and combinations in the diagnosis of dry eye was also considered. PMID:24112230
Burn, K.W.
1995-01-01
The Direct Statistical Approach (DSA) to surface splitting and Russian roulette (RR) is one of the current routes toward automation in Monte Carlo and is currently applied to fixed-source particle transport problems. A general volumetric particle bifurcation capability has been inserted into the DSA surface parameter and cell models. The resulting extended DSA describes the second moment and time functions in terms of phase-space surface splitting/Russian roulette parameters (surface parameter model) or phase-space cell importances (cell model) in the presence of volumetric particle bifurcations, including both natural events [such as (n,xn) reactions or gamma production from neutron collisions] and artificial events (such as DXTRAN). At the same time, other limitations of the DSA models (concerning tally scores direct from the source and tracks surviving an event at which a tally score occurs) are removed. Given the second moment and time functions, the foregoing surface or cell parameters may then be optimized.
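For readers unfamiliar with the splitting/Russian roulette mechanics that the DSA optimizes, the basic weight-preserving step can be sketched as follows. This is a generic textbook illustration, not the DSA itself; the function name `adjust_population` and the scalar importance ratio are invented for the example.

```python
import random

def adjust_population(weights, importance_ratio, rng):
    """One surface splitting / Russian roulette step on a track population.

    Each particle is represented by its statistical weight w. Crossing into
    a region whose importance ratio is r > 1 splits it into ~r copies of
    weight w/r; r < 1 triggers Russian roulette (survive with probability r,
    weight w/r). Either way the expected total weight is preserved."""
    out = []
    for w in weights:
        r = importance_ratio
        if r >= 1.0:
            # integer part of r copies, plus one more with probability frac(r)
            n = int(r) + (1 if rng.random() < r - int(r) else 0)
            out.extend([w / r] * n)
        elif rng.random() < r:
            out.append(w / r)
    return out

rng = random.Random(0)
survivors = adjust_population([1.0] * 10, 0.4, rng)  # roulette-dominated
copies = adjust_population([1.0] * 10, 3.0, rng)     # 30 tracks of weight 1/3
```

The DSA's contribution is choosing these splitting/roulette parameters so as to minimize a second-moment (variance) times cost functional, rather than setting them by hand.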
Ni, Weiping; Yan, Weidong; Bian, Hui; Wu, Junzheng
2014-01-01
A novel fast SAR image change detection method is presented in this paper. Based on a Bayesian approach, the prior information that speckle follows the Nakagami distribution is incorporated into the difference image (DI) generation process. The new DI performs much better than the familiar log-ratio (LR) DI as well as the cumulant-based Kullback-Leibler divergence (CKLD) DI. The statistical region merging (SRM) approach is introduced into the change detection context for the first time. A new clustering procedure with the region variance as the statistical inference variable is presented to tailor SRM to SAR image change detection, with only two classes in the final map, the unchanged and the changed class. The most prominent advantages of the proposed modified SRM (MSRM) method are its ability to cope with noise corruption and its fast implementation. Experimental results show that the proposed method is superior in both change detection accuracy and operational efficiency. PMID:25258740
Sound source measurement by using a passive sound insulation and a statistical approach
NASA Astrophysics Data System (ADS)
Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.
2015-10-01
This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out inside noisy environments while reducing background noise effects. The proposed method is based on the integration of a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by the usual sensors (microphones and accelerometers) equipping the passive sound insulation system. The statistical approach improves, at low frequency, the sound insulation provided by the passive system alone. The developed measurement technique has been validated by means of numerical simulations and by measurements carried out inside a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured by applying the proposed method, and on the measurement error related to its application, are reported as well.
A Statistical-Physics Approach to Language Acquisition and Language Change
NASA Astrophysics Data System (ADS)
Cassandro, Marzio; Collet, Pierre; Galves, Antonio; Galves, Charlotte
1999-02-01
The aim of this paper is to explain why Statistical Physics can help in understanding two related linguistic questions. The first question is how to model first language acquisition by a child. The second question is how language change proceeds in time. Our approach is based on a Gibbsian model for the interface between syntax and prosody. We also present a simulated annealing model of language acquisition, which extends the Triggering Learning Algorithm recently introduced in the linguistic literature.
Carboni, Michele; Gianneo, Andrea; Giglio, Marco
2015-07-01
This research investigates a Lamb-wave-based structural health monitoring approach employing out-of-phase actuation of a pair of piezoceramic transducers at low frequency. The target is a typical quasi-isotropic carbon fibre reinforced polymer aeronautical laminate subjected to artificial delaminations (via Teflon patches) and natural ones (via suitable low-velocity drop-weight impact tests). The performance and main influencing factors of the approach are studied through a Design of Experiments statistical method, considering both Pulse Echo and Pitch Catch configurations of the PZT sensors. Results show that some factors and their interactions can effectively influence the detection of delamination-like damage. PMID:25746761
NASA Astrophysics Data System (ADS)
Peng, C.-K.; Yang, Albert C.-C.; Goldberger, Ary L.
2007-03-01
We recently proposed a novel approach to categorize information carried by symbolic sequences based on their usage of repetitive patterns. A simple quantitative index to measure the dissimilarity between two symbolic sequences can be defined. This information dissimilarity index, defined by our formula, is closely related to the Shannon entropy and rank order of the repetitive patterns in the symbolic sequences. Here we discuss the underlying statistical physics assumptions of this dissimilarity index. We use human cardiac interbeat interval time series and DNA sequences as examples to illustrate the applicability of this generic approach to real-world problems.
Advancement in contemporary diagnostic and therapeutic approaches for rheumatoid arthritis.
Kumar, L Dinesh; Karthik, R; Gayathri, N; Sivasudha, T
2016-04-01
This review is intended to provide a summary of the pathogenesis, diagnosis and therapies for rheumatoid arthritis. Rheumatoid arthritis (RA) is a common form of inflammatory autoimmune disease with unknown aetiology. Bone degradation and cartilage and synovial destruction are three major pathways of RA pathology. Sentinel cells, including dendritic cells, macrophages and mast cells, bind the autoantigens and initiate inflammation of the joints. These cells further activate the immune cells on the synovial membrane by releasing inflammatory cytokines such as interleukins 1, 6 and 17. Diagnosis of this disease is a combined approach comprising radiological imaging and the assessment of blood and serology markers. The treatment of RA still remains inadequate owing to the lack of knowledge of disease development. Non-steroidal anti-inflammatory drugs, disease-modifying anti-rheumatic drugs and corticosteroids are the commercial drugs used to reduce pain and swelling and to suppress several disease factors. Arthroscopy is a useful method when joint tissues are severely degraded. Gene therapy is a major advancement in RA: suppressor gene loci of inflammatory mediators and matrix-degrading enzymes can be inserted into the affected area to reduce disease progression. To overcome the issues arising from those therapies, such as side effects and expense, phytocompounds have been investigated and certain compounds have been proven for their anti-arthritic potential. Furthermore, certain complementary and alternative therapies such as yoga, acupuncture, massage therapy and tai chi have also shown benefit in RA treatment. PMID:27044812
NASA Astrophysics Data System (ADS)
Tsutsumi, Morito; Seya, Hajime
2009-12-01
This study discusses the theoretical foundation of the application of spatial hedonic approaches—the hedonic approach employing spatial econometrics and/or spatial statistics—to benefits evaluation. The study highlights the limitations of the spatial econometrics approach, since it uses a spatial weight matrix that is not employed by the spatial statistics approach. Further, the study presents empirical analyses applying the Spatial Autoregressive Error Model (SAEM), which is based on the spatial econometrics approach, and the Spatial Process Model (SPM), which is based on the spatial statistics approach. SPMs are estimated under both isotropy and anisotropy and applied to different mesh sizes. The empirical analysis reveals that the estimated benefits are quite different, especially between the isotropic and anisotropic SPM and between the isotropic SPM and the SAEM; the estimated benefits are similar for the SAEM and the anisotropic SPM. The study demonstrates that the mesh size does not affect the estimated amount of benefits. Finally, the study provides a confidence interval for the estimated benefits and raises an issue with regard to benefit evaluation.
A Statistical Approach for the Concurrent Coupling of Molecular Dynamics and Finite Element Methods
NASA Technical Reports Server (NTRS)
Saether, E.; Yamakov, V.; Glaessgen, E.
2007-01-01
Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, increasing the size of the MD domain quickly presents intractable computational demands. A robust approach to surmount this computational limitation has been to unite continuum modeling procedures such as the finite element method (FEM) with MD analyses thereby reducing the region of atomic scale refinement. The challenging problem is to seamlessly connect the two inherently different simulation techniques at their interface. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the typical boundary value problem used to define a coupled domain. The method uses statistical averaging of the atomistic MD domain to provide displacement interface boundary conditions to the surrounding continuum FEM region, which, in return, generates interface reaction forces applied as piecewise constant traction boundary conditions to the MD domain. The two systems are computationally disconnected and communicate only through a continuous update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM) as opposed to a direct coupling method where interface atoms and FEM nodes are individually related. The methodology is inherently applicable to three-dimensional domains, avoids discretization of the continuum model down to atomic scales, and permits arbitrary temperatures to be applied.
NASA Astrophysics Data System (ADS)
Wang, Lian-xing; Ju, Hua-lang; Chen, Zhen-ming
1995-03-01
Eighty-three patients suffering from moderate or advanced malignant tumors were treated by combined chemotherapy and photodynamic therapy (PDT) in our hospital. The short-term result of this management is very promising: the effectiveness is nearly 100% and the overall response rate is 79.5% (CR + PR). Compared with another group of 84 similar patients who were treated with PDT alone, in which the short-term efficacy was 85.7% and the overall response rate 54.7%, the difference is statistically significant (P < 0.01). The better result of the combined approach is probably due to the action of the chemotherapeutic agent, which potentially blocks mitosis of the cancer cells at certain phases of the cell cycle, making the cell membrane more permeable to the photochemical agent, HPD, and eliciting a better cancerocidal effect.
Halpin, Peter F; Stam, Henderikus J
2006-01-01
The application of statistical testing in psychological research over the period of 1940-1960 is examined in order to address psychologists' reconciliation of the extant controversy between the Fisher and Neyman-Pearson approaches. Textbooks of psychological statistics and the psychological journal literature are reviewed to examine the presence of what Gigerenzer (1993) called a hybrid model of statistical testing. Such a model is present in the textbooks, although the mathematically incomplete character of this model precludes the appearance of a similarly hybridized approach to statistical testing in the research literature. The implications of this hybrid model for psychological research and the statistical testing controversy are discussed. PMID:17286092
A combinatorial approach to the discovery of advanced materials
NASA Astrophysics Data System (ADS)
Sun, Xiao-Dong
This thesis discusses the application of combinatorial methods to the search for advanced materials. The goal of this research is to develop a "parallel" or "fast sequential" methodology for both the synthesis and the characterization of materials with novel electronic, magnetic and optical properties. Our hope is to dramatically accelerate the rate at which materials are generated and studied. We have developed two major combinatorial methodologies to this end. One involves generating thin-film materials libraries using a combination of various thin-film deposition and masking strategies with multi-layer thin-film precursors. The second approach is to generate powder materials libraries from solution precursors delivered with a multi-nozzle inkjet system. The first step in this multistep combinatorial process involves the design and synthesis of high-density libraries of diverse materials aimed at exploring a large segment of the compositional space of interest, based on our understanding of the physical and structural properties of a particular class of materials. Rapid, sensitive measurements of one or more relevant physical properties of each library member result in the identification of a family of "lead" compositions with a desired property. These compositions are then optimized by continuously varying the stoichiometries of a more focused set of precursors. Materials with the optimal composition are then synthesized in quantities sufficient for detailed characterization of their structural and physical properties. Finally, the information obtained from this process should enhance our predictive ability in subsequent experiments. Combinatorial methods have been successfully used in the synthesis and discovery of materials with novel properties. For example, a class of cobaltite-based giant magnetoresistance (GMR) ceramics was discovered; application of this method to luminescence materials has resulted in the discovery of a few highly efficient tricolor
Land cover change using an energy transition paradigm in a statistical mechanics approach
NASA Astrophysics Data System (ADS)
Zachary, Daniel S.
2013-10-01
This paper explores a statistical mechanics approach as a means to better understand specific land cover changes on a continental scale. Integrated assessment models are used to calculate the impact of anthropogenic emissions via the coupling of technoeconomic and earth/atmospheric system models, and they have often overlooked or oversimplified the evolution of land cover change. Different time scales and the uncertainties inherent in long-term projections of land cover make their coupling to integrated assessment models difficult. The mainstream approach to land cover modelling is a rule-based methodology, which necessarily implies that decision mechanisms are often removed from the physical geospatial realities; therefore a number of questions remain: How much of the predictive power of land cover change can be linked to the physical situation as opposed to social and policy realities? Can land cover change be understood using a statistical approach that includes only economic drivers and the availability of resources? In this paper, we use an energy transition paradigm as a means to predict this change. A cost function is applied to developed land covers for urban and agricultural areas. The counting of area is addressed using specific examples of a Pólya process involving Maxwell-Boltzmann and Bose-Einstein statistics. We apply an iterative counting method and compare the simulated statistics with fractional land cover data from a multi-national database. An energy level paradigm is used as the basis of a flow model for land cover change. The model is compared with tabulated land cover change in Europe for the period 1990-2000. The model post-predicts the changes for each nation. When strong extraneous factors are absent, the model shows promise in reproducing the data and can provide a means to test hypotheses against the standard rule-based algorithms.
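The contrast between Maxwell-Boltzmann and Bose-Einstein counting in a Pólya process can be made concrete with a toy urn simulation. This is a generic illustration of the counting statistics only, not the paper's calibrated land-cover model; the function name `allocate` and all numbers are invented for the example.

```python
import random

def allocate(parcels, classes, polya, rng):
    """Allocate `parcels` units of land among `classes` cover types.

    With polya=True each new parcel joins class k with probability
    proportional to (count_k + 1): a Polya urn whose occupancy statistics
    are Bose-Einstein (every composition is equally likely, so strongly
    uneven covers are common). With polya=False parcels are placed
    uniformly at random, i.e. Maxwell-Boltzmann counting."""
    counts = [0] * classes
    for _ in range(parcels):
        if polya:
            k = rng.choices(range(classes), weights=[c + 1 for c in counts])[0]
        else:
            k = rng.randrange(classes)
        counts[k] += 1
    return counts

be = allocate(1000, 5, True, random.Random(42))   # typically very uneven
mb = allocate(1000, 5, False, random.Random(42))  # close to 200 per class
```

The "rich get richer" reinforcement of the urn reproduces the strongly skewed fractional land covers that independent uniform placement cannot.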
NASA Astrophysics Data System (ADS)
Dhakal, Nirajan; Jain, Shaleen; Gray, Alexander; Dandy, Michael; Stancioff, Esperanza
2015-06-01
Changes in the seasonality of extreme storms have important implications for public safety, storm water infrastructure, and, in general, adaptation strategies in a changing climate. While past research on this topic offers some approaches to characterize seasonality, the methods are somewhat limited in their ability to discern the diversity of distributional types for extreme precipitation dates. Herein, we present a comprehensive approach for the assessment of temporal changes in the calendar dates of extreme precipitation within a circular statistics framework, which entails: (a) three measures to summarize circular random variables (the traditional approach), (b) four nonparametric statistical tests, and (c) a new nonparametric circular density method to provide a robust assessment of the nature of the probability distribution and its changes. Two 30-year blocks (1951-1980 and 1981-2010) of annual maximum daily precipitation from 10 stations across the state of Maine were used for our analysis. Assessment of seasonality based on the nonparametric approach indicated nonstationarity; some stations exhibited shifts in the significant mode toward the spring season in the recent time period, while other stations exhibited a multimodal seasonal pattern in both time periods. The nonparametric circular density method used in this study allows for an adaptive estimation of the seasonal density. Despite the limitation of being sensitive to the smoothing parameter, this method can accurately characterize one or more modes of seasonal peaks, as well as pave the way toward the assessment of changes in seasonality over time.
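The basic circular-statistics machinery — mapping calendar dates to angles and summarizing them with a circular mean and resultant length — can be sketched as follows. This is the standard textbook construction, not the authors' full method; the function name `circular_summary` and the example dates are invented here.

```python
import numpy as np

def circular_summary(day_of_year, period=365.25):
    """Map event dates onto the annual cycle and return the circular mean
    date and the mean resultant length R (R ~ 0: dates spread uniformly
    over the year; R ~ 1: all events fall on nearly the same date)."""
    theta = 2 * np.pi * np.asarray(day_of_year, float) / period
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    R = np.hypot(C, S)
    mean_day = (np.arctan2(S, C) % (2 * np.pi)) * period / (2 * np.pi)
    return mean_day, R

# Annual-maximum dates clustered in early autumn give a high R
mean_day, R = circular_summary([250, 255, 260, 248, 270, 265, 252])
```

Working on the circle avoids the artifact that 31 December and 1 January are 364 days apart on a linear scale but one day apart seasonally.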
Carlsen, Michelle; Fu, Guifang; Bushman, Shaun; Corcoran, Christopher
2016-02-01
Genome-wide data with millions of single-nucleotide polymorphisms (SNPs) can be highly correlated due to linkage disequilibrium (LD). The ultrahigh dimensionality of big data brings unprecedented challenges to statistical modeling, such as noise accumulation, the curse of dimensionality, computational burden, spurious correlations, and a processing and storage bottleneck. Traditional statistical approaches lose their power because p ≫ n (n is the number of observations and p is the number of SNPs) and because of the complex correlation structure among SNPs. In this article, we propose an integrated distance correlation ridge regression (DCRR) approach to accommodate the ultrahigh dimensionality, joint polygenic effects of multiple loci, and the complex LD structures. Initially, a distance correlation (DC) screening approach is used to extensively remove noise, after which the LD structure is addressed using a ridge-penalized multiple logistic regression (LRR) model. The false discovery rate, true positive discovery rate, and computational cost were simultaneously assessed through a large number of simulations. A binary trait of Arabidopsis thaliana, the hypersensitive response to the bacterial elicitor AvrRpm1, was analyzed in 84 inbred lines (28 susceptible and 56 resistant) with 216,130 SNPs. Compared to previous SNP discovery methods applied to the same data set, the DCRR approach successfully detected the causative SNP while dramatically reducing spurious associations and computational time. PMID:26661113
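The distance-correlation screening step can be sketched directly from Székely's sample definition. This is an illustrative sketch, not the DCRR implementation: the function names `distance_correlation` and `dc_screen` and the toy genotype matrix are invented, and the subsequent ridge-penalized logistic regression on the surviving SNPs is only indicated in a comment.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D samples (Szekely et al.):
    0 for (empirically) independent variables, up to 1 for strong
    dependence of any functional form, not just linear."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])       # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

def dc_screen(X, y, keep):
    """Screening step: rank SNP columns of X by distance correlation with
    the phenotype y and keep the `keep` highest-scoring columns. (A ridge-
    penalized logistic regression would then be fit on the survivors.)"""
    scores = [distance_correlation(X[:, j], y) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1][:keep]

rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(80, 50)).astype(float)         # toy genotype matrix
y = (X[:, 7] + rng.normal(0.0, 0.3, 80) > 1).astype(float)  # column 7 is causal
top = dc_screen(X, y, keep=5)  # screening should retain column 7
</imports>```

Because distance correlation is zero only under independence, the screen can retain SNPs with nonlinear marginal effects that a Pearson-correlation screen would discard.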
Bayesian approach for counting experiment statistics applied to a neutrino point source analysis
NASA Astrophysics Data System (ADS)
Bose, D.; Brayeur, L.; Casier, M.; de Vries, K. D.; Golup, G.; van Eijndhoven, N.
2013-12-01
In this paper we present a model-independent analysis method following Bayesian statistics to analyse data from a generic counting experiment, and we apply it to the search for neutrinos from point sources. We discuss a test statistic defined within a Bayesian framework that will be used in the search for a signal. In case no signal is found, we derive an upper limit without the introduction of approximations. The Bayesian approach allows us to obtain the full probability density function for both the background and the signal rate. As such, we have direct access to any signal upper limit. The upper limit derivation compares directly with a frequentist approach and is robust in the case of low-counting observations. Furthermore, it also allows one to account for previous upper limits obtained by other analyses via the concept of prior information, without the need for the ad hoc application of trial factors. To investigate the validity of the presented Bayesian approach, we have applied this method to the public IceCube 40-string configuration data for 10 nearby blazars and we have obtained a flux upper limit which is in agreement with the upper limits determined via a frequentist approach. Furthermore, the upper limit obtained compares well with the previously published result of IceCube, using the same data set.
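The core of a Bayesian counting-experiment limit — a Poisson likelihood in signal-plus-background with a prior on the signal rate, integrated to a credible upper limit — can be sketched numerically. This is a minimal illustration with a flat prior and a known background, not the paper's analysis; the function name `signal_upper_limit` and all numbers are invented here.

```python
import numpy as np

def signal_upper_limit(n_obs, background, cl=0.9, s_max=50.0, grid=20001):
    """Credible upper limit on a signal rate s for a counting experiment:
    posterior p(s | n) proportional to Poisson(n; s + b), with a flat
    prior on s >= 0 and a known expected background b."""
    s = np.linspace(0.0, s_max, grid)
    mu = s + background
    post = mu ** n_obs * np.exp(-mu)   # n! cancels in the normalisation
    ds = s[1] - s[0]
    post /= post.sum() * ds            # normalise the posterior density
    cdf = np.cumsum(post) * ds
    return s[np.searchsorted(cdf, cl)]

# 3 events observed on an expected background of 2.5:
limit = signal_upper_limit(3, 2.5)  # tighter than the zero-background limit
```

Because the full posterior is available, any credible level, or a prior encoding earlier limits, is a one-line change — this is the practical advantage over quoting a single frequentist interval.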
TAMIS for rectal tumors: advancements of a new approach.
Rega, Daniela; Pace, Ugo; Niglio, Antonello; Scala, Dario; Sassaroli, Cinzia; Delrio, Paolo
2016-03-01
TAMIS allows transanal excision of rectal lesions by means of a single-incision access port and traditional laparoscopic instruments. This technique represents a promising treatment for rectal neoplasms, since it guarantees precise dissection and a reproducible approach. From May 2010 to September 2015, we performed excisions of rectal lesions in 55 patients using a SILS port. The pre-operative diagnosis was 26 tumours, 26 low- and high-grade dysplasias and 3 other benign neoplasias. Eleven patients had received neoadjuvant treatment. Pneumorectum was established at a pressure of 15-20 mmHg CO2 with continuous insufflation, and ordinary laparoscopic instruments were used to perform full-thickness resection of the rectal neoplasm with a conventional 5-mm 30° laparoscopic camera. The average operative time was 78 min. Postoperative recovery was uneventful in 53 cases: in one case a Hartmann procedure was necessary on postoperative day 2 due to an intraoperative intraperitoneal perforation; in another case, a diverting colostomy was required on postoperative day 5 due to an intraoperative perforation of the vaginal wall. Unclear resection margins were detected in six patients: five of them subsequently underwent radical surgery; the sixth was unfit for radical surgery but is currently alive and well. Patients were discharged after a median of 3 days. Transanal minimally invasive surgery is an advanced transanal platform that provides a safe and effective method for low rectal tumors. Its feasibility for malignant lesions treated in a neoadjuvant setting could be cautiously evaluated in the future. PMID:27052544
Zhang, Jiang; Lanham, Kevin A; Heideman, Warren; Peterson, Richard E.; Li, Lingjun
2013-01-01
2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) is a persistent environmental pollutant and teratogen that produces cardiac toxicity in the developing zebrafish. Here we adopted a label-free quantitative proteomic approach based on the normalized spectral abundance factor (NSAF) to investigate the disturbance of the cardiac proteome induced by TCDD in the adult zebrafish heart. Protein expression level changes between heart samples from TCDD-treated and control zebrafish were systematically evaluated by a large-scale MudPIT analysis, which incorporated triplicate analyses for both control and TCDD-exposed heart proteomic samples to overcome the data-dependent variation in shotgun proteomic experiments and obtain a statistically significant protein dataset with improved quantification confidence. A total of 519 and 443 proteins were identified in hearts collected from control and TCDD-treated zebrafish, respectively, among which 106 proteins showed statistically significant expression changes. After correcting for the experimental variation between replicate analyses by statistical evaluation, 55 proteins exhibited an NSAF ratio above 2 and 43 proteins displayed an NSAF ratio smaller than 0.5, with statistical significance by t-test (p < 0.05). The proteins identified as altered by TCDD encompass a wide range of biological functions including calcium handling, myocardial cell architecture, energy production and metabolism, mitochondrial homeostasis, and stress response. Collectively, our results indicate that TCDD exposure alters the adult zebrafish heart in a way that could result in cardiac hypertrophy and heart failure, and they suggest a potential mechanism for the diastolic dysfunction observed in TCDD-exposed embryos. PMID:23682714
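The NSAF quantity underlying the ratios above is simple to compute: each protein's spectral count is divided by its length and then normalized by the sample total. The sketch below uses the standard NSAF formula with invented toy counts and lengths; the >2 / <0.5 thresholds mirror those used in the abstract.

```python
def nsaf(spectral_counts, lengths):
    """Normalized spectral abundance factor: SpC/L per protein, divided by
    the sum of SpC/L over all proteins identified in the sample, so that
    the NSAF values of one sample sum to 1."""
    saf = [c / l for c, l in zip(spectral_counts, lengths)]
    total = sum(saf)
    return [v / total for v in saf]

# Three hypothetical proteins; fold change as a ratio of NSAF values
control = nsaf([20, 10, 30], lengths=[200, 100, 300])
treated = nsaf([40, 10, 10], lengths=[200, 100, 300])
ratios = [t / c for t, c in zip(treated, control)]  # >2 or <0.5 flag candidates
```

Dividing by protein length corrects for the fact that longer proteins yield more observable peptides, and hence more spectral counts, at equal abundance.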
Griffith, Lauren E.; van den Heuvel, Edwin; Fortier, Isabel; Sohel, Nazmul; Hofer, Scott M.; Payette, Hélène; Wolfson, Christina; Belleville, Sylvie; Kenny, Meghan; Doiron, Dany; Raina, Parminder
2015-01-01
Objectives: To identify statistical methods for harmonization that could be used in the context of summary data and individual participant data meta-analysis of cognitive measures. Study Design and Setting: Environmental scan methods were used to conduct two reviews to identify: 1) studies that quantitatively combined data on cognition, and 2) general literature on statistical methods for data harmonization. Search results were rapidly screened to identify articles of relevance. Results: All 33 meta-analyses combining cognition measures either restricted their analyses to a subset of studies using a common measure or combined standardized effect sizes across studies; none reported their harmonization steps prior to producing summary effects. In the second scan, three general classes of statistical harmonization models were identified: 1) standardization methods, 2) latent variable models, and 3) multiple imputation models; few publications compared methods. Conclusions: Although it is an implicit part of conducting a meta-analysis or pooled analysis, the methods used to assess the inferential equivalence of complex constructs are rarely reported or discussed. Progress in this area will be supported by guidelines for the conduct and reporting of data harmonization and integration, and by evaluating and developing statistical approaches to harmonization. PMID:25497980
Novel Method of Interconnect Worstcase Establishment with Statistically-Based Approaches
NASA Astrophysics Data System (ADS)
Jung, Won-Young; Kim, Hyungon; Kim, Yong-Ju; Wee, Jae-Kyung
In order to apply interconnect effects due to process-induced variations to designs at 0.13μm and below, it is necessary to determine and characterize realistic interconnect worstcase models with high accuracy and speed. This paper proposes new statistically-based approaches to the characterization of realistic interconnect worstcase models which take process-induced variations into account. The Effective Common Geometry (ECG) and Accumulated Maximum Probability (AMP) algorithms have been developed and implemented into the new statistical interconnect worstcase design environment. To verify this environment, 31-stage ring oscillators were fabricated and measured with a UMC 0.13μm logic process. In addition, 15-stage ring oscillators were fabricated and measured with a 0.18μm standard CMOS process to investigate the flexibility of the method in other technologies. The results show that the relative errors of the new method are less than 1.00%, which is two times more accurate than the conventional worstcase method. Furthermore, the new interconnect worstcase design environment improves optimization speed by 29.61-32.01% compared to conventional worstcase optimization. The new statistical interconnect worstcase design environment accurately predicts the worstcase and bestcase corners of non-normal distributions, which conventional methods cannot do well.
Robust statistical approaches to assess the degree of agreement of clinical data
NASA Astrophysics Data System (ADS)
Grilo, Luís M.; Grilo, Helena L.
2016-06-01
To analyze the blood of patients who took vitamin B12 over a period of time, two different measurement methods were used (one the established method, with more human intervention; the other relying essentially on machines). Given the non-normality of the differences between the two measurement methods, the limits of agreement are also estimated using a non-parametric approach to assess the degree of agreement of the clinical data. The bootstrap resampling method is applied in order to obtain robust confidence intervals for the mean and median of the differences. The approaches used are easy to apply with user-friendly software, and their outputs are also easy to interpret. In this case study the results obtained with the parametric and non-parametric approaches lead to different statistical conclusions, but the decision whether agreement is acceptable or not is always a clinical judgment.
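The percentile-bootstrap intervals for the mean and median of the paired differences, and a nonparametric alternative to the usual limits of agreement, can be sketched as follows. This is a generic illustration, not the study's analysis; the function name `bootstrap_ci` and the 12 difference values are invented here.

```python
import numpy as np

def bootstrap_ci(diffs, stat=np.mean, n_boot=5000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a statistic (mean,
    median, ...) of the paired differences between two measurement methods."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, float)
    reps = [stat(rng.choice(diffs, size=diffs.size, replace=True))
            for _ in range(n_boot)]
    return tuple(np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)]))

# Hypothetical method-A minus method-B differences for 12 patients
d = [1.2, -0.4, 0.8, 2.1, 0.3, -0.2, 1.5, 0.9, -0.1, 0.6, 1.1, 0.4]
mean_ci = bootstrap_ci(d, np.mean)
median_ci = bootstrap_ci(d, np.median)
# Nonparametric limits of agreement: central 95% of the differences
loa = np.percentile(d, [2.5, 97.5])
```

Replacing the usual mean ± 1.96 SD limits with empirical percentiles avoids the normality assumption that the abstract notes is violated for these differences.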
Multi-level approach for statistical appearance models with probabilistic correspondences
NASA Astrophysics Data System (ADS)
Krüger, Julia; Ehrhardt, Jan; Handels, Heinz
2016-03-01
Statistical shape and appearance models are often based on the accurate identification of one-to-one correspondences in a training data set. At the same time, the determination of these corresponding landmarks is the most challenging part of such methods. Hufnagel et al. [1] developed an alternative method using correspondence probabilities for a statistical shape model. In Krüger et al. [2, 3] we propose the use of probabilistic correspondences for statistical appearance models by incorporating appearance information into the framework. We employ a point-based representation of image data combining position and appearance information. The model is optimized and adapted by a maximum a posteriori (MAP) approach, deriving a single global optimization criterion with respect to model parameters and observation-dependent parameters that directly affects the shape and appearance information of the considered structures. Because initially unknown correspondence probabilities are used and a higher number of degrees of freedom is introduced into the model, a regularization of the model generation process is advantageous. For this purpose we extend the derived global criterion by a regularization term which penalizes implausible topological changes. Furthermore, we propose a multi-level approach for the optimization, to increase the robustness of the model generation process.
Design of Complex Systems in the presence of Large Uncertainties: a statistical approach
Koutsourelakis, P
2007-07-31
The design or optimization of engineering systems is generally based on several assumptions related to the loading conditions, physical or mechanical properties, environmental effects, initial or boundary conditions etc. The effect of those assumptions on the optimum design or the design finally adopted is generally unknown, particularly in large, complex systems. A rational recourse would be to cast the problem in a probabilistic framework which accounts for the various uncertainties and also allows one to quantify their effect on the response/behavior/performance of the system. In such a framework the performance function(s) of interest are also random, and optimization of the system with respect to the design variables has to be reformulated with respect to statistical properties of these objective functions (e.g. probability of exceeding certain thresholds). Analysis tools are usually restricted to elaborate legacy codes which have been developed over a long period of time and are generally well-tested (e.g. Finite Elements). These do not, however, include any stochastic components, and their alteration is impossible or ill-advised. Furthermore, as the number of uncertainties and design variables grows, the problem quickly becomes computationally intractable. The present paper advocates the use of statistical learning in order to perform these tasks for any system of arbitrary complexity as long as a deterministic solver is available. The proposed computational framework consists of two components. First, advanced sampling techniques are employed in order to efficiently explore the dependence of the performance with respect to the uncertain and design variables. The proposed algorithm is directly parallelizable and attempts to maximize the amount of information extracted with the least possible number of calls to the deterministic solver. The output of this process is utilized by statistical classification procedures in order to derive the dependence of the performance
NASA Astrophysics Data System (ADS)
Appelhans, Tim; Mwangomo, Ephraim; Otte, Insa; Detsch, Florian; Nauss, Thomas; Hemp, Andreas; Ndyamkama, Jimmy
2015-04-01
This study introduces the set-up and characteristics of a meteorological station network on the southern slopes of Mt. Kilimanjaro, Tanzania. The set-up follows a hierarchical approach covering an elevational as well as a land-use disturbance gradient. The network consists of 52 basic stations measuring ambient air temperature and above-ground air humidity, and 11 precipitation measurement sites. We provide in-depth descriptions of the various machine learning and classical geo-statistical methods used to fill observation gaps and extend the spatial coverage of the network to a total of 60 research sites. Performance statistics for these methods indicate that the presented data sets provide reliable measurements of the meteorological reality at Mt. Kilimanjaro. These data provide an excellent basis for ecological studies and are also of great value for regional atmospheric numerical modelling studies, for which such comprehensive in-situ validation observations are rare, especially in tropical regions of complex terrain.
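As an illustration of the simplest end of the gap-filling spectrum, the sketch below estimates a missing station reading by inverse-distance weighting of neighbouring stations. The coordinates and temperatures are invented, and the study's actual methods (machine learning and classical geo-statistics) are considerably more sophisticated:

```python
import math

def idw_fill(target_xy, stations, power=2):
    """Inverse-distance-weighted estimate at target_xy from neighbouring
    stations given as (x, y, value) triples; a simple stand-in for the
    geo-statistical gap-filling described in the abstract."""
    num = den = 0.0
    for x, y, v in stations:
        d = math.hypot(x - target_xy[0], y - target_xy[1])
        if d == 0:
            return v  # exact hit: return the observed value
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Hypothetical temperature readings on a slope transect (km, km, deg C)
obs = [(0.0, 0.0, 18.5), (1.0, 0.0, 17.9), (0.0, 1.0, 18.1)]
estimate = idw_fill((0.5, 0.5), obs)  # all three stations equidistant here
```

Because the three stations are equidistant from the target, the estimate reduces to their plain average.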
Shelton, M.L.; Gregory, B.A.; Doughty, R.L.; Kiss, T.; Moses, H.L. (Mechanical Engineering Dept.)
1993-07-01
In aircraft engine design (and in other applications), small improvements in turbine efficiency may be significant. Since analytical tools for predicting transonic turbine losses are still being developed, experimental efforts are required to evaluate various designs, calibrate design methods, and validate CFD analysis tools. However, these experimental efforts must be very accurate to measure the performance differences to the levels required by the highly competitive aircraft engine market. Due to the sensitivity of transonic and supersonic flow fields, it is often difficult to obtain the desired level of accuracy. In this paper, a statistical approach is applied to the experimental evaluation of transonic turbine airfoils in the VPI and SU transonic cascade facility in order to quantify the differences between three different transonic turbine airfoils. This study determines whether the measured performance differences between the three different airfoils are statistically significant. This study also assesses the degree of confidence in the transonic cascade testing process at VPI and SU.
Jacquin, Hugo; Shakhnovich, Eugene; Cocco, Simona; Monasson, Rémi
2016-01-01
Inverse statistical approaches to determine protein structure and function from Multiple Sequence Alignments (MSA) are emerging as powerful tools in computational biology. However, the underlying assumptions of the relationship between the inferred effective Potts Hamiltonian and real protein structure and energetics remain untested so far. Here we use the lattice protein (LP) model to benchmark those inverse statistical approaches. We build MSA of highly stable sequences in target LP structures, and infer the effective pairwise Potts Hamiltonians from those MSA. We find that inferred Potts Hamiltonians reproduce many important aspects of ‘true’ LP structures and energetics. Careful analysis reveals that effective pairwise couplings in inferred Potts Hamiltonians depend not only on the energetics of the native structure but also on competing folds; in particular, the coupling values reflect both positive design (stabilization of native conformation) and negative design (destabilization of competing folds). In addition to providing detailed structural information, the inferred Potts models, used as protein Hamiltonians for the design of new sequences, are able to generate with high probability completely new sequences with the desired folds, something that is not possible with independent-site models. These are remarkable results, as the effective LP Hamiltonians used to generate MSA are not simple pairwise models due to the competition between the folds. Our findings elucidate the reasons for the success of inverse approaches to the modelling of proteins from sequence data, and their limitations. PMID:27177270
A statistical approach for analyzing the development of 1H multiple-quantum coherence in solids.
Mogami, Yuuki; Noda, Yasuto; Ishikawa, Hiroto; Takegoshi, K
2013-05-21
A novel statistical approach for analyzing (1)H multiple-quantum (MQ) spin dynamics in so-called spin-counting solid-state NMR experiments is presented. The statistical approach is based on percolation theory with Monte Carlo methods and is examined by applying it to the experimental results of three solid samples whose hydrogen arrangements are effectively one-, two-, and three-dimensional: the n-alkane/d-urea inclusion complex as a one-dimensional (1D) system, whose (1)H nuclei align approximately in 1D, and magnesium hydroxide and adamantane as two-dimensional (2D) and three-dimensional (3D) systems, respectively. Four lattice models, linear, honeycomb, square, and cubic, are used to represent the (1)H arrangements of the three samples. It is shown that the MQ dynamics in adamantane is consistent with that calculated using the cubic lattice, and that in Mg(OH)2 with that calculated using the honeycomb and square lattices. For n-C20H42/d-urea, these four lattice models fail to reproduce the result, whereas a more realistic model representing the (1)H arrangement of n-C20H42/d-urea is shown to describe it. The present approach can thus be used to determine (1)H arrangements in solids. PMID:23580152
Predicting future protection of respirator users: Statistical approaches and practical implications.
Hu, Chengcheng; Harber, Philip; Su, Jing
2016-05-01
The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon the joint distribution of multiple fit factor measurements over time obtained from linear mixed effect models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. This approach can be applied to establishing a criterion value for passing an initial fit test, providing reasonable likelihood that a worker will be adequately protected in the future, and to optimizing repeat fit factor test intervals individually for each user for cost-effective testing. PMID:26771896
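Under a random-intercept mixed model the conditional distribution of a future measurement given an initial one has a closed form, which is the core of the prediction idea above. The variance components and threshold below are hypothetical, not the study's estimates:

```python
import math

def conditional_future(x0, mu, sb2, sw2):
    """Conditional mean and variance of a future (log-scale) fit factor given
    an initial measurement x0, under the random-intercept model
    y_ij = mu + b_i + e_ij with Var(b) = sb2 and Var(e) = sw2."""
    s2 = sb2 + sw2
    rho = sb2 / s2                 # correlation of two tests on one worker
    m = mu + rho * (x0 - mu)       # conditional mean shrinks x0 toward mu
    v = s2 * (1 - rho ** 2)        # conditional variance
    return m, v

def prob_above(threshold, m, v):
    """P(future log fit factor > threshold) for the conditional normal."""
    z = (threshold - m) / math.sqrt(v)
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical parameters: population mean 2.0, between/within variances
m, v = conditional_future(x0=2.3, mu=2.0, sb2=0.09, sw2=0.04)
p_protected = prob_above(2.0, m, v)
```

Inverting `prob_above` over candidate initial values `x0` is one way to back out a passing criterion that guarantees a chosen probability of future protection.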
Mougabure-Cueto, G; Sfara, V
2016-04-25
Dose-response relations can be obtained from systems at any structural level of biological matter, from the molecular to the organismic level. There are two types of approaches for analyzing dose-response curves: a deterministic approach, based on the law of mass action, and a statistical approach, based on the assumed probability distribution of phenotypic characters. Models based on the law of mass action have been proposed to analyze dose-response relations across the entire range of biological systems. The purpose of this paper is to discuss the principles that determine the dose-response relations. Dose-response curves of simple systems are the result of chemical interactions between reacting molecules, and therefore are supported by the law of mass action; in consequence, the shape of these curves is fully grounded in physicochemical features. However, dose-response curves of bioassays with quantal response are not explained by the simple collision of molecules but by phenotypic variation among individuals, which can be interpreted as individual tolerances. The expression of tolerance is the result of many genetic and environmental factors and thus can be considered a random variable. In consequence, the shape of the associated dose-response curve has no physicochemical basis; instead, it originates from random biological variation. Due to the randomness of tolerance, there is no reason to use deterministic equations for its analysis; on the contrary, statistical models are the appropriate tools for analyzing these dose-response relations. PMID:26952004
Statistical Analysis of fMRI Time-Series: A Critical Review of the GLM Approach.
Monti, Martin M
2011-01-01
Functional magnetic resonance imaging (fMRI) is one of the most widely used tools to study the neural underpinnings of human cognition. Standard analysis of fMRI data relies on a general linear model (GLM) approach to separate stimulus-induced signals from noise. Crucially, this approach relies on a number of assumptions about the data which, for inferences to be valid, must be met. The current paper reviews the GLM approach to analysis of fMRI time-series, focusing in particular on the degree to which such data abide by the assumptions of the GLM framework, and on the methods that have been developed to correct for violations of those assumptions. Rather than biasing estimates of effect size, the major consequence of non-conformity to the assumptions is to introduce bias into estimates of the variance, thus affecting test statistics, power, and false positive rates. Furthermore, this bias can have pervasive effects on both individual-subject and group-level statistics, potentially yielding qualitatively different results across replications, especially after the thresholding procedures commonly used for inference-making. PMID:21442013
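In the single-regressor case the GLM reduces to ordinary least squares, and the toy fit below makes the abstract's point concrete: the `sigma2` line is valid only for independent, homoscedastic noise, so autocorrelated fMRI noise biases the standard error (and hence the test statistic) rather than the effect estimate itself. Data and regressor are invented:

```python
def ols_glm(y, x):
    """Fit y = b0 + b1*x by ordinary least squares, the one-regressor GLM.
    Returns the effect estimate b1 and its naive standard error."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    sigma2 = sum(r * r for r in resid) / (n - 2)  # assumes iid noise!
    se_b1 = (sigma2 / sxx) ** 0.5
    return b1, se_b1

# Toy boxcar stimulus regressor and a noisy response (invented numbers)
x = [0, 0, 1, 1, 0, 0, 1, 1, 0, 0]
y = [0.1, -0.2, 0.9, 1.1, 0.0, 0.2, 1.0, 0.8, -0.1, 0.1]
b1, se = ols_glm(y, x)
```

Real fMRI pipelines replace the iid variance estimate with prewhitening or precoloring to account for serial correlation, which is precisely the bias the review discusses.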
ERIC Educational Resources Information Center
McLoughlin, M. Padraig M. M.
2008-01-01
The author of this paper submits the thesis that learning requires doing; only through inquiry is learning achieved, and hence this paper proposes a programme of use of a modified Moore method in a Probability and Mathematical Statistics (PAMS) course sequence to teach students PAMS. Furthermore, the author of this paper opines that set theory…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-24
... HUMAN SERVICES Workshop: Advancing Research on Mixtures; New Perspectives and Approaches for Predicting... ``Advancing Research on Mixtures: New Perspectives and Approaches for Predicting Adverse Human Health Effects... Research and Training, NIEHS, P.O. Box 12233, MD K3-04, Research Triangle Park, NC 27709, (telephone)...
ERIC Educational Resources Information Center
Touchton, Michael
2015-01-01
I administer a quasi-experiment using undergraduate political science majors in statistics classes to evaluate whether "flipping the classroom" (the treatment) alters students' applied problem-solving performance and satisfaction relative to students in a traditional classroom environment (the control). I also assess whether general…
A Statistical Dynamic Approach to Structural Evolution of Complex Capital Market Systems
NASA Astrophysics Data System (ADS)
Shao, Xiao; Chai, Li H.
As an important part of modern financial systems, the capital market has played a crucial role in diverse social resource allocation and economic exchange. Going beyond traditional models and/or theories based on neoclassical economics, and considering capital markets as typical complex open systems, this paper attempts to develop a new approach to overcome some shortcomings of the available research. By defining the generalized entropy of capital market systems, a theoretical model and a nonlinear dynamic equation for the operation of capital markets are proposed from a statistical dynamics perspective. The US security market from 1995 to 2001 is then simulated and analyzed as a typical case. Some instructive results are discussed and summarized.
Canadian Educational Approaches for the Advancement of Pharmacy Practice
Louizos, Christopher; Austin, Zubin
2014-01-01
Canadian faculties (schools) of pharmacy are actively engaged in the advancement and restructuring of their programs in response to the shift in pharmacy to pharmacists having/assuming an advanced practitioner role. Unfortunately, there is a paucity of evidence outlining optimal strategies for accomplishing this task. This review explores several educational changes proposed in the literature to aid in the advancement of pharmacy education such as program admission requirements, critical-thinking assessment and teaching methods, improvement of course content delivery, value of interprofessional education, advancement of practical experiential education, and mentorship strategies. Collectively, implementation of these improvements to pharmacy education will be crucial in determining the direction the profession will take. PMID:25258448
Shabbiri, Khadija; Adnan, Ahmad; Jamil, Sania; Ahmad, Waqar; Noor, Bushra; Rafique, H.M.
2012-01-01
Various cultivation parameters were optimized for the production of extracellular protease by Brevibacterium linens DSM 20158 grown under solid-state fermentation conditions using a statistical approach. The cultivation variables were screened by the Plackett–Burman design, and four significant variables (soybean meal, wheat bran, (NH4)2SO4 and inoculum size) were further optimized via central composite design (CCD) using a response surface methodology approach. Using the optimal factor levels (soybean meal 12.0 g, wheat bran 8.50 g, (NH4)2SO4 0.45 g and inoculum size 3.50%), the rate of protease production was found to be twofold higher in the optimized medium as compared to the unoptimized reference medium. PMID:24031928
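The coded run matrix of a central composite design is easy to generate. The sketch below is generic (the axial distance and centre-point count are standard textbook choices, not values taken from the paper), but it matches the 4-factor CCD the abstract describes:

```python
import itertools

def central_composite(k, alpha=None, n_center=3):
    """Coded runs of a central composite design in k factors: 2**k factorial
    corners, 2k axial (star) points at +/-alpha, and centre replicates.
    alpha defaults to the rotatable choice (2**k)**0.25."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    corners = [list(p) for p in itertools.product((-1.0, 1.0), repeat=k)]
    axial = []
    for i in range(k):
        for sign in (-alpha, alpha):
            run = [0.0] * k
            run[i] = sign
            axial.append(run)
    centres = [[0.0] * k for _ in range(n_center)]
    return corners + axial + centres

# Four factors, e.g. soybean meal, wheat bran, (NH4)2SO4, inoculum size:
design = central_composite(4)  # 16 corners + 8 axial + 3 centre = 27 runs
```

Each coded level is then mapped linearly onto the physical range of its factor before the fermentation runs, and a second-order response surface is fitted to the measured protease yields.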
A statistical approach to close packing of elastic rods and to DNA packaging in viral capsids
Katzav, E.; Adda-Bedia, M.; Boudaoud, A.
2006-01-01
We propose a statistical approach for studying the close packing of elastic rods. This phenomenon belongs to the class of problems of confinement of low dimensional objects, such as DNA packaging in viral capsids. The method developed is based on Edwards' approach, which was successfully applied to polymer physics and to granular matter. We show that the confinement induces a configurational phase transition from a disordered (isotropic) phase to an ordered (nematic) phase. In each phase, we derive the pressure exerted by the rod (DNA) on the container (capsid) and the force necessary to inject (eject) the rod into (out of) the container. Finally, we discuss the relevance of the present results with respect to physical and biological problems. Regarding DNA packaging in viral capsids, these results establish the existence of ordered configurations, a hypothesis upon which previous calculations were built. They also show that such ordering can result from simple mechanical constraints. PMID:17146049
Advanced statistical methods for improved data analysis of NASA astrophysics missions
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.
1992-01-01
The investigators under this grant studied ways to improve the statistical analysis of astronomical data. They looked at existing techniques, the development of new techniques, and the production and distribution of specialized software to the astronomical community. Abstracts of nine papers that were produced are included, as well as brief descriptions of four software packages. The articles that are abstracted discuss analytical and Monte Carlo comparisons of six different linear least squares fits, a (second) paper on linear regression in astronomy, two reviews of public domain software for the astronomer, subsample and half-sample methods for estimating sampling distributions, a nonparametric estimation of survival functions under dependent competing risks, censoring in astronomical data due to nondetections, an astronomy survival analysis computer package called ASURV, and improving the statistical methodology of astronomical data analysis.
Sivasamy, Aneetha Avalappampatty; Sundan, Bose
2015-01-01
The ever expanding communication requirements in today's world demand extensive and efficient network systems with equally efficient and reliable security features integrated for safe, confident, and secured communication and data transfer. Providing effective security protocols for any network environment, therefore, assumes paramount importance. Attempts are made continuously for designing more efficient and dynamic network intrusion detection models. In this work, an approach based on Hotelling's T(2) method, a multivariate statistical analysis technique, has been employed for intrusion detection, especially in network environments. Components such as preprocessing, multivariate statistical analysis, and attack detection have been incorporated in developing the multivariate Hotelling's T(2) statistical model and necessary profiles have been generated based on the T-square distance metrics. With a threshold range obtained using the central limit theorem, observed traffic profiles have been classified either as normal or attack types. Performance of the model, as evaluated through validation and testing using KDD Cup'99 dataset, has shown very high detection rates for all classes with low false alarm rates. Accuracy of the model presented in this work, in comparison with the existing models, has been found to be much better. PMID:26357668
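For two traffic features, the Hotelling's T-squared distance that drives the attack detection reduces to a small quadratic form. The baseline mean and covariance below are invented stand-ins for profiles learned from training traffic:

```python
def inv2(m):
    """Inverse of a 2x2 covariance matrix."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

def hotelling_t2(x, mean, cov_inv):
    """Hotelling's T-squared distance of an observed profile x from the
    baseline mean, given the inverse covariance of normal traffic."""
    d = [x[0] - mean[0], x[1] - mean[1]]
    return (d[0] * (cov_inv[0][0] * d[0] + cov_inv[0][1] * d[1])
            + d[1] * (cov_inv[1][0] * d[0] + cov_inv[1][1] * d[1]))

# Invented baseline for two traffic features (e.g. packet rate, byte rate)
baseline_mean = [10.0, 5.0]
cov = [[4.0, 1.0], [1.0, 2.0]]
ci = inv2(cov)

t2 = hotelling_t2([16.0, 9.0], baseline_mean, ci)
# a profile is flagged as an attack when t2 exceeds a threshold derived
# from the reference distribution (the paper obtains one via the CLT)
```

In higher dimensions the same quadratic form is computed with a general matrix inverse; the two-feature case just makes the arithmetic visible.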
Morris, Jeffrey S.
2012-01-01
In recent years, developments in molecular biotechnology have led to the increased promise of detecting and validating biomarkers, or molecular markers that relate to various biological or medical outcomes. Proteomics, the direct study of proteins in biological samples, plays an important role in the biomarker discovery process. These technologies produce complex, high dimensional functional and image data that present many analytical challenges that must be addressed properly for effective comparative proteomics studies that can yield potential biomarkers. Specific challenges include experimental design, preprocessing, feature extraction, and statistical analysis accounting for the inherent multiple testing issues. This paper reviews various computational aspects of comparative proteomic studies and summarizes contributions I, along with numerous collaborators, have made. First, there is an overview of comparative proteomics technologies, followed by a discussion of important experimental design and preprocessing issues that must be considered before statistical analysis can be done. Next, the two key approaches to analyzing proteomics data, feature extraction and functional modeling, are described. Feature extraction involves detection and quantification of discrete features like peaks or spots that theoretically correspond to different proteins in the sample. After an overview of the feature extraction approach, specific methods for mass spectrometry (Cromwell) and 2D gel electrophoresis (Pinnacle) are described. The functional modeling approach involves modeling the proteomic data in their entirety as functions or images. A general discussion of the approach is followed by the presentation of a specific method that can be applied, wavelet-based functional mixed models, and its extensions. All methods are illustrated by application to two example proteomic data sets, one from mass spectrometry and one from 2D gel electrophoresis. While the specific methods
A statistical approach to evaluate flood risk at the regional level: an application to Italy
NASA Astrophysics Data System (ADS)
Rossi, Mauro; Marchesini, Ivan; Salvati, Paola; Donnini, Marco; Guzzetti, Fausto; Sterlacchini, Simone; Zazzeri, Marco; Bonazzi, Alessandro; Carlesi, Andrea
2016-04-01
Floods are frequent and widespread in Italy, causing every year multiple fatalities and extensive damage to public and private structures. A pre-requisite for the development of mitigation schemes, including financial instruments such as insurance, is the ability to quantify their costs starting from the estimation of the underlying flood hazard. However, comprehensive and coherent information on flood-prone areas, and estimates of the frequency and intensity of flood events, are often not available at scales appropriate for risk pooling and diversification. In Italy, River Basin Hydrogeological Plans (PAI), prepared by basin administrations, are the basic descriptive, regulatory, technical and operational tools for environmental planning in flood-prone areas. Nevertheless, such plans do not cover the entire Italian territory, with significant gaps along the minor hydrographic network and in ungauged basins. Several process-based modelling approaches have been used by different basin administrations for flood hazard assessment, resulting in an inhomogeneous hazard zonation of the territory. As a result, flood hazard assessments and expected damage estimations across the different Italian basin administrations are not always coherent. To overcome these limitations, we propose a simplified multivariate statistical approach for regional flood hazard zonation coupled with a flood impact model. This modelling approach has been applied in different Italian basin administrations, allowing a preliminary but coherent and comparable estimation of the flood hazard and the associated impact. Model performance is evaluated by comparing the predicted flood-prone areas with the corresponding PAI zonation. The proposed approach will provide standardized information (following the EU Floods Directive specifications) on flood risk at a regional level which can in turn be more readily applied to assess the economic impacts of floods. Furthermore, in the assumption of an appropriate
Connectometry: A statistical approach harnessing the analytical potential of the local connectome.
Yeh, Fang-Cheng; Badre, David; Verstynen, Timothy
2016-01-15
Here we introduce the concept of the local connectome: the degree of connectivity between adjacent voxels within a white matter fascicle defined by the density of the diffusing spins. While most human structural connectomic analyses can be summarized as finding global connectivity patterns at either end of anatomical pathways, the analysis of local connectomes, termed connectometry, tracks the local connectivity patterns along the fiber pathways themselves in order to identify the subcomponents of the pathways that express significant associations with a study variable. This bottom-up analytical approach is made possible by reconstructing diffusion MRI data into a common stereotaxic space that allows for associating local connectomes across subjects. The substantial associations can then be tracked along the white matter pathways, and statistical inference is obtained using permutation tests on the length of coherent associations, corrected for multiple comparisons. Using two separate samples with different acquisition parameters, we show that connectometry captures variability within core white matter pathways in a statistically efficient manner, extracts meaningful variability from white matter pathways, complements graph-theoretic connectomic measures, and is more sensitive than region-of-interest approaches. PMID:26499808
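The permutation-inference step can be illustrated with a generic two-group permutation test on toy "association length" data (not diffusion MRI); connectometry additionally corrects over the lengths of tracked pathways, which this sketch omits:

```python
import random

def perm_pvalue(a, b, n_perm=5000, seed=7):
    """Two-sided permutation p-value for a difference in group means.
    The observed statistic is compared with its distribution under
    random relabelling of the pooled observations."""
    rng = random.Random(seed)
    obs = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one correction avoids p == 0

# Toy coherent-association lengths for two groups (invented numbers)
a = [40, 42, 45, 47, 50]
b = [30, 31, 33, 34, 35]
p = perm_pvalue(a, b)
```

Because the group separation here is complete, only the original labelling and its mirror reach the observed difference, so the p-value lands near 2/252.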
NASA Astrophysics Data System (ADS)
Baran, Sándor; Möller, Annette
2016-06-01
Forecast ensembles are typically employed to account for prediction uncertainties in numerical weather prediction models. However, ensembles often exhibit biases and dispersion errors, so they require statistical post-processing to improve their predictive performance. Two popular univariate post-processing models are Bayesian model averaging (BMA) and ensemble model output statistics (EMOS). In the last few years, increased interest has emerged in developing multivariate post-processing models that incorporate dependencies between weather quantities, such as a bivariate distribution for wind vectors or a more general setting allowing one to combine any types of weather variables. In line with a recently proposed approach to model temperature and wind speed jointly by a bivariate BMA model, this paper introduces an EMOS model for these weather quantities based on a bivariate truncated normal distribution. The bivariate EMOS model is applied to temperature and wind speed forecasts of the 8-member University of Washington mesoscale ensemble and the 11-member ALADIN-HUNEPS ensemble of the Hungarian Meteorological Service, and its predictive performance is compared to that of the bivariate BMA model and a multivariate Gaussian copula approach post-processing the margins with univariate EMOS. While the predictive skills of the compared methods are similar, the bivariate EMOS model requires considerably lower computation times than the bivariate BMA method.
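The univariate building block of EMOS maps the raw ensemble onto a single predictive normal distribution whose mean and variance are affine in the ensemble mean and ensemble variance. The coefficients below are illustrative placeholders; in practice they are estimated, e.g. by minimising the CRPS over a training period (and the paper's model is the bivariate truncated-normal extension of this idea):

```python
def emos_normal_params(ens, a, b, c, d):
    """Predictive normal parameters of a univariate EMOS model:
    mu = a + b * (ensemble mean), sigma^2 = c + d * (ensemble variance)."""
    n = len(ens)
    mean = sum(ens) / n
    var = sum((x - mean) ** 2 for x in ens) / (n - 1)
    return a + b * mean, c + d * var

# Hypothetical 5-member temperature ensemble (deg C) and placeholder coefficients
mu, sigma2 = emos_normal_params([19.2, 20.1, 18.7, 19.9, 20.4],
                                a=0.5, b=1.0, c=0.3, d=0.8)
```

The bias correction lives in (a, b) and the dispersion correction in (c, d); an underdispersive ensemble is fixed by d inflating the spread.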
NASA Astrophysics Data System (ADS)
Broothaerts, Nils; Verstraeten, Gert
2016-04-01
Reconstructing and quantifying human impact is an important step to understand human-environment interactions in the past. To fully understand the role of human impact in altering the environment during the Holocene, detailed reconstructions of the vegetation changes and quantitative measures of human impact on the landscape are needed. Statistical analysis of pollen data has recently been used to characterize vegetation changes and to extract semi-quantitative data on human impact. In this study, multivariate statistical analysis (cluster analysis and non-metric multidimensional scaling (NMDS)) of pollen data was used to reconstruct human-induced land use changes in two contrasting environments: central Belgium and SW Turkey. For each region, pollen data from different study sites were integrated. The data from central Belgium show the gradually increasing human impact from the Bronze Age onwards (ca. 3900 cal a BP), except for a temporary halt between 1900-1600 cal a BP, coupled with the Migration Period in Europe. Statistical analysis of pollen data from SW Turkey provides new integrated information on changing human impact through time in the Sagalassos territory, and shows that human impact was most intense during the Hellenistic and Roman Period (ca. 2200-1750 cal a BP) and decreased and changed in nature afterwards. In addition, regional vegetation estimates using the REVEALS model were made for each study site and were compared with the outcome of the statistical analysis of the pollen data. It shows that in some cases the statistical approach can be a more easily applicable alternative to the REVEALS model. Overall, the presented examples from two contrasting environments show that cluster analysis and NMDS are useful tools to provide semi-quantitative insights into the temporal and spatial vegetation changes related to increasing human impact. Moreover, the technique can be used to compare and integrate pollen datasets from different study sites within
NASA Astrophysics Data System (ADS)
Donner, Reik; Passow, Christian
2016-04-01
The appropriate statistical evaluation of recent changes in the occurrence of hydro-meteorological extreme events is of key importance for identifying trends in the behavior of climate extremes and associated impacts on ecosystems or technological infrastructures, as well as for validating the capability of models used for future climate scenarios to correctly represent such trends in the past decades. In this context, most recent studies have utilized conceptual approaches from extreme value theory based on parametric descriptions of the probability distribution functions of extremes. However, the application of such methods faces a few fundamental challenges: (1) The most widely used approaches of generalized extreme value (GEV) or generalized Pareto (GP) distributions rest on assumptions whose validity can often hardly be proven. (2) Due to the differentiation between extreme and normal values (peaks-over-threshold, block maxima), much information on the distribution of the variable of interest is not used at all by such methods, implying that the sample size of values effectively used for estimating the parameters of the GEV or GP distributions is severely limited for typical lengths of observational series. (3) The problem of parameter estimation is further complicated by the variety of possible statistical models mapping different aspects of temporal changes of extremes like seasonality or possibly non-linear trends. Reliably identifying the most appropriate model is a challenging task for the lengths of typical observational series. As an alternative to approaches based on extreme value theory, there have been a few attempts to transfer quantile regression approaches to statistically describing the time-dependence of climate extremes. In this context, a value exceeding a certain instantaneous percentile of the time-dependent probability distribution function of the data under study is considered to be an extreme event. In
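For concreteness, a small sketch of the block-maxima route that the abstract critiques: fitting a GEV distribution to annual maxima and reading off a return level. The data are synthetic (a Gumbel-distributed "daily" series is an assumption for illustration), and only the 50 yearly maxima reach the fit, demonstrating challenge (2) above:

```python
# Hedged sketch: GEV fit to block maxima with scipy; synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# 50 "years" of daily values; only each year's maximum enters the fit,
# so 18,250 observations shrink to an effective sample of 50.
daily = rng.gumbel(loc=20.0, scale=5.0, size=(50, 365))
annual_max = daily.max(axis=1)

c, loc, scale = stats.genextreme.fit(annual_max)

# 100-year return level: quantile exceeded with probability 1/100 per block.
rl_100 = stats.genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)
print(round(rl_100, 1))
```

The quantile-regression alternative the abstract advocates would instead model a time-dependent percentile of the full daily series, using all observations.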
A statistical approach for segregating cognitive task stages from multivariate fMRI BOLD time series
Demanuele, Charmaine; Bähner, Florian; Plichta, Michael M.; Kirsch, Peter; Tost, Heike; Meyer-Lindenberg, Andreas; Durstewitz, Daniel
2015-01-01
Multivariate pattern analysis can reveal new information from neuroimaging data to illuminate human cognition and its disturbances. Here, we develop a methodological approach, based on multivariate statistical/machine learning and time series analysis, to discern cognitive processing stages from functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) time series. We apply this method to data recorded from a group of healthy adults whilst performing a virtual reality version of the delayed win-shift radial arm maze (RAM) task. This task has been frequently used to study working memory and decision making in rodents. Using linear classifiers and multivariate test statistics in conjunction with time series bootstraps, we show that different cognitive stages of the task, as defined by the experimenter, namely, the encoding/retrieval, choice, reward and delay stages, can be statistically discriminated from the BOLD time series in brain areas relevant for decision making and working memory. Discrimination of these task stages was significantly reduced during poor behavioral performance in dorsolateral prefrontal cortex (DLPFC), but not in the primary visual cortex (V1). Experimenter-defined dissection of time series into class labels based on task structure was confirmed by an unsupervised, bottom-up approach based on Hidden Markov Models. Furthermore, we show that different groupings of recorded time points into cognitive event classes can be used to test hypotheses about the specific cognitive role of a given brain region during task execution. We found that the DLPFC strongly differentiated between task stages associated with different memory loads, but not between different visual-spatial aspects, whilst the reverse was true for V1. Our methodology illustrates how different aspects of cognitive information processing during one and the same task can be separated and attributed to specific brain regions based on information contained in
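The core classification step can be sketched with synthetic data: multivariate "BOLD" time points labeled by task stage, scored with a linear classifier under cross-validation. The stage names follow the abstract; the voxel counts, noise model and use of linear discriminant analysis are assumptions for illustration, not the study's pipeline:

```python
# Hedged sketch: linear classification of simulated multivariate time points
# into experimenter-defined task stages.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_voxels, n_per_stage = 30, 60
stages = ["encoding/retrieval", "choice", "reward", "delay"]
X_parts, y = [], []
for stage in stages:
    centre = rng.normal(0, 1, n_voxels)                     # stage-specific pattern
    X_parts.append(centre + rng.normal(0, 1.5, (n_per_stage, n_voxels)))
    y += [stage] * n_per_stage
X = np.vstack(X_parts)

# Cross-validated accuracy well above 4-class chance (0.25) indicates the
# stages are statistically discriminable from the multivariate signal.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, np.array(y), cv=5).mean()
print(round(acc, 2))
```

The paper additionally uses time series bootstraps to respect temporal dependence; plain cross-validation here stands in for that step.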
Comparing emerging and mature markets during times of crises: A non-extensive statistical approach
NASA Astrophysics Data System (ADS)
Namaki, A.; Koohi Lai, Z.; Jafari, G. R.; Raei, R.; Tehrani, R.
2013-07-01
One of the important issues in finance and economics for both scholars and practitioners is to describe the behavior of markets, especially during times of crises. In this paper, we analyze the behavior of some mature and emerging markets with a Tsallis entropy framework that is a non-extensive statistical approach based on non-linear dynamics. During the past decade, this technique has been successfully applied to a considerable number of complex systems such as stock markets in order to describe the non-Gaussian behavior of these systems. In this approach, there is a parameter q, which is a measure of deviation from Gaussianity, that has proved to be a good index for detecting crises. We investigate the behavior of this parameter at different time scales for the market indices. The pattern of q differs between mature and emerging markets. The findings show the robustness of this approach for tracking market conditions over time. It is obvious that, in times of crises, q is much greater than in other times. In addition, the response of emerging markets to global events is delayed compared to that of mature markets, and tends to a Gaussian profile on increasing the scale. This approach could be very useful in risk and portfolio management for detecting crises by following the parameter q across time scales.
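Estimating q itself requires fitting a Tsallis (q-Gaussian) distribution; as a simpler stand-in, the sketch below uses excess kurtosis of synthetic heavy-tailed "returns" to show the scale behavior the abstract describes: deviation from Gaussianity is large at short scales and shrinks toward a Gaussian profile as returns are aggregated. The Student-t return model and the kurtosis proxy are assumptions for illustration:

```python
# Hedged sketch: non-Gaussianity of aggregated returns across time scales,
# with excess kurtosis as a proxy for the Tsallis q deviation.
import numpy as np

rng = np.random.default_rng(7)
returns = rng.standard_t(df=5, size=2**14)   # heavy-tailed synthetic returns

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0   # 0 for a Gaussian

kurt = {}
for scale in (1, 4, 16, 64):
    agg = returns[: len(returns) // scale * scale].reshape(-1, scale).sum(axis=1)
    kurt[scale] = excess_kurtosis(agg)
print({s: round(k, 2) for s, k in kurt.items()})
```

By the central limit theorem, the aggregated series tends toward zero excess kurtosis, mirroring the Gaussian profile at large scales reported in the record.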
Risk management for moisture related effects in dry manufacturing processes: a statistical approach.
Quiroz, Jorge; Strong, John; Zhang, Lanju
2016-03-01
A risk- and science-based approach to control the quality in pharmaceutical manufacturing includes a full understanding of how product attributes and process parameters relate to product performance through a proactive approach in formulation and process development. For dry manufacturing, where moisture content is not directly manipulated within the process, the variability in moisture of the incoming raw materials can impact both the processability and drug product quality attributes. A statistical approach is developed using individual raw material historical lots as a basis for the calculation of tolerance intervals for drug product moisture content so that risks associated with excursions in moisture content can be mitigated. The proposed method is based on a model-independent approach that uses available data to estimate parameters of interest that describe the population of blend moisture content values and which do not require knowledge of the individual blend moisture content values. Another advantage of the proposed tolerance intervals is that they do not require tabulated tolerance factors, which facilitates implementation in any spreadsheet program such as Microsoft Excel. A computational example is used to demonstrate the proposed method. PMID:25384711
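As a hedged illustration of a spreadsheet-friendly tolerance interval (not the paper's own method): a two-sided normal tolerance interval using Howe's closed-form approximation to the tolerance factor in place of tabulated values. The moisture numbers are invented:

```python
# Hedged sketch: two-sided normal tolerance interval for blend moisture,
# with Howe's (1969) approximate tolerance factor; data are invented.
import numpy as np
from scipy import stats

moisture = np.array([2.1, 2.4, 1.9, 2.2, 2.0, 2.3, 2.5, 2.1, 2.2, 2.0])  # % w/w
n = len(moisture)
mean, sd = moisture.mean(), moisture.std(ddof=1)

coverage, conf = 0.95, 0.95   # cover 95% of the population with 95% confidence
z = stats.norm.ppf((1 + coverage) / 2)
chi2 = stats.chi2.ppf(1 - conf, n - 1)
k = z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)   # Howe's approximation

lower, upper = mean - k * sd, mean + k * sd
print(round(lower, 2), round(upper, 2))
```

Because every quantity is an elementary normal or chi-squared function, the same calculation can be reproduced in Excel with NORM.S.INV and CHISQ.INV.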
New advances in methodology for statistical tests useful in geostatistical studies
Borgman, L.E.
1988-05-01
Methodology for statistical procedures to perform tests of hypothesis pertaining to various aspects of geostatistical investigations has been slow in developing. The correlated nature of the data precludes most classical tests and makes the design of new tests difficult. Recent studies have led to modifications of the classical t test which allow for the intercorrelation. In addition, results for certain nonparametric tests have been obtained. The conclusions of these studies provide a variety of new tools for the geostatistician in deciding questions on significant differences and magnitudes.
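A hedged sketch of the kind of modification the abstract alludes to (not Borgman's specific test): a t test whose effective sample size is deflated for lag-1 autocorrelation via n_eff = n(1 - r1)/(1 + r1), a standard correction for AR(1)-like data; the classical test would overstate significance by using the full n:

```python
# Hedged sketch: mean test on an autocorrelated series with an
# effective-sample-size correction; the AR(1) data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = np.empty(500)                       # AR(1) series with true mean 0
x[0] = rng.normal()
for t in range(1, len(x)):
    x[t] = 0.6 * x[t - 1] + rng.normal()

n = len(x)
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]   # lag-1 sample autocorrelation
n_eff = n * (1 - r1) / (1 + r1)         # effective sample size

t_stat = x.mean() / (x.std(ddof=1) / np.sqrt(n_eff))
p = 2 * stats.t.sf(abs(t_stat), df=n_eff - 1)
print(round(r1, 2), round(p, 3))
```

With positive autocorrelation n_eff is far below n, so the corrected test is properly conservative about "significant differences" in correlated data.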
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Busquets, Anthony M.
2000-01-01
A simulation experiment was performed to assess situation awareness (SA) and workload of pilots while monitoring simulated autoland operations in Instrument Meteorological Conditions with three advanced display concepts: two enhanced electronic flight information system (EFIS)-type display concepts and one totally synthetic, integrated pictorial display concept. Each concept incorporated sensor-derived wireframe runway and iconic depictions of sensor-detected traffic in different locations on the display media. Various scenarios, involving conflicting traffic situation assessments, main display failures, and navigation/autopilot system errors, were used to assess the pilots' SA and workload during autoland approaches with the display concepts. From the results, for each scenario, the integrated pictorial display concept provided the pilots with statistically equivalent or substantially improved SA over the other display concepts. In addition to increased SA, subjective rankings indicated that the pictorial concept offered reductions in overall pilot workload (in both mean ranking and spread) over the two enhanced EFIS-type display concepts. Out of the display concepts flown, the pilots ranked the pictorial concept as the display that was easiest to use to maintain situational awareness, to monitor an autoland approach, to interpret information from the runway and obstacle detecting sensor systems, and to make the decision to go around.
Recent treatment advances and novel therapeutic approaches in epilepsy
Serrano, Enrique
2015-01-01
The purpose of this article is to review recent advances in the treatment of epilepsy. It includes five antiepileptic drugs that have been recently added to the pharmacologic armamentarium and surgical techniques that have been developed in the last few years. Finally, we review ongoing research that may have a potential role in future treatments of epilepsy. PMID:26097734
Measuring Alumna Career Advancement: An Approach Based on Educational Expectations.
ERIC Educational Resources Information Center
Ben-Ur, Tamar; Rogers, Glen
Alverno College (Wisconsin), a women's liberal arts college, has developed an Alumni Career Level Classification (AACLC) scheme to measure alumna career advancement and demonstrate institutional accountability. This validation study was part of a larger longitudinal study of two entire cohorts of students entering the college in 1976 and 1977, of…
NASA Astrophysics Data System (ADS)
Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo
2014-05-01
Seismic, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question, which of the provided kinematic source inversion solutions is most reliable and most robust, and — more generally — how accurate are fault parameterization and solution predictions? These issues are not included in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based (a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library that uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e. for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer leads to a decreasing misfit. Identification of this cross-over is of importance as it reveals the resolution power of the studied data set (i.e. teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
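QUESO itself is a C++ uncertainty-quantification library; the standalone toy below only illustrates the Bayesian machinery the abstract describes: a forward model, a misfit-based likelihood, and a Metropolis MCMC sampler recovering a single "rupture" parameter from noisy synthetic waveforms. The forward model, noise level and prior are invented for the sketch:

```python
# Hedged sketch: Metropolis MCMC for one synthetic source parameter.
import numpy as np

rng = np.random.default_rng(5)
true_slip = 2.0
def forward(slip):                     # stand-in forward model (not seismological)
    return slip * np.sin(np.linspace(0, 3, 40))
data = forward(true_slip) + rng.normal(0, 0.2, 40)

def log_post(slip):                    # flat prior on (0, 10), Gaussian misfit
    if not 0 < slip < 10:
        return -np.inf
    return -0.5 * np.sum((data - forward(slip)) ** 2) / 0.2**2

samples, cur = [], 5.0
for _ in range(5000):
    prop = cur + rng.normal(0, 0.3)    # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(cur):
        cur = prop                     # Metropolis accept
    samples.append(cur)
post = np.array(samples[1000:])        # drop burn-in
print(round(post.mean(), 1))
```

The retained samples map the full posterior of the parameter, which is the uncertainty information the abstract emphasizes over a single best-fit solution.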
NASA Astrophysics Data System (ADS)
Fernández-González, Daniel; Martín-Duarte, Ramón; Ruiz-Bustinza, Íñigo; Mochón, Javier; González-Gasca, Carmen; Verdeja, Luis Felipe
2016-06-01
Blast furnace operators expect to get sinter with homogenous and regular properties (chemical and mechanical), necessary to ensure regular blast furnace operation. Blends for sintering also include several iron by-products and other wastes that are obtained in different processes inside the steelworks. Due to their source, the availability of such materials is not always consistent, but their total production should be consumed in the sintering process, to both save money and recycle wastes. The main aim of this paper is to obtain the least expensive iron ore blend for the sintering process, which will provide suitable chemical and mechanical features for the homogeneous and regular operation of the blast furnace. Statistical tools were systematically employed to analyze historical data, including linear and partial correlations applied to the data and fuzzy clustering based on the Sugeno Fuzzy Inference System to establish relationships among the available variables.
Statistical approaches to account for false-positive errors in environmental DNA samples.
Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid
2016-05-01
Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies. PMID:26558345
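A hedged sketch of the class of model the abstract advocates (not the authors' code): a simple site-occupancy model with a false-positive detection probability, fitted by maximum likelihood to simulated eDNA-style detection histories. Site counts, survey counts and rates are invented for the example:

```python
# Hedged sketch: occupancy model with false positives, maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(11)
n_sites, J = 300, 5
psi, p11, p10 = 0.6, 0.7, 0.05                     # occupancy, TP rate, FP rate
z = rng.uniform(size=n_sites) < psi                # latent true occupancy
p = np.where(z[:, None], p11, p10)
y = rng.uniform(size=(n_sites, J)) < p             # detection histories

def nll(theta):
    # Site likelihood mixes the occupied and unoccupied detection processes.
    psi_, p11_, p10_ = expit(theta)
    k = y.sum(axis=1)
    l_occ = p11_**k * (1 - p11_) ** (J - k)
    l_unocc = p10_**k * (1 - p10_) ** (J - k)
    return -np.sum(np.log(psi_ * l_occ + (1 - psi_) * l_unocc))

fit = minimize(nll, x0=[0.0, 1.0, -2.0], method="Nelder-Mead")
psi_hat, p11_hat, p10_hat = expit(fit.x)
print(round(psi_hat, 2), round(p11_hat, 2), round(p10_hat, 2))
```

Ignoring p10 (setting it to zero) in the same simulation inflates the occupancy estimate, which is the bias the abstract warns about.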
Comparison of six statistical approaches in the selection of appropriate fish growth models
NASA Astrophysics Data System (ADS)
Zhu, Lixin; Li, Lifang; Liang, Zhenlin
2009-09-01
The performance of six statistical approaches, which can be used for selection of the best model to describe the growth of individual fish, was analyzed using simulated and real length-at-age data. The six approaches include the coefficient of determination (R²), the adjusted coefficient of determination (adj.-R²), the root mean squared error (RMSE), Akaike's information criterion (AIC), the bias-corrected AIC (AICc) and the Bayesian information criterion (BIC). The simulation data were generated by five growth models with different numbers of parameters. Four sets of real data were taken from the literature. The parameters in each of the five growth models were estimated using the maximum likelihood method under the assumption of an additive error structure for the data. The model best supported by the data was identified using each of the six approaches. The results show that R² and RMSE have the same properties and perform worst. Sample size affects the performance of adj.-R², AIC, AICc and BIC. Adj.-R² does better in small samples than in large samples. AIC is not suitable for small samples and tends to select a more complex model as the sample size becomes large. AICc and BIC perform best in small and large samples, respectively. Use of AICc or BIC is recommended for selection of a fish growth model, according to the size of the length-at-age data.
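The three information criteria compared above have simple closed forms; writing them out makes the small-sample behavior concrete (the log-likelihood and sample-size values below are arbitrary illustration numbers):

```python
# The standard formulas for AIC, AICc and BIC, for a model with
# log-likelihood logL, k parameters and n observations.
import numpy as np

def aic(logL, k):
    return 2 * k - 2 * logL

def aicc(logL, k, n):
    # Small-sample bias correction; converges to AIC as n grows.
    return aic(logL, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(logL, k, n):
    return k * np.log(n) - 2 * logL

# With only n = 15 observations, AICc penalizes a 5-parameter model far more
# heavily than AIC does, matching the abstract's small-sample finding.
logL, n = -50.0, 15
print(aic(logL, 5), round(aicc(logL, 5, n), 1), round(bic(logL, 5, n), 1))
```

As n grows the AICc correction term vanishes, while BIC's k·ln(n) penalty keeps growing, which is why the two excel at opposite ends of the sample-size range.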
NASA Astrophysics Data System (ADS)
Mfumu Kihumba, Antoine; Vanclooster, Marnik
2013-04-01
Drinking water in Kinshasa, the capital of the Democratic Republic of Congo, is provided by extracting groundwater from the local aquifer, particularly in peripheral areas. The exploited groundwater body is mainly unconfined and located within a continuous detrital aquifer, primarily composed of sedimentary formations. However, the aquifer is subjected to an increasing threat of anthropogenic pollution pressure. Understanding the detailed origin of this pollution pressure is important for sustainable drinking water management in Kinshasa. The present study aims to explain the observed nitrate pollution problem, nitrate being considered as a good tracer for other pollution threats. The analysis is made in terms of physical attributes that are readily available using a statistical modelling approach. The nitrate data were taken from a historical groundwater quality assessment study and re-analysed. The physical attributes are related to the topography, land use, geology and hydrogeology of the region. Prior to the statistical modelling, intrinsic and specific vulnerability for nitrate pollution was assessed. This vulnerability assessment showed that the alluvium area in the northern part of the region is the most vulnerable area. This area consists of urban land use with poor sanitation. Re-analysis of the nitrate pollution data demonstrated that the spatial variability of nitrate concentrations in the groundwater body is high, and coherent with the fragmented land use of the region and the intrinsic and specific vulnerability maps. Multiple regression and regression tree analysis were used for the statistical modelling. The results demonstrated the significant impact of land use variables on the Kinshasa groundwater nitrate pollution and the need for a detailed delineation of groundwater capture zones around the monitoring stations. Key words: Groundwater, Isotopic, Kinshasa, Modelling, Pollution, Physico-chemical.
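The two modelling techniques named in the record can be sketched side by side on invented attributes (the predictors, coefficients and noise level below are assumptions for illustration, not the study's variables):

```python
# Hedged sketch: multiple regression vs. regression tree on synthetic
# nitrate data driven by two invented physical attributes.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
n = 120
urban = rng.uniform(0, 1, n)            # urban land-use fraction near the well
depth = rng.uniform(2, 30, n)           # depth to water table (m)
nitrate = 20 + 40 * urban - 0.8 * depth + rng.normal(0, 5, n)  # mg/L, synthetic

X = np.column_stack([urban, depth])
lm = LinearRegression().fit(X, nitrate)               # multiple regression
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, nitrate)
print(np.round(lm.coef_, 1), round(tree.score(X, nitrate), 2))
```

The regression coefficients quantify each attribute's average effect, while the tree partitions the region into homogeneous groups, two complementary views of how land use drives pollution.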
McManamay, Ryan A
2014-01-01
Despite the ubiquitous existence of dams within riverscapes, much of our knowledge about dams and their environmental effects remains context-specific. Hydrology, more than any other environmental variable, has been studied in great detail with regard to dam regulation. While much progress has been made in generalizing the hydrologic effects of regulation by large dams, many aspects of hydrology show site-specific fidelity to dam operations, small dams (including diversions), and regional hydrologic regimes. A statistical modeling framework is presented to quantify and generalize hydrologic responses to varying degrees of dam regulation. Specifically, the objectives were to 1) compare the effects of local versus cumulative dam regulation, 2) determine the importance of different regional hydrologic regimes in influencing hydrologic responses to dams, and 3) evaluate how different regulation contexts lead to error in predicting hydrologic responses to dams. Overall, model performance was poor in quantifying the magnitude of hydrologic responses, but performance was sufficient in classifying hydrologic responses as negative or positive. Responses of some hydrologic indices to dam regulation were highly dependent upon hydrologic class membership and the purpose of the dam. The opposing coefficients between local and cumulative-dam predictors suggested that hydrologic responses to cumulative dam regulation are complex, and predicting the hydrology downstream of individual dams, as opposed to multiple dams, may be more easily accomplished using statistical approaches. Results also suggested that particular contexts, including multipurpose dams, high cumulative regulation by multiple dams, diversions, close proximity to dams, and certain hydrologic classes are all sources of increased error when predicting hydrologic responses to dams. Statistical models, such as the ones presented herein, show promise in their ability to model the effects of dam regulation at
A risk-based approach to management of leachables utilizing statistical analysis of extractables.
Stults, Cheryl L M; Mikl, Jaromir; Whelehan, Oliver; Morrical, Bradley; Duffield, William; Nagao, Lee M
2015-04-01
To incorporate quality by design concepts into the management of leachables, an emphasis is often put on understanding the extractable profile for the materials of construction for manufacturing disposables, container-closure, or delivery systems. Component manufacturing processes may also impact the extractable profile. An approach was developed to (1) identify critical components that may be sources of leachables, (2) enable an understanding of manufacturing process factors that affect extractable profiles, (3) determine if quantitative models can be developed that predict the effect of those key factors, and (4) evaluate the practical impact of the key factors on the product. A risk evaluation for an inhalation product identified injection molding as a key process. Designed experiments were performed to evaluate the impact of molding process parameters on the extractable profile from an ABS inhaler component. Statistical analysis of the resulting GC chromatographic profiles identified processing factors that were correlated with peak levels in the extractable profiles. The combination of statistically significant molding process parameters was different for different types of extractable compounds. ANOVA models were used to obtain optimal process settings and predict extractable levels for a selected number of compounds. The proposed paradigm may be applied to evaluate the impact of material composition and processing parameters on extractable profiles and utilized to manage product leachables early in the development process and throughout the product lifecycle. PMID:25294001
NASA Astrophysics Data System (ADS)
García, G. D.; Sánchez-Varretti, F. O.; Romá, F.; Ramirez-Pastor, A. J.
2009-04-01
A simple statistical mechanical approach for studying multilayer adsorption of interacting rigid molecular chains of length k (k-mers) has been presented. The new theoretical framework has been developed as a generalization in the spirit of the lattice-gas model and the classical Bragg-Williams (BWA) and quasi-chemical (QCA) approximations. The derivation of the equilibrium equations allows the extension of the well-known Brunauer-Emmett-Teller (BET) isotherm to more complex systems. The formalism reproduces the classical theory for monomers, leads to the exact statistical thermodynamics of interacting k-mers adsorbed in one dimension, and provides a close approximation for two-dimensional systems accounting for multisite occupancy and lateral interactions in the first layer. Comparisons between analytical data and Monte Carlo simulations were performed in order to test the validity of the theoretical model. The study showed that: (i) the resulting thermodynamic description obtained from QCA is significantly better than that obtained from BWA and still mathematically tractable; (ii) for non-interacting k-mers, the BET equation leads to an underestimate of the true monolayer volume; (iii) attractive lateral interactions compensate the effect of the multisite occupancy and the monolayer volume predicted by BET equation agrees very well with the corresponding true value; and (iv) repulsive couplings between the ad-molecules hamper the formation of the monolayer and the BET results are not good (even worse than those obtained in the non-interacting case).
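For reference, the classical BET isotherm that this formalism generalizes can be evaluated directly; the sketch below plots nothing and simply computes coverage v/vm versus relative pressure x = p/p0 for an assumed BET constant c (the value c = 100 is an arbitrary illustration):

```python
# The classical Brunauer-Emmett-Teller multilayer isotherm,
# v/vm = c*x / ((1 - x) * (1 - x + c*x)),  x = p/p0.
import numpy as np

def bet_coverage(x, c):
    """Multilayer coverage v/vm for relative pressure x and BET constant c."""
    return c * x / ((1 - x) * (1 - x + c * x))

x = np.linspace(0.05, 0.6, 5)
print(np.round(bet_coverage(x, c=100.0), 2))
```

Coverage grows monotonically with x and diverges as x approaches 1, the unlimited-multilayer behavior that the k-mer theory in the record corrects for multisite occupancy and lateral interactions.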
Statistical approaches for the determination of cut points in anti-drug antibody bioassays.
Schaarschmidt, Frank; Hofmann, Matthias; Jaki, Thomas; Grün, Bettina; Hothorn, Ludwig A
2015-03-01
Cut points in immunogenicity assays are used to classify future specimens into anti-drug antibody (ADA) positive or negative. To determine a cut point during pre-study validation, drug-naive specimens are often analyzed on multiple microtiter plates taking sources of future variability into account, such as runs, days, analysts, gender, drug-spiked and the biological variability of un-spiked specimens themselves. Five phenomena may complicate the statistical cut point estimation: i) drug-naive specimens may contain already ADA-positives or lead to signals that erroneously appear to be ADA-positive, ii) mean differences between plates may remain after normalization of observations by negative control means, iii) experimental designs may contain several factors in a crossed or hierarchical structure, iv) low sample sizes in such complex designs lead to low power for pre-tests on distribution, outliers and variance structure, and v) the choice between normal and log-normal distribution has a serious impact on the cut point. We discuss statistical approaches to account for these complex data: i) mixture models, which can be used to analyze sets of specimens containing an unknown, possibly larger proportion of ADA-positive specimens, ii) random effects models, followed by the estimation of prediction intervals, which provide cut points while accounting for several factors, and iii) diagnostic plots, which allow the post hoc assessment of model assumptions. All methods discussed are available in the corresponding R add-on package mixADA. PMID:25733352
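A deliberately simplified cut-point sketch in the spirit of this record (the paper's full treatment uses mixture and random-effects models via its mixADA package): a parametric 95th-percentile cut point on log-transformed drug-naive signals, compared with the nonparametric sample percentile; the impact of the log-normal assumption noted in point (v) is exactly the gap between the two. The signal distribution is simulated:

```python
# Hedged sketch: parametric vs. nonparametric 95th-percentile cut points
# for simulated drug-naive assay signals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
signals = rng.lognormal(mean=0.0, sigma=0.25, size=120)   # drug-naive responses

logs = np.log(signals)
# Normal-theory cut point on the log scale, back-transformed.
parametric = np.exp(logs.mean() + stats.norm.ppf(0.95) * logs.std(ddof=1))
# Distribution-free alternative: the empirical 95th percentile.
nonparametric = np.quantile(signals, 0.95)
print(round(parametric, 2), round(nonparametric, 2))
```

In pre-study validation the parametric route would further replace the plug-in quantile with a prediction interval from a random effects model, so that plate, run and analyst variability widen the cut point appropriately.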
A statistical approach for estimating the radiocesium interception potential of soils
Waegeneers, N.; Smolders, E.; Merckx, R.
1999-05-01
The solid-liquid distribution of radiocesium (¹³⁷Cs) varies extensively among soil types. A statistical approach was used to relate ¹³⁷Cs sorption with soil properties. Eighty-eight pasture soils were sampled in Belgium. All soil samples were characterized for texture, cation exchange capacity, exchangeable K, Ca, and Mg content, pH and organic C content. Soil samples were equilibrated in a 10⁻¹ M CaCl₂ / 5×10⁻⁴ M KCl solution. Adsorption of ¹³⁷Cs was measured after 16 h in this medium. The solid-liquid concentration ratio, K_D, of ¹³⁷Cs in this specific ionic scenario allows for the calculation of the Radiocesium Interception Potential (RIP), which in turn can be used for K_D predictions for a wide range of different ionic scenarios. The RIP varied from 54 to 11,200 mmol_c kg⁻¹. Stepwise multiple regression between RIP and soil characteristics yielded statistically significant models for samples from both northern and southern Belgium. For northern Belgium the regression model included only texture fractions, whereas for southern Belgium the regressors were texture fractions and soil pH. The correlations between the RIP and soil characteristics depend strongly on the geological origin of the soil.
Chemical entity recognition in patents by combining dictionary-based and statistical approaches.
Akhondi, Saber A; Pons, Ewoud; Afzal, Zubair; van Haagen, Herman; Becker, Benedikt F H; Hettne, Kristina M; van Mulligen, Erik M; Kors, Jan A
2016-01-01
We describe the development of a chemical entity recognition system and its application in the CHEMDNER-patent track of BioCreative 2015. This community challenge includes a Chemical Entity Mention in Patents (CEMP) recognition task and a Chemical Passage Detection (CPD) classification task. We addressed both tasks by an ensemble system that combines a dictionary-based approach with a statistical one. For this purpose the performance of several lexical resources was assessed using Peregrine, our open-source indexing engine. We combined our dictionary-based results on the patent corpus with the results of tmChem, a chemical recognizer using a conditional random field classifier. To improve the performance of tmChem, we utilized three additional features, viz. part-of-speech tags, lemmas and word-vector clusters. When evaluated on the training data, our final system obtained an F-score of 85.21% for the CEMP task, and an accuracy of 91.53% for the CPD task. On the test set, the best system ranked sixth among 21 teams for CEMP with an F-score of 86.82%, and second among nine teams for CPD with an accuracy of 94.23%. The differences in performance between the best ensemble system and the statistical system alone were small. Database URL: http://biosemantics.org/chemdner-patents. PMID:27141091
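A toy version of the dictionary half of such an ensemble (the statistical half uses tmChem's conditional random fields; the dictionary terms and sentence below are invented, and real systems use indexing engines such as Peregrine rather than substring search):

```python
# Hedged sketch: longest-match-first dictionary tagging of chemical mentions.
chem_dict = {"aspirin", "acetylsalicylic acid", "ibuprofen", "ethanol"}

def tag_chemicals(text):
    """Return (start, end, term) spans of dictionary hits, longest terms first."""
    hits, low = [], text.lower()
    for term in sorted(chem_dict, key=len, reverse=True):
        start = low.find(term)
        while start != -1:
            hits.append((start, start + len(term), term))
            start = low.find(term, start + 1)
    return sorted(hits)

print(tag_chemicals("Aspirin (acetylsalicylic acid) dissolved in ethanol."))
```

An ensemble would then merge these spans with the CRF recognizer's output, e.g. by union with priority rules, before scoring against the CEMP annotations.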
Kumar, Ramya; Lahann, Joerg
2016-07-01
The performance of polymer interfaces in biology is governed by a wide spectrum of interfacial properties. With the ultimate goal of identifying design parameters for stem cell culture coatings, we developed a statistical model that describes the dependence of brush properties on surface-initiated polymerization (SIP) parameters. Employing a design of experiments (DOE) approach, we identified operating boundaries within which four gel architecture regimes can be realized, including a new regime of associated brushes in thin films. Our statistical model can accurately predict the brush thickness and the degree of intermolecular association of poly[{2-(methacryloyloxy) ethyl} dimethyl-(3-sulfopropyl) ammonium hydroxide] (PMEDSAH), a previously reported synthetic substrate for feeder-free and xeno-free culture of human embryonic stem cells. DOE-based multifunctional predictions offer a powerful quantitative framework for designing polymer interfaces. For example, model predictions can be used to decrease the critical thickness at which the wettability transition occurs by simply increasing the catalyst quantity from 1 to 3 mol %. PMID:27268965
Generalized Deam-Edwards approach to the statistical mechanics of randomly crosslinked systems
NASA Astrophysics Data System (ADS)
Xing, Xiangjun; Lu, Bing-Sui; Ye, Fangfu; Goldbart, Paul M.
2013-08-01
We address the statistical mechanics of randomly and permanently crosslinked networks. We develop a theoretical framework (vulcanization theory) which can be used to systematically analyze the correlation between the statistical properties of random networks and their histories of formation. Generalizing the original idea of Deam and Edwards, we consider an instantaneous crosslinking process, where all crosslinkers (modeled as Gaussian springs) are introduced randomly at once in an equilibrium liquid state, referred to as the preparation state. The probability that two functional sites are crosslinked by a spring decreases exponentially with the square of their separation. After formally averaging over network connectivity, we obtain an effective theory with all degrees of freedom replicated 1 + n times. Two thermodynamic ensembles, the preparation ensemble and the measurement ensemble, naturally appear in this theory. The former describes the thermodynamic fluctuations in the state of preparation, while the latter describes the thermodynamic fluctuations in the state of measurement. We classify various correlation functions and discuss their physical significance. In particular, the memory correlation functions characterize how the properties of networks depend on their method of preparation, and are the hallmark properties of all randomly crosslinked materials. We clarify the essential difference between our approach and that of Deam-Edwards, and discuss the saddle-point order parameters and their physical significance. Finally, we discuss the connection between the saddle-point approximation of vulcanization theory and the classical theory of rubber elasticity, as well as the neo-classical theory of nematic elastomers.
A Hybrid Statistics/Amplitude Approach to the Theory of Interacting Drift Waves and Zonal Flows
NASA Astrophysics Data System (ADS)
Parker, Jeffrey; Krommes, John
2012-10-01
An approach to the theory of drift-wave-zonal-flow systems is adopted in which only the DW statistics, but the full ZF amplitude, are kept. Any statistical description of turbulence must inevitably face the closure problem. A particular closure, the Stochastic Structural Stability Theory (SSST), has recently been studied in plasma [B. F. Farrell and P. J. Ioannou, Phys. Plasmas 16, 112903 (2009)] as well as atmospheric-science contexts. First, the predictions of the SSST are examined in the weakly inhomogeneous limit, using the generalized Hasegawa-Mima model as a simple example. It is found that the equations do not admit a complete solution, as the characteristic ZF scale cannot be calculated. To address that deficiency, an analysis is performed of a bifurcation from a DW-only state to a DW-ZF state in the Hasegawa-Wakatani model in order to gain analytical insight into a nonlinear DW-ZF equilibrium, including prediction of the characteristic scale. The calculation permits discussion of the relative importance of eddy shearing and coupling to damped eigenmodes for the saturation of the self-consistently regulated turbulence level.
Drought episodes over Greece as simulated by dynamical and statistical downscaling approaches
NASA Astrophysics Data System (ADS)
Anagnostopoulou, Christina
2016-04-01
Drought over the Greek region is characterized by a strong seasonal cycle and large spatial variability. Dry spells longer than 10 consecutive days largely determine the duration and intensity of Greek drought. Moreover, an increasing trend in the frequency of drought episodes has been observed, especially during the last 20 years of the 20th century. However, the most recent regional climate models (RCMs) present discrepancies compared to observed precipitation, although they are able to reproduce the main patterns of atmospheric circulation. In this study, both a statistical and a dynamical downscaling approach are used to quantify drought episodes over Greece by simulating the Standardized Precipitation Index (SPI) for different time scales (3, 6, and 12 months). A statistical downscaling technique based on an artificial neural network (ANN) is employed for the estimation of SPI over Greece, while this drought index is also estimated using the RCM precipitation for the time period 1961-1990. Overall, it was found that the drought characteristics (intensity, duration, and spatial extent) were well reproduced by the regional climate models for the long-term drought index (SPI12), while the ANN simulations are better for the short-term drought index (SPI3).
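For readers unfamiliar with the index, the SPI computation can be sketched as below. This is a simplified, empirical-CDF variant (the operational SPI fits a gamma distribution to the k-month accumulations before mapping to standard-normal quantiles); the data and names are illustrative.

```python
from statistics import NormalDist

def spi(precip, k=3):
    """Standardized Precipitation Index over k-month accumulations.

    Empirical-CDF shortcut: ranks of the accumulated totals are mapped
    straight to standard-normal quantiles (the operational SPI fits a
    gamma distribution first)."""
    sums = [sum(precip[i - k + 1:i + 1]) for i in range(k - 1, len(precip))]
    n = len(sums)
    out = []
    for s in sums:
        below = sum(1 for v in sums if v < s)
        ties = sum(1 for v in sums if v == s)
        p = (below + 0.5 * ties) / n          # plotting-position probability
        out.append(NormalDist().inv_cdf(p))
    return out

monthly = list(range(1, 25))                  # toy monthly precipitation record
index = spi(monthly, k=3)                     # one SPI3 value per 3-month window
```

Values near 0 indicate normal conditions at the chosen time scale; large negative values indicate drought.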
Chemical entity recognition in patents by combining dictionary-based and statistical approaches
Akhondi, Saber A.; Pons, Ewoud; Afzal, Zubair; van Haagen, Herman; Becker, Benedikt F.H.; Hettne, Kristina M.; van Mulligen, Erik M.; Kors, Jan A.
2016-01-01
We describe the development of a chemical entity recognition system and its application in the CHEMDNER-patent track of BioCreative 2015. This community challenge includes a Chemical Entity Mention in Patents (CEMP) recognition task and a Chemical Passage Detection (CPD) classification task. We addressed both tasks by an ensemble system that combines a dictionary-based approach with a statistical one. For this purpose the performance of several lexical resources was assessed using Peregrine, our open-source indexing engine. We combined our dictionary-based results on the patent corpus with the results of tmChem, a chemical recognizer using a conditional random field classifier. To improve the performance of tmChem, we utilized three additional features, viz. part-of-speech tags, lemmas and word-vector clusters. When evaluated on the training data, our final system obtained an F-score of 85.21% for the CEMP task, and an accuracy of 91.53% for the CPD task. On the test set, the best system ranked sixth among 21 teams for CEMP with an F-score of 86.82%, and second among nine teams for CPD with an accuracy of 94.23%. The differences in performance between the best ensemble system and the statistical system separately were small. Database URL: http://biosemantics.org/chemdner-patents PMID:27141091
Rapp, J.B.
1991-01-01
Q-mode factor analysis was used to quantitate the distribution of the major aliphatic hydrocarbon (n-alkanes, pristane, phytane) systems in sediments from a variety of marine environments. The compositions of the pure end members of the systems were obtained from factor scores and the distribution of the systems within each sample was obtained from factor loadings. All the data, from the diverse environments sampled (estuarine (San Francisco Bay), fresh-water (San Francisco Peninsula), polar-marine (Antarctica) and geothermal-marine (Gorda Ridge) sediments), were reduced to three major systems: a terrestrial system (mostly high molecular weight aliphatics with odd-numbered-carbon predominance), a mature system (mostly low molecular weight aliphatics without predominance) and a system containing mostly high molecular weight aliphatics with even-numbered-carbon predominance. With this statistical approach, it is possible to assign the percentage contribution from various sources to the observed distribution of aliphatic hydrocarbons in each sediment sample.
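The end-member decomposition can be illustrated with a toy calculation: given known end-member compositions, the per-sample source contributions (the factor loadings) follow from least squares. The profiles below are invented stand-ins, not the study's data.

```python
import numpy as np

# invented end-member alkane profiles (columns: terrestrial, mature, even-carbon;
# rows: low-, mid-, high-molecular-weight fractions; each column sums to 1)
E = np.array([[0.1, 0.6, 0.2],
              [0.2, 0.3, 0.2],
              [0.7, 0.1, 0.6]])

mix_true = np.array([0.5, 0.3, 0.2])      # source contributions for one sample
sample = E @ mix_true                     # observed composition of that sample

# recover the contributions ("loadings") by least squares
mix_hat, *_ = np.linalg.lstsq(E, sample, rcond=None)
```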
NASA Astrophysics Data System (ADS)
Alamino, R. C.; Saad, D.
2008-06-01
Using methods of statistical physics, we study the average number and kernel size of general sparse random matrices over Galois fields GF(q), with a given connectivity profile, in the thermodynamic limit of large matrices. We introduce a mapping of GF(q) matrices onto spin systems using the representation of the cyclic group of order q as the qth complex roots of unity. This representation facilitates the derivation of the average kernel size of random matrices using the replica approach, under the replica-symmetric ansatz, resulting in saddle point equations for general connectivity distributions. Numerical solutions are then obtained for particular cases by population dynamics. Similar techniques also allow us to obtain an expression for the exact and average numbers of random matrices for any general connectivity profile. We present numerical results for particular distributions.
NASA Astrophysics Data System (ADS)
Haas, R.; Pinto, J. G.
2012-12-01
The occurrence of mid-latitude windstorms is associated with strong socio-economic impacts. For detailed and reliable regional impact studies, large datasets of high-resolution wind fields are required. In this study, a statistical downscaling approach in combination with dynamical downscaling is introduced to derive storm-related gust speeds on a high-resolution grid over Europe. Multiple linear regression models are trained using reanalysis data and wind gusts from regional climate model simulations for a sample of 100 top-ranking windstorm events. The method is computationally inexpensive and reproduces individual windstorm footprints adequately. Compared to observations, the results for Germany are at least as good as pure dynamical downscaling. This new tool can be easily applied to large ensembles of general circulation model simulations and thus contribute to a better understanding of the regional impact of windstorms based on decadal and climate change projections.
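The core regression step can be sketched as follows, with synthetic stand-ins for the coarse-scale predictors (the study's actual predictor set and gridding are beyond this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical coarse-scale predictors for 100 storm events at one grid cell
# (columns are stand-ins for e.g. wind speed, pressure gradient, vorticity)
X = rng.normal(size=(100, 3))
beta_true = np.array([2.0, -1.0, 0.5])
gusts = 10.0 + X @ beta_true              # noise-free toy gust target

# fit gust = b0 + b1*x1 + b2*x2 + b3*x3 by ordinary least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, gusts, rcond=None)
```

One such model per grid cell, applied to new large-scale fields, yields the high-resolution gust footprint.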
A Hybrid Statistical-Physical Regression Approach for Flood Depth Estimation in Southeastern Europe
NASA Astrophysics Data System (ADS)
Galasso, C.; Senarath, S. U.; Duggan, D. P.
2012-12-01
The development of detailed flood inundation maps is typically done through the use of hydraulic models. However, the development of a hydraulic model for a large hydrological basin requires the availability of a significant amount of data, time and computational resources. This becomes challenging in countries that lack detailed and high-resolution topography data, and information on stream and floodplain characteristics. Regression-based regionalization approaches can be used as an acceptable alternative for the development of depth versus river discharge relationships in catchments with sparse data, provided that a suitable data set exists in another data-rich basin for the generation of the necessary regression relationships. However, these applications are not straightforward, and pose challenges and complexities that require unique solutions. In this study we describe a methodology to build hybrid statistical-physical regression equations to estimate the flood depths in flood-prone yet data-scarce basins for pre-specified return periods. These regression relationships use physically-based flood data from other data-rich basins with similar characteristics. In particular, the methodology uses the geomorphological properties of individual watersheds and the characteristics of the site as explanatory variables to derive these statistical relationships. The total sample of watersheds in the data-rich area is randomly partitioned into complementary subsets: one subset is used to develop the model, while the other is used to validate it. A stepwise multiple linear regression approach is employed to train the model and isolate the best subset of predictive variables. The model partitioning is repeated multiple times, in order to determine the expected values of the regression coefficients, resulting in a set of simple equations with high goodness-of-fit measures. An illustrative example of the developed methodology is presented for some selected
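The repeated random partitioning described above can be sketched as follows (ordinary rather than stepwise least squares, on invented watershed data):

```python
import numpy as np

rng = np.random.default_rng(1)

# toy "data-rich basin": flood depth driven by two watershed properties
X = rng.normal(size=(60, 2))              # stand-ins for e.g. drainage area, slope
y = 1.0 + 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=60)

coefs, rmses = [], []
for _ in range(200):                      # repeat the random partitioning
    idx = rng.permutation(60)
    train, valid = idx[:40], idx[40:]
    A = np.column_stack([np.ones(40), X[train]])
    c, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    coefs.append(c)
    pred = np.column_stack([np.ones(20), X[valid]]) @ c
    rmses.append(float(np.sqrt(np.mean((pred - y[valid]) ** 2))))

expected_coef = np.mean(coefs, axis=0)    # expected values of the coefficients
```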
Statistical downscaling of rainfall: a non-stationary and multi-resolution approach
NASA Astrophysics Data System (ADS)
Rashid, Md. Mamunur; Beecham, Simon; Chowdhury, Rezaul Kabir
2016-05-01
A novel downscaling technique is proposed in this study whereby the original rainfall and reanalysis variables are first decomposed by wavelet transforms and rainfall is modelled using the semi-parametric additive model formulation of Generalized Additive Model in Location, Scale and Shape (GAMLSS). The flexibility of the GAMLSS model makes it feasible as a framework for non-stationary modelling. Decomposition of a rainfall series into different components is useful to separate the scale-dependent properties of the rainfall as this varies both temporally and spatially. The study was conducted at the Onkaparinga river catchment in South Australia. The model was calibrated over the period 1960 to 1990 and validated over the period 1991 to 2010. The model reproduced the monthly variability and statistics of the observed rainfall well with Nash-Sutcliffe efficiency (NSE) values of 0.66 and 0.65 for the calibration and validation periods, respectively. It also reproduced the seasonal rainfall well over the calibration (NSE = 0.37) and validation (NSE = 0.69) periods for all seasons. The proposed model was better than the traditional modelling approach (application of GAMLSS to the original rainfall series without decomposition) at reproducing the time-frequency properties of the observed rainfall, and yet it still preserved the statistics produced by the traditional modelling approach. When downscaling models were developed with general circulation model (GCM) historical output datasets, the proposed wavelet-based downscaling model outperformed the traditional downscaling model in terms of reproducing monthly rainfall for both the calibration and validation periods.
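A minimal multi-resolution decomposition in the spirit of the wavelet step (an unnormalised Haar transform on toy data; the study's actual wavelet family and the GAMLSS fit are not reproduced here):

```python
def haar_step(x):
    """One level of an (unnormalised) Haar transform: pairwise means
    (approximation) and pairwise half-differences (detail)."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    return approx, detail

# toy rainfall series; each level separates a coarser scale of variability
series = [4, 6, 10, 12, 8, 6, 5, 5]
a1, d1 = haar_step(series)   # scale-1 smooth and detail
a2, d2 = haar_step(a1)       # scale-2 smooth and detail
```

Each component series could then be modelled separately and the predictions recombined (e.g. `series[0] == a1[0] + d1[0]`).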
A Statistical Approach to Scanning the Biomedical Literature for Pharmacogenetics Knowledge
Rubin, Daniel L.; Thorn, Caroline F.; Klein, Teri E.; Altman, Russ B.
2005-01-01
Objective: Biomedical databases summarize current scientific knowledge, but they generally require years of laborious curation effort to build, focusing on identifying pertinent literature and data in the voluminous biomedical literature. It is difficult to manually extract useful information embedded in the large volumes of literature, and automated intelligent text analysis tools are becoming increasingly essential to assist in these curation activities. The goal of the authors was to develop an automated method to identify articles in Medline citations that contain pharmacogenetics data pertaining to gene–drug relationships. Design: The authors built and evaluated several candidate statistical models that characterize pharmacogenetics articles in terms of word usage and the profile of Medical Subject Headings (MeSH) used in those articles. The best-performing model was used to scan the entire Medline article database (11 million articles) to identify candidate pharmacogenetics articles. Results: A sampling of the articles identified from scanning Medline was reviewed by a pharmacologist to assess the precision of the method. The authors' approach identified 4,892 pharmacogenetics articles in the literature with 92% precision. Their automated method took a fraction of the time to acquire these articles compared with the time required to accumulate them manually. The authors have built a Web resource (http://pharmdemo.stanford.edu/pharmdb/main.spy) to provide access to their results. Conclusion: A statistical classification approach can screen the primary literature for pharmacogenetics articles with high precision. Such methods may assist curators in acquiring pertinent literature in building biomedical databases. PMID:15561790
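The word-usage classification idea can be illustrated with a multinomial naive Bayes sketch (the authors' actual models and MeSH features are not reproduced; the tiny corpus below is invented):

```python
import math
from collections import Counter

# tiny labelled corpus; words and labels are purely illustrative
train = [
    ("pharmacogenetics", "gene drug variant response polymorphism"),
    ("pharmacogenetics", "drug metabolism gene allele response"),
    ("other", "protein structure crystallography fold"),
    ("other", "cell membrane structure lipid"),
]

counts = {}                 # per-class word counts
totals = Counter()          # per-class total word count
docs = Counter()            # per-class document count
vocab = set()
for label, text in train:
    docs[label] += 1
    c = counts.setdefault(label, Counter())
    for w in text.split():
        c[w] += 1
        totals[label] += 1
        vocab.add(w)

def classify(text):
    """Multinomial naive Bayes with Laplace smoothing."""
    best, best_lp = None, -math.inf
    for label in counts:
        lp = math.log(docs[label] / len(train))
        for w in text.split():
            lp += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```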
NASA Astrophysics Data System (ADS)
Hertig, E.; Jacobeit, J.
2013-01-01
In the present study, nonstationarities in predictor-predictand relationships within the framework of statistical downscaling are investigated. In this context, a novel validation approach is introduced in which nonstationarities are explicitly taken into account. The method is based on results from running calibration periods. The (non)overlaps of the bootstrap confidence interval of the mean model performance (derived by averaging the performances of all calibration/verification periods) and the bootstrap confidence intervals of the individual model errors are used to identify (non)stationary model performance. The specified procedure is demonstrated for mean daily precipitation in the Mediterranean area using the bias to assess model skill. A combined circulation-based and transfer function-based approach is employed as a downscaling technique. In this context, large-scale seasonal atmospheric regimes, synoptic-scale daily circulation patterns, and their within-type characteristics, are related to daily station-based precipitation. Results show that nonstationarities are due to varying predictors-precipitation relationships of specific circulation configurations. In this regard, frequency changes of circulation patterns can damp or increase the effects of nonstationary relationships. Within the scope of assessing future precipitation changes under increased greenhouse warming conditions, the identification and analysis of nonstationarities in the predictors-precipitation relationships leads to a substantiated selection of specific statistical downscaling models for the future assessments. Using RCP4.5 scenario assumptions, strong increases of daily precipitation become apparent over large parts of the western and northern Mediterranean regions in winter. In spring, summer, and autumn, decreases of precipitation until the end of the 21st century clearly dominate over the entire Mediterranean area.
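The running-window bootstrap idea can be sketched as follows. This simplification flags windows whose bias falls outside the bootstrap CI of the mean performance, rather than applying the paper's full interval-overlap criterion; the bias values are invented:

```python
import random
import statistics

random.seed(1)

# hypothetical precipitation bias (model minus observed, mm/day) for 20
# running calibration windows; the last window behaves nonstationarily
biases = [0.10, -0.15, 0.05, -0.05, 0.12, -0.08, 0.02, 0.09, -0.11, 0.04,
          -0.03, 0.07, -0.09, 0.01, 0.06, -0.12, 0.08, -0.02, 0.03, 1.50]

def bootstrap_ci(xs, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean."""
    means = sorted(
        statistics.fmean(random.choices(xs, k=len(xs))) for _ in range(n_boot)
    )
    return means[int(n_boot * alpha / 2)], means[int(n_boot * (1 - alpha / 2)) - 1]

lo, hi = bootstrap_ci(biases)
# windows whose bias falls outside the CI of the mean are candidates for
# nonstationary model performance
flagged = [i for i, b in enumerate(biases) if not lo <= b <= hi]
```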
Strategists and Non-Strategists in Austrian Enterprises—Statistical Approaches
NASA Astrophysics Data System (ADS)
Duller, Christine
2011-09-01
The purpose of this work is to determine with a modern statistical approach which variables can indicate whether an arbitrary enterprise uses strategic management as its basic business concept. "Strategic management is an ongoing process that evaluates and controls the business and the industries in which the company is involved; assesses its competitors and sets goals and strategies to meet all existing and potential competitors; and then reassesses each strategy annually or quarterly (i.e. regularly) to determine how it has been implemented and whether it has succeeded or needs replacement by a new strategy to meet changed circumstances, new technology, new competitors, a new economic environment or a new social, financial or political environment." [12] In Austria 70% to 80% of all enterprises can be classified as family firms. In the literature, the empirically untested hypothesis can be found that family firms tend to have less formalised management accounting systems than non-family enterprises. But it is unknown whether the use of strategic management accounting systems is influenced more by structure (family or non-family enterprise) or by size (number of employees). Therefore, the goal is to split enterprises into two subgroups, namely strategists and non-strategists, and to get information on the variables of influence (size, structure, branches, etc.). Two statistical approaches are used: on the one hand, a classical cluster analysis is implemented to design the two subgroups; on the other hand, a latent class model is built for this problem. After a description of the theoretical background, first results of both approaches are compared.
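The clustering half of the comparison can be sketched with a plain 2-means split on invented firm features (the latent class model is not reproduced here):

```python
# hypothetical firm features: (log headcount, formalisation score 0-10)
firms = [(2.0, 1.0), (2.2, 1.5), (1.8, 0.5),   # small, informal
         (5.0, 8.0), (5.5, 9.0), (4.8, 8.5)]   # large, strategic

def kmeans2(pts, iters=20):
    """Plain 2-means; assumes neither cluster empties (true for this toy data)."""
    centres = [pts[0], pts[-1]]
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for p in pts:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centres]
            groups[d.index(min(d))].append(p)
        centres = [tuple(sum(x) / len(g) for x in zip(*g)) for g in groups]
    return groups

g_small, g_large = kmeans2(firms)   # candidate non-strategist / strategist split
```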
Recent advances in rational approaches for enzyme engineering
Steiner, Kerstin; Schwab, Helmut
2012-01-01
Enzymes are an attractive alternative in the asymmetric syntheses of chiral building blocks. To meet the requirements of industrial biotechnology and to introduce new functionalities, the enzymes need to be optimized by protein engineering. This article specifically reviews rational approaches for enzyme engineering and de novo enzyme design involving structure-based approaches developed in recent years for improvement of the enzymes’ performance, broadened substrate range, and creation of novel functionalities to obtain products with high added value for industrial applications. PMID:24688651
Augustine, Swinburne A J; Simmons, Kaneatra J; Eason, Tarsha N; Griffin, Shannon M; Curioso, Clarissa L; Wymer, Larry J; Fout, G Shay; Grimm, Ann C; Oshima, Kevin H; Dufour, Al
2015-10-01
There are numerous pathogens that can be transmitted through water. Identifying and understanding the routes and magnitude of exposure or infection to these microbial contaminants are critical to assessing and mitigating risk. Conventional approaches of studying immunological responses to exposure or infection such as Enzyme-Linked Immunosorbent Assays (ELISAs) and other monoplex antibody-based immunoassays can be very costly, laborious, and consume large quantities of patient sample. A major limitation of these approaches is that they can only be used to measure one analyte at a time. Multiplex immunoassays provide the ability to study multiple pathogens simultaneously in microliter volumes of samples. However, there are several challenges that must be addressed when developing these multiplex immunoassays such as selection of specific antigens and antibodies, cross-reactivity, calibration, protein-reagent interferences, and the need for rigorous optimization of protein concentrations. In this study, a Design of Experiments (DOE) approach was used to optimize reagent concentrations for coupling selected antigens to Luminex™ xMAP microspheres for use in an indirect capture, multiplex immunoassay to detect human exposure or infection from pathogens that are potentially transmitted through water. Results from Helicobacter pylori, Campylobacter jejuni, Escherichia coli O157:H7, and Salmonella typhimurium singleplexes were used to determine the mean concentrations that would be applied to the multiplex assay. Cut-offs to differentiate between exposed and non-exposed individuals were determined using finite mixture modeling (FMM). The statistical approaches developed facilitated the detection of Immunoglobulin G (IgG) antibodies to H. pylori, C. jejuni, Toxoplasma gondii, hepatitis A virus, rotavirus and noroviruses (VA387 and Norwalk strains) in fifty-four diagnostically characterized plasma samples. Of the characterized samples, the detection rate was 87.5% for H
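The finite-mixture cutoff idea can be sketched with a two-component Gaussian EM fit on simulated responses (the assay data are not reproduced; the "+3 SD above the non-exposed component" rule below is just one plausible cutoff convention):

```python
import math
import random

random.seed(2)

# simulated log-responses: non-exposed around 2.0, exposed around 5.0
data = [random.gauss(2.0, 0.5) for _ in range(200)] + \
       [random.gauss(5.0, 0.7) for _ in range(100)]

def em_2gauss(xs, iters=100):
    """EM for a two-component univariate Gaussian mixture."""
    mu = [min(xs), max(xs)]
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for x in xs:                      # E-step: posterior responsibilities
            p = [w[j] * math.exp(-0.5 * ((x - mu[j]) / sd[j]) ** 2) / sd[j]
                 for j in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        for j in (0, 1):                  # M-step: weighted parameter updates
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(xs)
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            sd[j] = math.sqrt(sum(r[j] * (x - mu[j]) ** 2
                                  for r, x in zip(resp, xs)) / nj)
    return mu, sd, w

mu, sd, w = em_2gauss(data)
cutoff = mu[0] + 3 * sd[0]   # e.g. non-exposed mean plus 3 SDs
```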
A three-dimensional statistical approach to improved image quality for multislice helical CT.
Thibault, Jean-Baptiste; Sauer, Ken D; Bouman, Charles A; Hsieh, Jiang
2007-11-01
Multislice helical computed tomography scanning offers the advantages of faster acquisition and wide organ coverage for routine clinical diagnostic purposes. However, image reconstruction is faced with the challenges of three-dimensional cone-beam geometry, data completeness issues, and low dosage. Of all available reconstruction methods, statistical iterative reconstruction (IR) techniques appear particularly promising since they provide the flexibility of accurate physical noise modeling and geometric system description. In this paper, we present the application of Bayesian iterative algorithms to real 3D multislice helical data to demonstrate significant image quality improvement over conventional techniques. We also introduce a novel prior distribution designed to provide flexibility in its parameters to fine-tune image quality. Specifically, enhanced image resolution and lower noise have been achieved, concurrently with the reduction of helical cone-beam artifacts, as demonstrated by phantom studies. Clinical results also illustrate the capabilities of the algorithm on real patient data. Although computational load remains a significant challenge for practical development, superior image quality combined with advancements in computing technology make IR techniques a legitimate candidate for future clinical applications. PMID:18072519
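The statistical-reconstruction idea can be sketched on a toy linear system: iterative minimization of a penalized least-squares (MAP) objective, checked against the closed form. A quadratic prior stands in for the paper's edge-preserving prior, and plain gradient descent for its actual update scheme; all sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy linear acquisition model: measurements y = A @ x + noise
n_pix, n_meas = 8, 24
A = rng.normal(size=(n_meas, n_pix))      # stand-in for the CT system matrix
x_true = np.zeros(n_pix)
x_true[3:6] = 1.0                         # a small "object"
y = A @ x_true + rng.normal(scale=0.01, size=n_meas)

# MAP objective: 0.5*||y - A x||^2 + 0.5*lam*||x||^2, minimized iteratively
lam = 0.5
x = np.zeros(n_pix)
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)   # safe gradient step size
for _ in range(4000):
    grad = A.T @ (A @ x - y) + lam * x
    x -= step * grad

# closed-form solution of the same quadratic MAP problem, for comparison
x_closed = np.linalg.solve(A.T @ A + lam * np.eye(n_pix), A.T @ y)
```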
Abut, Fatih; Akay, Mehmet Fatih
2015-01-01
Maximal oxygen uptake (VO2max) indicates how many milliliters of oxygen the body can consume in a state of intense exercise per minute. VO2max plays an important role in both sport and medical sciences for different purposes, such as indicating the endurance capacity of athletes or serving as a metric in estimating the disease risk of a person. In general, the direct measurement of VO2max provides the most accurate assessment of aerobic power. However, despite a high level of accuracy, practical limitations associated with the direct measurement of VO2max, such as the requirement of expensive and sophisticated laboratory equipment or trained staff, have led to the development of various regression models for predicting VO2max. Consequently, many studies have been conducted in recent years to predict VO2max of various target audiences, ranging from soccer athletes, nonexpert swimmers, cross-country skiers to healthy-fit adults, teenagers, and children. Numerous prediction models have been developed using different sets of predictor variables and a variety of machine learning and statistical methods, including support vector machine, multilayer perceptron, general regression neural network, and multiple linear regression. The purpose of this study is to give a detailed overview about the data-driven modeling studies for the prediction of VO2max conducted in recent years and to compare the performance of various VO2max prediction models reported in related literature in terms of two well-known metrics, namely, multiple correlation coefficient (R) and standard error of estimate. The survey results reveal that with respect to regression methods used to develop prediction models, support vector machine, in general, shows better performance than other methods, whereas multiple linear regression exhibits the worst performance. PMID:26346869
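The two survey metrics can be computed as follows (a stdlib-only sketch; the function name and toy numbers are illustrative):

```python
import math

def r_and_see(y, yhat, p):
    """Multiple correlation coefficient R and standard error of estimate (SEE)
    for a regression with p predictor variables."""
    n = len(y)
    ybar = sum(y) / n
    sse = sum((a - b) ** 2 for a, b in zip(y, yhat))   # residual sum of squares
    sst = sum((a - ybar) ** 2 for a in y)              # total sum of squares
    r = math.sqrt(max(0.0, 1.0 - sse / sst))
    see = math.sqrt(sse / (n - p - 1))                 # df-adjusted residual SD
    return r, see

r, see = r_and_see([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8], p=1)
```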
Advanced Stirling Convertor Dynamic Test Approach and Results
NASA Technical Reports Server (NTRS)
Meer, David W.; Hill, Dennis; Ursic, Joseph
2009-01-01
The U.S. Department of Energy (DOE), Lockheed Martin (LM), and NASA Glenn Research Center (GRC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. As part of the extended operation testing of this power system, the Advanced Stirling Convertors (ASC) at NASA John H. Glenn Research Center undergo a vibration test sequence intended to simulate the vibration history of an ASC used in an ASRG for a space mission. This sequence includes testing at Workmanship and Flight Acceptance levels interspersed with periods of extended operation to simulate pre- and post-fueling. The final step in the test sequence utilizes additional testing at Flight Acceptance levels to simulate launch. To better replicate the acceleration profile seen by an ASC incorporated into an ASRG, the input spectra used in testing the convertors were modified based on dynamic testing of the ASRG Engineering Unit (ASRG-EU) at Lockheed Martin. This paper presents the vibration test plan for current and future ASC units, including the modified input spectra, and the results of recent tests using these spectra. The test results include data from several accelerometers mounted on the convertors as well as the piston position and output power variables.
NASA Astrophysics Data System (ADS)
Herschtal, A.; Foroudi, F.; Greer, P. B.; Eade, T. N.; Hindson, B. R.; Kron, T.
2012-05-01
Early approaches to characterizing errors in target displacement during a fractionated course of radiotherapy assumed that the underlying fraction-to-fraction variability in target displacement, known as the ‘treatment error’ or ‘random error’, could be regarded as constant across patients. More recent approaches have modelled target displacement allowing for differences in random error between patients. However, until recently it has not been feasible to compare the goodness of fit of alternate models of random error rigorously. This is because the large volumes of real patient data necessary to distinguish between alternative models have only very recently become available. This work uses real-world displacement data collected from 365 patients undergoing radical radiotherapy for prostate cancer to compare five candidate models for target displacement. The simplest model assumes constant random errors across patients, while other models allow for random errors that vary according to one of several candidate distributions. Bayesian statistics and Markov Chain Monte Carlo simulation of the model parameters are used to compare model goodness of fit. We conclude that modelling the random error as inverse gamma distributed provides a clearly superior fit over all alternatives considered. This finding can facilitate more accurate margin recipes and correction strategies.
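The inverse-gamma model of between-patient random error can be illustrated with a method-of-moments fit to simulated per-patient variances (the paper's Bayesian MCMC model comparison is not reproduced; parameters below are invented):

```python
import random
import statistics

random.seed(0)

# simulate per-patient random-error variances from an inverse-gamma
# distribution (if G ~ Gamma(a, 1), then b/G ~ InvGamma(a, b))
a_true, b_true = 6.0, 5.0
variances = [b_true / random.gammavariate(a_true, 1.0) for _ in range(20000)]

# method-of-moments fit, using mean = b/(a-1) and var = b^2/((a-1)^2 (a-2)):
# mean^2/var = a - 2, hence the estimators below
m = statistics.fmean(variances)
v = statistics.variance(variances)
a_hat = m * m / v + 2.0
b_hat = m * (a_hat - 1.0)
```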
Feron, Gilles; Ayed, Charfedinne; Qannari, El Mostafa; Courcoux, Philippe; Laboure, Hélène; Guichard, Elisabeth
2014-01-01
For human beings, the mouth is the first organ to perceive food and the different signalling events associated with food breakdown. These events are very complex and, as such, their description necessitates combining different data sets. This study proposed an integrated approach to understand the relative contribution of the main food oral processing events involved in aroma release during cheese consumption. In vivo aroma release was monitored on forty-eight subjects who were asked to eat four different model cheeses varying in fat content and firmness and flavoured with ethyl propanoate and nonan-2-one. A multiblock partial least squares regression was performed to explain aroma release from the different physiological data sets (masticatory behaviour, bolus rheology, saliva composition and flux, mouth coating and bolus moistening). This statistical approach was relevant to point out that aroma release was mostly explained by masticatory behaviour whatever the cheese and the aroma, with a specific influence of mean amplitude on aroma release after swallowing. Aroma release from the firmer cheeses was explained mainly by bolus rheology. The persistence of hydrophobic compounds in the breath was mainly explained by bolus spreadability, in close relation with bolus moistening. Resting saliva poorly contributed to the analysis, whereas the composition of stimulated saliva was negatively correlated with aroma release, mostly for soft cheeses, when significant. PMID:24691625
A U-Statistic-based random Forest approach for genetic association study.
Li, Ming; Peng, Ruo-Sin; Wei, Changshuai; Lu, Qing
2012-01-01
Variations in complex traits are influenced by multiple genetic variants, environmental risk factors, and their interactions. Though substantial progress has been made in identifying single genetic variants associated with complex traits, detecting the gene-gene and gene-environment interactions remains a great challenge. When a large number of genetic variants and environmental risk factors are involved, searching for interactions is limited to pair-wise interactions due to the exponentially increased feature space and computational intensity. Alternatively, recursive partitioning approaches, such as random forests, have gained popularity in high-dimensional genetic association studies. In this article, we propose a U-Statistic-based random forest approach, referred to as Forest U-Test, for genetic association studies with quantitative traits. Through simulation studies, we showed that the Forest U-Test outperformed existing methods. The proposed method was also applied to study Cannabis Dependence (CD), using three independent datasets from the Study of Addiction: Genetics and Environment. A significant joint association was detected with an empirical p-value less than 0.001. The finding was also replicated in two independent datasets with p-values of 5.93e-19 and 4.70e-17, respectively. PMID:22652671
NASA Astrophysics Data System (ADS)
Zakaria, Chahnez; Curé, Olivier; Salzano, Gabriella; Smaïli, Kamel
In Computer Supported Cooperative Work (CSCW), it is crucial for project leaders to detect conflicting situations as early as possible. Generally, this task is performed manually by studying a set of documents exchanged between team members. In this paper, we propose a full-fledged automatic solution that identifies documents, subjects and actors involved in relational conflicts. Our approach detects conflicts in emails, probably the most popular type of document in CSCW, but the methods used can handle other text-based documents. These methods rely on the combination of statistical and ontological operations. The proposed solution is decomposed into several steps: (i) we enrich a simple negative emotion ontology with terms occurring in the corpus of emails, (ii) we categorize each conflicting email according to the concepts of this ontology and (iii) we identify emails, subjects and team members involved in conflicting emails using possibilistic description logic and a set of proposed measures. Each of these steps is evaluated and validated on concrete examples. Moreover, this approach's framework is generic and can be easily adapted to domains other than conflicts, e.g. security issues, and extended with operations making use of our proposed set of measures.
A protocol for classifying ecologically relevant marine zones, a statistical approach
NASA Astrophysics Data System (ADS)
Verfaillie, Els; Degraer, Steven; Schelfaut, Kristien; Willems, Wouter; Van Lancker, Vera
2009-06-01
Mapping ecologically relevant zones in the marine environment has become increasingly important. Biological data are however often scarce, and alternatives are being sought in optimal classifications of abiotic variables. The concept of 'marine landscapes' is based on a hierarchical classification of geological, hydrographic and other physical data. This approach is however subject to many assumptions and subjective decisions. An objective protocol for zonation is proposed here, in which abiotic variables are subjected to a statistical approach using principal components analysis (PCA) and a cluster analysis. The optimal number of clusters (or zones) is defined using the Calinski-Harabasz criterion. The methodology has been applied to datasets of the Belgian part of the North Sea (BPNS), a shallow sandy shelf environment with a sandbank-swale topography. The BPNS was classified into 8 zones that represent the natural variability of the seafloor well. The internal cluster consistency was validated with a split-run procedure, with more than 99% correspondence between the validation and the original dataset. The ecological relevance of 6 out of the 8 zones was demonstrated, using indicator species analysis. The proposed protocol, as exemplified for the BPNS, can easily be applied to other areas and provides a strong knowledge basis for environmental protection and management of the marine environment. A SWOT analysis, showing the strengths, weaknesses, opportunities and threats of the protocol, was performed.
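The PCA-then-cluster pipeline with the Calinski-Harabasz criterion described above can be sketched directly with scikit-learn. The abiotic variables below are synthetic stand-ins (the real protocol used geological and hydrographic layers of the BPNS), and three planted zones are recovered by scanning k and maximizing the criterion.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(2)

# Synthetic abiotic grid: three well-separated seafloor "zones" in five
# variables (e.g. depth, grain size, current speed); values illustrative.
centers = rng.normal(scale=4.0, size=(3, 5))
X = np.vstack([c + rng.normal(size=(100, 5)) for c in centers])

# Step 1: standardize, then reduce with PCA.
Xp = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))

# Step 2: cluster for a range of k; keep the k that maximizes
# the Calinski-Harabasz criterion.
scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xp)
    scores[k] = calinski_harabasz_score(Xp, labels)

best_k = max(scores, key=scores.get)
print(best_k)   # the three planted zones are recovered
```

A split-run validation like the abstract's would refit on random halves of the grid and compare label agreement between halves.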
NASA Technical Reports Server (NTRS)
Yeh, Leehwa
1993-01-01
The phase-space-picture approach to quantum non-equilibrium statistical mechanics via the characteristic function of infinite-mode squeezed coherent states is introduced. We use quantum Brownian motion as an example to show how this approach provides an interesting geometrical interpretation of quantum non-equilibrium phenomena.
Arciuli, Joanne; Torkildsen, Janne von Koss
2012-01-01
Mastery of language can be a struggle for some children. Amongst those that succeed in achieving this feat there is variability in proficiency. Cognitive scientists remain intrigued by this variation. A now substantial body of research suggests that language acquisition is underpinned by a child’s capacity for statistical learning (SL). Moreover, a growing body of research has demonstrated that variability in SL is associated with variability in language proficiency. Yet, there is a striking lack of longitudinal data. To date, there has been no comprehensive investigation of whether a capacity for SL in young children is, in fact, associated with language proficiency in subsequent years. Here we review key studies that have led to the need for this longitudinal research. Advancing the language acquisition debate via longitudinal research has the potential to transform our understanding of typical development as well as disorders such as autism, specific language impairment, and dyslexia. PMID:22969746
Advanced numerical methods and software approaches for semiconductor device simulation
CAREY,GRAHAM F.; PARDHANANI,A.L.; BOVA,STEVEN W.
2000-03-23
In this article the authors concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG), entropy variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. Numerical examples from recent research tests with some of the methods are included. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and they emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, they briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as that in the Sandia National Laboratories framework SIERRA.
Papaneophytou, Christos; Kontopidis, George
2016-04-01
During a discovery project of potential inhibitors for three proteins, TNF-α, RANKL and HO-1, implicated in the pathogenesis of rheumatoid arthritis, significant amounts of purified proteins were required. The application of statistically designed experiments for screening and optimization of induction conditions allows rapid identification of the important factors and interactions between them. We have previously used response surface methodology (RSM) for the optimization of soluble expression of TNF-α and RANKL. In this work, we initially applied RSM for the optimization of recombinant HO-1, and a 91% increase in protein production was achieved. Subsequently, we slightly modified a published incomplete factorial approach (called IF1) in order to evaluate the effect of three expression variables (bacterial strains, induction temperatures and culture media) on soluble expression levels of the three tested proteins. However, soluble expression yields of TNF-α and RANKL obtained by the IF1 method were significantly lower (<50%) than those obtained by RSM. We further modified the IF1 approach by replacing the culture media with induction times, and the resulting method, called IF-STT (Incomplete Factorial-Strain/Temperature/Time), was validated using the three proteins. Interestingly, soluble expression levels of the three proteins obtained by IF-STT were only 1.2-fold lower than those obtained by RSM. Although RSM is probably the best approach for optimization of biological processes, IF-STT is faster, examines the most important factors (bacterial strain, temperature and time) influencing protein soluble expression in a single experiment, and can be used in any recombinant protein expression project as a starting point. PMID:26721705
Transfer matrix approach to the statistical mechanics of single polymer molecules
NASA Astrophysics Data System (ADS)
Livadaru, Lucian
In this work, we demonstrate, implement and critically assess the capabilities and limitations of the Transfer Matrix (TM) method as applied to the statistical mechanics of single polymer molecules within their classical models. We first show how the TM method can be employed, with the help of computers, to provide highly accurate results for the configurational statistics of polymers in theta conditions. We proceed gradually from simple to complex polymer models, analyzing their statistical properties as we vary the model parameters. In order of complexity, the polymer models approached in this work are: (i) the freely jointed chain (FJC); (ii) the freely rotating chain (FRC); (iii) the rotational isomeric state (RIS) model with and without energy parameters; (iv) the continuous rotational potential model (for n-alkanes); (v) an interacting chain model (ICM) with virtual bonds for poly(ethylene glycol) (PEG). The statistical mechanics of polymer chains is carried out in both the Helmholtz and Gibbs ensembles, depending on the quantities of interest. In the Helmholtz ensemble the polymer's Green function is generally a function of both the spatial coordinates and orientations of chain bonds. In the Gibbs ensemble its arguments are the bond orientations with respect to an applied external force. This renders the latter ensemble more suitable for an accurate study of the mechanical properties of the mentioned models. We adapt the TM method to study statistical and thermodynamical properties of the various models, including: chain end distribution functions, characteristic ratios, mean square radius of gyration, Kuhn length, static structure factor, pair correlation function, force-extension curves, and Helmholtz and Gibbs free energies. For all cases, the TM calculations yielded accurate results for all these quantities. Wherever possible, we compared our findings to other results, theoretical or experimental, in the literature. A great deal of effort was focused on precise
A Trait-Based Approach to Advance Coral Reef Science.
Madin, Joshua S; Hoogenboom, Mia O; Connolly, Sean R; Darling, Emily S; Falster, Daniel S; Huang, Danwei; Keith, Sally A; Mizerek, Toni; Pandolfi, John M; Putnam, Hollie M; Baird, Andrew H
2016-06-01
Coral reefs are biologically diverse and ecologically complex ecosystems constructed by stony corals. Despite decades of research, basic coral population biology and community ecology questions remain. Quantifying trait variation among species can help resolve these questions, but progress has been hampered by a paucity of trait data for the many, often rare, species and by a reliance on nonquantitative approaches. Therefore, we propose filling data gaps by prioritizing traits that are easy to measure, estimating key traits for species with missing data, and identifying 'supertraits' that capture a large amount of variation for a range of biological and ecological processes. Such an approach can accelerate our understanding of coral ecology and our ability to protect critically threatened global ecosystems. PMID:26969335
NASA Astrophysics Data System (ADS)
Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.
2015-11-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.
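The abstract compares score-weighted simple averaging against Gaussian-process emulation. The weighted-averaging half can be sketched in a few lines; the ensemble size matches the abstract, but the sea-level values and the score function below are invented toy numbers, not the model's output.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy ensemble: 625 runs, each with a projected sea-level-rise
# contribution (metres) and a misfit-based aggregate score
# (higher = better model-data fit). All numbers are illustrative.
n_runs = 625
slr = rng.normal(loc=3.0, scale=1.0, size=n_runs)
score = np.exp(-0.5 * ((slr - 3.2) / 0.8) ** 2)   # pseudo fit score

# Simple averaging weighted by the aggregate score.
w = score / score.sum()
slr_mean = np.sum(w * slr)

# 5-95% envelope from the weighted empirical CDF.
order = np.argsort(slr)
cdf = np.cumsum(w[order])
lo = slr[order][np.searchsorted(cdf, 0.05)]
hi = slr[order][np.searchsorted(cdf, 0.95)]
print(round(slr_mean, 2), round(lo, 2), round(hi, 2))
```

The abstract's finding is that this cheap estimator tracks the Bayesian emulator-calibration result, but only when the ensemble samples the parameter space full-factorially.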
NASA Astrophysics Data System (ADS)
Driouech, F.; Déqué, M.; Sánchez-Gómez, E.
2009-09-01
range covered by these RCMs for all the climate indices considered. In order to validate, in the case of Moroccan winter precipitation, a statistical downscaling approach that uses large scale fields to construct local scenarios of future climate change, the link between north Atlantic weather regimes and Moroccan local precipitation has been investigated, in terms of precipitation average, and the frequencies of occurrence of wet and intense precipitation days. The robustness of the statistical approach considered is evaluated using the outputs of ARPEGE-Climate and also those of the 10 ENSEMBLES-RCMs.
Using a Statistical Approach to Anticipate Leaf Wetness Duration Under Climate Change in France
NASA Astrophysics Data System (ADS)
Huard, F.; Imig, A. F.; Perrin, P.
2014-12-01
Leaf wetness plays a major role in the development of fungal plant diseases. Leaf wetness duration (LWD) above a threshold value is determinant for infection and can be seen as a good indicator of the impact of climate on infection occurrence and risk. As LWD is not widely measured, several methods, based on physical and empirical approaches, have been developed to estimate it from weather data. Many LWD statistical models do exist, but the lack of a measurement standard requires reassessment. A new empirical LWD model, called MEDHI (Modèle d'Estimation de la Durée d'Humectation à l'Inra), was developed for the French configuration of wetness sensors (angle: 90°, height: 50 cm). This deployment differs from what is usually recommended by manufacturers or authors in other countries (angle from 10 to 60°, height from 10 to 150 cm…). MEDHI is a decision support system based on hourly climatic conditions at time steps n and n-1, taking into account relative humidity, rainfall and previously simulated LWD. Air temperature, relative humidity, wind speed, rain and LWD data from several sensors with 2 configurations were measured during 6 months in Toulouse and Avignon (south-west and south-east of France) to calibrate MEDHI. A comparison of the empirical models NHRH (RH threshold), DPD (dew point depression), CART (classification and regression tree analysis dependent on RH, wind speed and dew point depression) and MEDHI, using meteorological and LWD measurements obtained during 5 months in Toulouse, showed that this new model MEDHI was definitely better adapted to French conditions. In the context of climate change, MEDHI was used for mapping the evolution of leaf wetness duration in France from 1950 to 2100 with the French regional climate model ALADIN under different Representative Concentration Pathways (RCPs) and using a QM (Quantile-Mapping) statistical downscaling method. Results give information on the spatial distribution of infection risks
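The simplest of the empirical models compared above, the RH-threshold type (NHRH), can be written as a short rule over hourly records. The threshold and record layout below are illustrative assumptions; this is not the MEDHI decision logic, whose exact rules are not given in the record.

```python
def leaf_wetness_hours(hourly, rh_threshold=87.0):
    """Count leaf-wet hours from hourly weather records.

    A simple RH-threshold rule in the style of the 'NHRH' baseline:
    an hour is wet if rain fell or relative humidity reached the
    threshold. NOT the MEDHI rule set, which also conditions on the
    previous time step's simulated wetness.
    """
    wet = 0
    for rec in hourly:
        if rec["rain_mm"] > 0.0 or rec["rh_pct"] >= rh_threshold:
            wet += 1
    return wet

# Toy day: a humid night (6 h), a dry day (17 h), one rainy hour.
day = [{"rh_pct": 95.0, "rain_mm": 0.0} for _ in range(6)] + \
      [{"rh_pct": 60.0, "rain_mm": 0.0} for _ in range(17)] + \
      [{"rh_pct": 70.0, "rain_mm": 1.2}]
print(leaf_wetness_hours(day))   # 6 humid hours + 1 rainy hour = 7
```

MEDHI's improvement over this baseline comes from the n-1 memory term: a leaf that was wet in the previous hour dries more slowly than the instantaneous RH alone would suggest.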
Sadyś, Magdalena; Skjøth, Carsten Ambelas; Kennedy, Roy
2016-04-01
High concentration levels of Ganoderma spp. spores were observed in Worcester, UK, during 2006-2010. These basidiospores are known to cause sensitization due to their allergen content and small dimensions, which enable them to penetrate the lower part of the respiratory tract in humans. Establishing a link between symptoms of sensitization to Ganoderma spp. and other basidiospores is challenging due to the lack of information regarding spore concentration in the air. Hence, aerobiological monitoring should be conducted and, if possible, extended with the construction of forecast models. The daily mean concentration of allergenic Ganoderma spp. spores in the atmosphere of Worcester was measured using a 7-day volumetric spore sampler over five consecutive years. The relationships between the presence of spores in the air and weather parameters were examined. Forecast models were constructed for Ganoderma spp. spores using advanced statistical techniques, i.e. multivariate regression trees and artificial neural networks. Dew point temperature along with maximum temperature was the most important factor influencing the presence of spores in the air of Worcester. Based on these two major factors and several others of lesser importance, thresholds for certain levels of fungal spore concentration, i.e. low (0-49 s m(-3)), moderate (50-99 s m(-3)), high (100-149 s m(-3)) and very high (150 < n s m(-3)), could be designated. Despite some deviation in results obtained by artificial neural networks, the authors achieved a forecasting model that was accurate (correlation between observed and predicted values varied from r s = 0.57 to r s = 0.68). PMID:26266481
Funnel function approach to determine uncertainty: Some advances
NASA Astrophysics Data System (ADS)
Routh, P. S.
2006-12-01
Given a finite number of noisy data it is difficult (perhaps impossible) to obtain a unique average of the model value in any region of the model (Backus & Gilbert, 1970; Oldenburg, 1983). This difficulty motivated Backus and Gilbert to construct averaging kernels that are in some sense close to a delta function. Averaging kernels describe how the true model is averaged over the entire domain to generate the model value in the region of interest. A unique average value is difficult to obtain theoretically. However, we can compute bounds on the average value, and this allows us to obtain a measure of uncertainty. This idea was proposed by Oldenburg (1983). As the region of interest increases, the uncertainty associated with the average value decreases, giving a funnel-like shape. Mathematically this is equivalent to solving a minimization and maximization problem for the average value (Oldenburg, 1983). In this work I developed a nonlinear interior point method to solve this min-max problem and construct the bounds. The bounds determined in this manner honor all types of available information: (a) geophysical data with errors, (b) deterministic or statistical prior information and (c) complementary information from other data sets at different scales (such as hydrology or other geophysical data) if they are formulated in a joint inversion framework.
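For a linear forward problem the min-max bounding of an average value described above reduces to two linear programs, which makes the "funnel" easy to reproduce. The sketch below is a simplified linear stand-in (the abstract's method is a nonlinear interior point solver); the operator, error bar and prior bounds are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)

# Discretized model m (20 cells), linear data d = G m with error bar eps,
# and deterministic prior bounds 0 <= m <= 1. All values illustrative.
n = 20
G = rng.normal(size=(5, n))
m_true = rng.uniform(0.0, 1.0, size=n)
d = G @ m_true
eps = 0.05

def average_bounds(k):
    """Min and max of the average of m over the first k cells,
    subject to |G m - d| <= eps and the prior bounds."""
    a = np.zeros(n)
    a[:k] = 1.0 / k                      # averaging functional
    A_ub = np.vstack([G, -G])            # G m <= d + eps, -G m <= -(d - eps)
    b_ub = np.concatenate([d + eps, -(d - eps)])
    lo = linprog(a, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n).fun
    hi = -linprog(-a, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n).fun
    return lo, hi

# Widening the averaging region narrows the bounds: the funnel shape.
widths = [average_bounds(k)[1] - average_bounds(k)[0] for k in (2, 10, 20)]
print([round(w, 3) for w in widths])
```

By construction the true model satisfies every constraint, so the true regional average always lies inside the computed bounds; that containment is what makes the funnel a genuine uncertainty measure.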
Treatment of advanced Hodgkin's lymphoma: standard and experimental approaches.
Engert, A; Wolf, J; Diehl, V
1999-07-01
The introduction of polychemotherapy and improved radiation techniques has transformed Hodgkin's lymphoma from an incurable disease to a malignancy with one of the highest cure rates. Milestones were the development of the MOPP (mechlorethamine, vincristine, procarbazine, and prednisone) and ABVD (doxorubicin, bleomycin, vinblastine, and dacarbazine) regimens. Radiotherapy is commonly used, although its precise role has not been defined for patients with advanced-stage disease. More recently, dose-intensified schedules such as Stanford V (doxorubicin, vinblastine, mechlorethamine, vincristine, bleomycin, etoposide, and prednisone) were shown to be effective in this group of patients. In particular, the BEACOPP regimen (bleomycin, etoposide, doxorubicin, cyclophosphamide, vincristine, procarbazine, and prednisone), in both standard and escalated doses, has produced impressive results in a randomized three-arm study when compared with COPP (cyclophosphamide, vincristine, procarbazine, and prednisone)/ABVD. The significantly higher rates of complete remission (CR) and freedom from treatment failure (FFTF) suggest that the new BEACOPP regimen improves efficacy, but definitive conclusions require further years of follow-up evaluation. Interestingly, BEACOPP abrogates the impact of the newly described seven-factor prognostic scoring system that was reported for patients treated with MOPP/ABVD or similar regimens. The prognostic index includes factors such as serum albumin, hemoglobin, male sex, stage IV disease, age more than 45 years, white blood cell count, and lymphocyte count. Whereas patients with Hodgkin's lymphoma have a good prognosis on first diagnosis, those with relapsed or refractory disease face a poor outcome. PMID:10462328
Advanced Modular Power Approach to Affordable, Supportable Space Systems
NASA Technical Reports Server (NTRS)
Oeftering, Richard C.; Kimnach, Greg L.; Fincannon, James; Mckissock, Barbara I.; Loyselle, Patricia L.; Wong, Edmond
2013-01-01
Recent studies of missions to the Moon, Mars and Near Earth Asteroids (NEA) indicate that these missions often involve several distinct, separately launched vehicles that must ultimately be integrated together in-flight and operate as one unit. Therefore, it is important to see these vehicles as elements of a larger segmented spacecraft rather than separate spacecraft flying in formation. The evolution of large multi-vehicle exploration architecture creates the need (and opportunity) to establish a global power architecture that is common across all vehicles. The Advanced Exploration Systems (AES) Modular Power System (AMPS) project managed by NASA Glenn Research Center (GRC) is aimed at establishing the modular power system architecture that will enable power systems to be built from a common set of modular building blocks. The project is developing, demonstrating and evaluating key modular power technologies that are expected to minimize non-recurring development costs and reduce recurring integration costs, as well as mission operational and support costs. Further, modular power is expected to enhance mission flexibility, vehicle reliability, scalability and overall mission supportability. The AMPS project not only supports multi-vehicle architectures but should enable multi-mission capability as well. The AMPS technology development involves near-term demonstrations involving developmental prototype vehicles and field demonstrations. These operational demonstrations not only serve as a means of evaluating modular technology but also provide feedback to developers to assure progress toward a truly flexible and operationally supportable modular power architecture.
Advancing Partnerships Towards an Integrated Approach to Oil Spill Response
NASA Astrophysics Data System (ADS)
Green, D. S.; Stough, T.; Gallegos, S. C.; Leifer, I.; Murray, J. J.; Streett, D.
2015-12-01
Oil spills can cause enormous ecological and economic devastation, necessitating application of the best science and technology available, and remote sensing is playing a growing critical role in the detection and monitoring of oil spills, as well as facilitating validation of remote sensing oil spill products. The FOSTERRS (Federal Oil Science Team for Emergency Response Remote Sensing) interagency working group seeks to ensure that during an oil spill, remote sensing assets (satellite/aircraft/instruments) and analysis techniques are quickly, effectively, appropriately, and seamlessly available to oil spill responders. Yet significant challenges remain for addressing oils spanning a vast range of chemical properties that may be spilled from the Tropics to the Arctic, with algorithms and scientific understanding needing advances to keep up with technology. Thus, FOSTERRS promotes enabling scientific discovery to ensure robust utilization of available technology as well as identifying technologies moving up the TRL (Technology Readiness Level). A recent FOSTERRS-facilitated support activity involved deployment of the AVIRIS NG (Airborne Visual Infrared Imaging Spectrometer - Next Generation) during the Santa Barbara oil spill to validate the potential of airborne hyperspectral imaging to map beach tar coverage in real time, including surface validation data. Many developing airborne technologies have the potential to transition to space-based platforms providing global readiness.
Advances in a distributed approach for ocean model data interoperability
Signell, Richard P.; Snowden, Derrick P.
2014-01-01
An infrastructure for earth science data is emerging across the globe based on common data models and web services. As we evolve from custom file formats and web sites to standards-based web services and tools, data is becoming easier to distribute, find and retrieve, leaving more time for science. We describe recent advances that make it easier for ocean model providers to share their data, and for users to search, access, analyze and visualize ocean data using MATLAB® and Python®. These include a technique for modelers to create aggregated, Climate and Forecast (CF) metadata convention datasets from collections of non-standard Network Common Data Form (NetCDF) output files, the capability to remotely access data from CF-1.6-compliant NetCDF files using the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), a metadata standard for unstructured grid model output (UGRID), and tools that utilize both CF and UGRID standards to allow interoperable data search, browse and access. We use examples from the U.S. Integrated Ocean Observing System (IOOS®) Coastal and Ocean Modeling Testbed, a project in which modelers using both structured and unstructured grid model output needed to share their results, to compare their results with other models, and to compare models with observed data. The same techniques used here for ocean modeling output can be applied to atmospheric and climate model output, remote sensing data, digital terrain and bathymetric data.
Advances in Assays and Analytical Approaches for Botulinum Toxin Detection
Grate, Jay W.; Ozanich, Richard M.; Warner, Marvin G.; Bruckner-Lea, Cindy J.; Marks, James D.
2010-08-04
Methods to detect botulinum toxin, the most poisonous substance known, are reviewed. Current assays are being developed with two main objectives in mind: 1) to obtain sufficiently low detection limits to replace the mouse bioassay with an in vitro assay, and 2) to develop rapid assays for screening purposes that are as sensitive as possible while requiring an hour or less to process the sample and obtain the result. This review emphasizes the diverse analytical approaches and devices that have been developed over the last decade, while also briefly reviewing representative older immunoassays to provide background and context.
Retrieving leaf area index from remotely sensed data using advanced statistical approaches
Technology Transfer Automated Retrieval System (TEKTRAN)
Mapping and monitoring leaf area index (LAI) is important for spatially distributed modeling of surface energy balance, evapotranspiration and vegetation productivity. Remote sensing can facilitate the rapid collection of LAI information on individual fields over large areas, in a time and cost-effe...
Tapiovaara, M J; Wagner, R F
1993-01-01
A method of measuring the image quality of medical imaging equipment is considered within the framework of statistical decision theory. In this approach, images are regarded as random vectors and image quality is defined in the context of the image information available for performing a specified detection or discrimination task. The approach provides a means of measuring image quality, as related to the detection of an image detail of interest, without reference to the actual physical mechanisms involved in image formation and without separate measurements of signal transfer characteristics or image noise. The measurement does not, however, consider deterministic errors in the image; these require a separate evaluation for imaging modalities where they are of concern. The detectability of an image detail can be expressed in terms of the ideal observer's signal-to-noise ratio (SNR) at the decision level. Often a good approximation to this SNR can be obtained by employing sub-optimal observers, whose performance also correlates well with that of human observers. In this paper the measurement of SNR is based on implementing algorithmic realizations of specified observers and analysing their responses while actually performing a specified detection task of interest. Three observers are considered: the ideal prewhitening matched filter, the non-prewhitening matched filter, and the DC-suppressing non-prewhitening matched filter. The construction of the ideal observer requires an impractical amount of data and computing, except for the simplest imaging situations. Therefore, the utilization of sub-optimal observers is advised and their performance in detecting a specified signal is discussed. Measurement of noise and SNR has been extended to include temporally varying images and dynamic imaging systems. PMID:8426870
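The decision-level SNR measurement described above (run an algorithmic observer on signal-present and signal-absent images, then compare the response distributions) can be sketched for the non-prewhitening matched filter in white noise. The signal shape, contrast and trial counts below are invented for illustration; real measurements would use images from the device under test.

```python
import numpy as np

rng = np.random.default_rng(5)

# Known low-contrast disc signal; in white noise the non-prewhitening
# matched filter simply uses the signal itself as the template.
size = 32
yy, xx = np.mgrid[:size, :size]
signal = 0.4 * ((xx - 16) ** 2 + (yy - 16) ** 2 < 25)   # disc, radius 5

def observer_response(image, template):
    """Scalar test statistic: inner product of image and template."""
    return float(np.sum(image * template))

n_trials = 2000
t_present = [observer_response(signal + rng.normal(size=(size, size)), signal)
             for _ in range(n_trials)]
t_absent = [observer_response(rng.normal(size=(size, size)), signal)
            for _ in range(n_trials)]

# Decision-level SNR (detectability index) from the two distributions.
mu1, mu0 = np.mean(t_present), np.mean(t_absent)
s1, s0 = np.std(t_present), np.std(t_absent)
snr = (mu1 - mu0) / np.sqrt(0.5 * (s1 ** 2 + s0 ** 2))
print(round(snr, 2))
```

For this white-noise case the estimate should approach the analytic value sqrt(sum(signal**2))/sigma; with correlated noise the prewhitening and non-prewhitening observers diverge, which is the distinction the paper exploits.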
A Statistical Approach Reveals Designs for the Most Robust Stochastic Gene Oscillators.
Woods, Mae L; Leon, Miriam; Perez-Carrasco, Ruben; Barnes, Chris P
2016-06-17
The engineering of transcriptional networks presents many challenges due to the inherent uncertainty in the system structure, changing cellular context, and stochasticity in the governing dynamics. One approach to address these problems is to design and build systems that can function across a range of conditions; that is, they are robust to uncertainty in their constituent components. Here we examine the parametric robustness landscape of transcriptional oscillators, which underlie many important processes such as circadian rhythms and the cell cycle, and also serve as a model for the engineering of complex and emergent phenomena. The central questions that we address are: Can we build genetic oscillators that are more robust than those already constructed? Can we make genetic oscillators arbitrarily robust? These questions are technically challenging due to the large model and parameter spaces that must be efficiently explored. Here we use a measure of robustness that coincides with the Bayesian model evidence, combined with an efficient Monte Carlo method to traverse model space and concentrate on regions of high robustness, which enables the accurate evaluation of the relative robustness of gene network models governed by stochastic dynamics. We report the most robust two- and three-gene oscillator systems, and examine how the number of interactions, the presence of autoregulation, and degradation of mRNA and protein affect the frequency, amplitude, and robustness of transcriptional oscillators. We also find that there is a limit to parametric robustness, beyond which there is nothing to be gained by adding additional feedback. Importantly, we provide predictions on new oscillator systems that can be constructed to verify the theory and advance design and modeling approaches to systems and synthetic biology. PMID:26835539
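The robustness measure above, Bayesian model evidence under a yes/no "does it oscillate" criterion, reduces to the prior probability mass where the criterion holds, which can be estimated by plain Monte Carlo. The sketch below is a toy stand-in (the paper uses an efficient Monte Carlo scheme over stochastic dynamics); the prior and the Hill-coefficient criterion are illustrative assumptions, loosely inspired by the classical result that a Goodwin-type negative-feedback loop needs strong cooperativity to oscillate.

```python
import random

def robustness(oscillates, sample_prior, n_samples=10000, seed=1):
    # Monte Carlo estimate of the evidence with a 0/1 likelihood:
    # the fraction of prior parameter draws satisfying the criterion.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples) if oscillates(sample_prior(rng)))
    return hits / n_samples

# Illustrative toy criterion (not from the paper): oscillation requires
# a Hill coefficient above 8, with a uniform prior on [1, 12].
sample_prior = lambda rng: {"hill": rng.uniform(1.0, 12.0)}
oscillates = lambda p: p["hill"] > 8.0
```

With this prior the true robustness is 4/11 ≈ 0.36, so the estimator can be checked against an analytic value.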
Stellacci, A M; Castrignanò, A; Troccoli, A; Basso, B; Buttafuoco, G
2016-03-01
Hyperspectral data can provide prediction of physical and chemical vegetation properties, but data handling, analysis, and interpretation still limit their use. In this study, different methods for selecting variables were compared for the analysis of on-the-ground hyperspectral signatures of wheat grown under a wide range of nitrogen supplies. Spectral signatures were recorded at the end of stem elongation, booting, and heading stages in 100 georeferenced locations, using a 512-channel portable spectroradiometer operating in the 325-1075-nm range. The following procedures were compared: (i) a heuristic combined approach including lambda-lambda R(2) (LL R(2)) model, principal component analysis (PCA), and stepwise discriminant analysis (SDA); (ii) variable importance for projection (VIP) statistics derived from partial least square (PLS) regression (PLS-VIP); and (iii) multiple linear regression (MLR) analysis through maximum R-square improvement (MAXR) and stepwise algorithms. The discriminating capability of selected wavelengths was evaluated by canonical discriminant analysis. Leaf-nitrogen concentration was quantified on samples collected at the same locations and dates and used as the response variable in the regressive methods. The different methods resulted in differences in the number and position of the selected wavebands. Bands extracted through the regressive methods were mostly related to the response variable, as shown by the importance of the visible region for PLS and stepwise. Band selection techniques can be extremely useful not only to improve the power of predictive models but also for data interpretation or sensor design. PMID:26922749
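As a much-simplified stand-in for the band-selection pipelines compared above, one can rank each waveband by its squared Pearson correlation (R²) with the response variable (here, leaf nitrogen). This is not the LL R²/PLS-VIP machinery of the study, just the univariate core idea; the toy spectra are assumptions.

```python
def r_squared(x, y):
    # Squared Pearson correlation between two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

def rank_bands(spectra, response):
    # spectra: one list of band reflectances per sample.
    # Returns band indices sorted by R^2 with the response, best first.
    n_bands = len(spectra[0])
    scores = []
    for j in range(n_bands):
        band = [s[j] for s in spectra]
        scores.append((r_squared(band, response), j))
    return [j for score, j in sorted(scores, reverse=True)]
```

A multivariate method such as PLS-VIP can of course rank bands differently, since it accounts for collinearity among neighbouring wavelengths.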
Score As You Lift (SAYL): A Statistical Relational Learning Approach to Uplift Modeling
Nassif, Houssam; Kuusisto, Finn; Burnside, Elizabeth S.; Page, David; Shavlik, Jude; Costa, Vítor Santos
2015-01-01
We introduce Score As You Lift (SAYL), a novel Statistical Relational Learning (SRL) algorithm, and apply it to an important task in the diagnosis of breast cancer. SAYL combines SRL with the marketing concept of uplift modeling, uses the area under the uplift curve to direct clause construction and final theory evaluation, integrates rule learning and probability assignment, and conditions the addition of each new theory rule to existing ones. Breast cancer, the most common type of cancer among women, is categorized into two subtypes: an earlier in situ stage where cancer cells are still confined, and a subsequent invasive stage. Currently, older women with in situ cancer are treated to prevent cancer progression, even though treatment may generate undesirable side-effects and the woman may die of other causes. Younger women tend to have more aggressive cancers, while older women tend to have more indolent tumors. Therefore, older women whose in situ tumors show significant dissimilarity with in situ cancer in younger women are less likely to progress, and can thus be considered for watchful waiting. Motivated by this important problem, this work makes two main contributions. First, we present the first multi-relational uplift modeling system, and introduce, implement and evaluate a novel method to guide search in an SRL framework. Second, we compare our algorithm to previous approaches, and demonstrate that the system can indeed obtain differential rules of interest to an expert on real data, while significantly improving the data uplift. PMID:26158122
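The area under the uplift curve, which SAYL uses to steer clause construction, can be sketched in a few lines. One common form (an assumption here; uplift/Qini curves come in several variants and this need not match SAYL's exact definition) ranks subjects by model score and, at each prefix, scores the difference in response rates between treated and control subjects seen so far.

```python
def auc_uplift(scores, treated, outcome):
    # Rank subjects by model score (descending); at each prefix of size k,
    # uplift = (treated response rate - control response rate) * k, with an
    # empty group contributing a rate of 0.  Returns the mean over prefixes.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    nt = nc = rt = rc = 0
    area = 0.0
    for k, i in enumerate(order, start=1):
        if treated[i]:
            nt += 1
            rt += outcome[i]
        else:
            nc += 1
            rc += outcome[i]
        t_rate = rt / nt if nt else 0.0
        c_rate = rc / nc if nc else 0.0
        area += (t_rate - c_rate) * k
    return area / len(scores)
```

A model earns a high value by concentrating treated responders (and control non-responders) at the top of its ranking, which is exactly the "differential rule" behaviour described above.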
Gunny, Ahmad Anas Nagoor; Arbain, Dachyar; Sithamparam, Logachanthirika
2013-09-15
Production cost of an enzyme is largely determined by the type of strain and the raw material used to propagate the strain. Hence, selection of the strain and raw materials is crucial in enzyme production. For glucose oxidase (GOx), previous studies showed that Aspergillus terreus UniMAP AA-1 offers a better alternative to the existing sources. Thus, a lower production cost could be logically anticipated by growing the strain in a cheaper complex medium such as molasses. In this work, sugar cane molasses supplemented with urea and carbonate salt, and a locally isolated strain Aspergillus terreus UniMAP AA-1, were used to produce a crude GOx enzyme on a small scale. A statistical optimization approach, namely Response Surface Methodology (RSM), was used to optimize the media components for the highest GOx activity. It was found that the highest GOx activity was achieved using a combination of molasses, carbonate salt and urea at concentrations of 32.51, 4.58 and 0.93% (w/v), respectively. This study provides an alternative set of optimized media conditions for GOx production using locally available raw materials. PMID:24502155
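The core of an RSM optimization is fitting a second-order response surface to designed experiments and locating its stationary point. As a minimal one-factor sketch (real RSM fits a multi-factor quadratic by least squares over a central composite or Box-Behnken design; the design points below are invented), one can fit a parabola exactly through three design points and take the vertex as the predicted optimum:

```python
def quadratic_fit(p1, p2, p3):
    # Exact second-order fit y = a*x^2 + b*x + c through three (x, y)
    # design points, via the Lagrange-interpolation closed form.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3 * x3 * (y1 - y2) + x2 * x2 * (y3 - y1)
         + x1 * x1 * (y2 - y3)) / denom
    c = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2
         + x1 * x2 * (x1 - x2) * y3) / denom
    return a, b, c

def optimum(a, b):
    # Stationary point of the fitted surface (a maximum when a < 0).
    return -b / (2.0 * a)
```

With responses generated from y = -(x - 3)² + 10, the fitted vertex recovers the optimal factor setting x = 3.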
A hybrid approach to crowd density estimation using statistical learning and texture classification
NASA Astrophysics Data System (ADS)
Li, Yin; Zhou, Bowen
2013-12-01
Crowd density estimation is a hot topic in the computer vision community. Established algorithms for crowd density estimation mainly focus on moving crowds, employing background modeling to obtain crowd blobs. However, people's motion is not obvious in many settings, such as the waiting hall of an airport or the lobby of a railway station. Moreover, conventional algorithms for crowd density estimation cannot yield desirable results for all levels of crowding due to occlusion and clutter. We propose a hybrid method to address the aforementioned problems. First, statistical learning is introduced for background subtraction, comprising a training phase and a test phase. The crowd images are gridded into small blocks which denote foreground or background. HOG features are then extracted from each block and fed into a binary SVM. Hence, crowd blobs can be obtained from the classification results of the trained classifier. Second, the crowd images are treated as texture images. Therefore, the estimation problem can be formulated as texture classification. The density level can be derived according to the classification results. We validate the proposed algorithm on real scenarios where the crowd motion is not obvious. Experimental results demonstrate that our approach can obtain the foreground crowd blobs accurately and works well for different levels of crowding.
Lipid binding protein response to a bile acid library: a combined NMR and statistical approach.
Tomaselli, Simona; Pagano, Katiuscia; Boulton, Stephen; Zanzoni, Serena; Melacini, Giuseppe; Molinari, Henriette; Ragona, Laura
2015-11-01
Primary bile acids, differing in hydroxylation pattern, are synthesized from cholesterol in the liver and, once formed, can undergo extensive enzyme-catalysed glycine/taurine conjugation, giving rise to a complex mixture, the bile acid pool. Composition and concentration of the bile acid pool may be altered in diseases, posing a general question on the response of the carrier (bile acid binding protein) to the binding of ligands with different hydrophobic and steric profiles. A collection of NMR experiments (H/D exchange, HET-SOFAST, ePHOGSY NOESY/ROESY and 15N relaxation measurements) was thus performed on apo and five different holo proteins, to monitor the binding pocket accessibility and dynamics. The ensemble of obtained data could be rationalized by a statistical approach, based on chemical shift covariance analysis, in terms of residue-specific correlations and collective protein response to ligand binding. The results indicate that the same residues are influenced by diverse chemical stresses: ligand binding always induces silencing of motions at the protein portal with a concomitant conformational rearrangement of a network of residues, located at the protein anti-portal region. This network of amino acids, which do not belong to the binding site, forms a contiguous surface, sensing the presence of the bound lipids, with a signalling role in switching protein-membrane interactions on and off. PMID:26260520
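Chemical shift covariance analysis rests on a simple computation: correlate, residue against residue, the chemical shift perturbations observed across the ligand series, and flag strongly co-varying pairs as a concerted network. The sketch below is a minimal version of that step (the full method involves clustering and significance filtering); the residue names and shift values are invented for illustration.

```python
def pearson(x, y):
    # Pearson correlation between two equal-length shift series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def correlated_pairs(shifts, threshold=0.98):
    # shifts: {residue: [shift perturbation per ligand in the series]}.
    # Returns residue pairs whose responses co-vary strongly, i.e.
    # candidates for a collective response network to ligand binding.
    names = sorted(shifts)
    return [(r1, r2)
            for i, r1 in enumerate(names) for r2 in names[i + 1:]
            if abs(pearson(shifts[r1], shifts[r2])) >= threshold]
```

Residues far apart in sequence that nonetheless correlate across the whole bile acid library are exactly the kind of anti-portal network the abstract describes.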
Numerical study of chiral plasma instability within the classical statistical field theory approach
NASA Astrophysics Data System (ADS)
Buividovich, P. V.; Ulybyshev, M. V.
2016-07-01
We report on a numerical study of real-time dynamics of electromagnetically interacting chirally imbalanced lattice Dirac fermions within the classical statistical field theory approach. Namely, we perform exact simulations of the real-time quantum evolution of fermionic fields coupled to classical electromagnetic fields, which are in turn coupled to the vacuum expectation value of the fermionic electric current. We use the Wilson-Dirac Hamiltonian for fermions and a noncompact action for the gauge field. In general, we observe that the backreaction of fermions on the electromagnetic field prevents the system from acquiring chirality imbalance. In the case of chirality pumping in parallel electric and magnetic fields, the electric field is screened by the produced on-shell fermions and the accumulation of chirality is hence stopped. In the case of evolution with initially present chirality imbalance, axial charge tends to transform to helicity of the electromagnetic field. By performing simulations on large lattices we show that in most cases this decay process is accompanied by the inverse cascade phenomenon, which transfers energy from short-wavelength to long-wavelength electromagnetic fields. In some simulations, however, we observe a very clear signature of inverse cascade for the helical magnetic fields that is not accompanied by the axial charge decay. This suggests that the relation between the inverse cascade and axial charge decay is not as straightforward as predicted by the simplest form of anomalous Maxwell equations.
Statistical approaches to detecting and analyzing tandem repeats in genomic sequences.
Anisimova, Maria; Pečerska, Julija; Schaper, Elke
2015-01-01
Tandem repeats (TRs) are frequently observed in genomes across all domains of life. Evidence suggests that some TRs are crucial for proteins with fundamental biological functions and can be associated with virulence, resistance, and infectious/neurodegenerative diseases. Genome-scale systematic studies of TRs have the potential to unveil core mechanisms governing TR evolution and TR roles in shaping genomes. However, TR-related studies are often non-trivial due to heterogeneous and sometimes fast evolving TR regions. In this review, we discuss these intricacies and their consequences. We present our recent contributions to computational and statistical approaches for TR significance testing, sequence profile-based TR annotation, TR-aware sequence alignment, phylogenetic analyses of TR unit number and order, and TR benchmarks. Importantly, all these methods explicitly rely on the evolutionary definition of a tandem repeat as a sequence of adjacent repeat units stemming from a common ancestor. The discussed work has a focus on protein TRs, yet is generally applicable to nucleic acid TRs, sharing similar features. PMID:25853125
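The evolutionary definition stressed in the review, adjacent repeat units stemming from a common ancestor, can be illustrated with a naive scanner for perfect tandem repeats. Real TR annotation tools handle diverged, imperfect copies and attach significance tests; this sketch only finds maximal runs of an exactly repeated unit, with all parameter defaults being assumptions.

```python
def tandem_repeats(seq, min_unit=1, max_unit=6, min_copies=3):
    # Scan for perfect tandem repeats: maximal runs of an identical,
    # adjacent repeat unit.  Returns (start, unit, copy_number) tuples.
    found = []
    for unit_len in range(min_unit, max_unit + 1):
        i = 0
        while i + unit_len <= len(seq):
            unit = seq[i:i + unit_len]
            copies = 1
            while seq[i + copies * unit_len:
                      i + (copies + 1) * unit_len] == unit:
                copies += 1
            if copies >= min_copies:
                found.append((i, unit, copies))
                i += copies * unit_len  # skip past the whole array
            else:
                i += 1
    return found
```

For example, scanning "TTACGACGACGTT" with a fixed unit length of 3 recovers the single ACG array of three adjacent copies starting at position 2.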
NASA Astrophysics Data System (ADS)
Seetha, D.; Velraj, G.
2015-10-01
Characterization of ancient materials brings back further evidence of ancient peoples' lifestyles. In this study, archaeological pottery shards recently excavated from Kodumanal, Erode District in Tamilnadu, South India, were investigated. The experimental results shed light on the elemental and mineral composition of the pottery shards. The FT-IR technique indicates the mineralogy and shows that the firing temperature of the samples was less than 800 °C, in an oxidizing/reducing atmosphere; XRD was used as a complementary technique for the mineralogy. A thorough scientific study combining SEM-EDS with a statistical approach to establish the provenance of the selected pot shards had not previously been performed. EDS and XRF results revealed that the investigated samples contain the elements O, Si, Al, Fe, Mn, Mg, Ca, Ti, K and Na in different proportions. To establish the provenance (same or different origin) of the pottery samples, the Al/Si concentration ratio as well as hierarchical cluster analysis (HCA) was used, and the results are correlated. PMID:25942086
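Provenance grouping by the Al/Si ratio amounts to clustering sherds on one feature. A minimal single-linkage version, sorting the ratios and splitting wherever the gap between neighbours exceeds a linkage cutoff (equivalent to cutting a single-linkage dendrogram at that height in one dimension), is sketched below; the ratio values and cutoff are invented for illustration, not measurements from the study.

```python
def cluster_1d(values, cutoff):
    # Single-linkage clustering on one feature: sort the values and
    # start a new cluster wherever the gap exceeds the cutoff.
    order = sorted(range(len(values)), key=lambda i: values[i])
    clusters = [[order[0]]]
    for prev, cur in zip(order, order[1:]):
        if values[cur] - values[prev] > cutoff:
            clusters.append([cur])
        else:
            clusters[-1].append(cur)
    return clusters

# Hypothetical Al/Si concentration ratios for six pottery shards.
ratios = [0.31, 0.29, 0.30, 0.52, 0.55, 0.53]
groups = cluster_1d(ratios, cutoff=0.05)
```

Two well-separated groups emerge, which is the kind of same-origin/different-origin signal the HCA step in the study looks for (full HCA would of course use all measured element concentrations, not a single ratio).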
NASA Technical Reports Server (NTRS)
Lua, Yuan J.; Liu, Wing K.; Belytschko, Ted
1992-01-01
A stochastic damage model for predicting the rupture of a brittle multiphase material is developed, based on the microcrack-macrocrack interaction. The model, which incorporates uncertainties in locations, orientations, and numbers of microcracks, characterizes damage by microcracking and fracture by macrocracking. A parametric study is carried out to investigate how the configuration of microcracks changes the stress intensity at the macrocrack tip. The inherent statistical distribution of the fracture toughness arising from the intrinsic random nature of microcracks is explored using a statistical approach. For this purpose, a computer simulation model is introduced, which incorporates a statistical characterization of geometrical parameters of a random microcrack array.
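The simulation idea, random microcrack populations inducing a statistical strength distribution, can be illustrated with a weakest-link Monte Carlo sketch. This is not the paper's micromechanical model: the local strength law and all numbers below are placeholder assumptions, used only to show how a random flaw array turns a deterministic material into a statistically distributed one.

```python
import random

def simulate_strengths(n_cracks, n_specimens=2000, seed=7):
    # Weakest-link Monte Carlo: each specimen contains n_cracks
    # microcracks with random local strengths; the specimen fails at
    # its most critical (weakest) flaw.  Local strengths are drawn
    # from a hypothetical uniform(0.5, 1.5) law for illustration.
    rng = random.Random(seed)
    strengths = []
    for _ in range(n_specimens):
        local = [rng.uniform(0.5, 1.5) for _ in range(n_cracks)]
        strengths.append(min(local))
    return strengths

def mean(xs):
    return sum(xs) / len(xs)
```

The familiar size effect of brittle fracture falls out immediately: specimens with more microcracks are, on average, weaker, and the scatter of strengths is what a fracture-toughness distribution captures.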
Advances in Landslide Hazard Forecasting: Evaluation of Global and Regional Modeling Approach
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia B.; Adler, Robert; Hone, Yang; Kumar, Sujay; Peters-Lidard, Christa; Lerner-Lam, Arthur
2010-01-01
A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that exhibit a high potential for landslide activity by combining a calculation of landslide susceptibility with satellite-derived rainfall estimates. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale landslide forecasting efforts, it requires several modifications before it can be fully realized as an operational tool. The evaluation finds that landslide forecasting may be more feasible at a regional scale. This study draws upon a prior work's recommendations to develop a new approach for considering landslide susceptibility and forecasting at the regional scale. This case study uses a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America: Guatemala, Honduras, El Salvador and Nicaragua. A regional susceptibility map is calculated from satellite and surface datasets using a statistical methodology. The susceptibility map is tested with a regional rainfall intensity-duration triggering relationship and results are compared to the global algorithm framework for the Hurricane Mitch event. The statistical results suggest that this regional investigation provides one plausible way to approach some of the data and resolution issues identified in the global assessment, providing more realistic landslide forecasts for this case study. Evaluation of landslide hazards for this extreme event helps to identify several potential improvements of the algorithm framework, but also highlights several remaining challenges for the algorithm assessment, transferability and performance accuracy. Evaluation challenges include representation errors from comparing susceptibility maps of different spatial resolutions, biases in event-based landslide inventory data, and limited nonlandslide event data for more comprehensive evaluation. Additional factors that may improve
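The rainfall intensity-duration triggering relationship mentioned above is commonly expressed as a power law, I = a·D^b. A minimal sketch of combining such a threshold with a susceptibility mask follows; the coefficients are illustrative placeholders, not the study's calibrated regional values.

```python
def exceeds_id_threshold(intensity_mm_hr, duration_hr, a=12.0, b=-0.6):
    # Power-law intensity-duration triggering threshold I = a * D**b.
    # The coefficients a and b here are hypothetical, for illustration.
    return intensity_mm_hr > a * duration_hr ** b

def flag_cells(susceptible, rain):
    # Combine a boolean susceptibility mask with the rainfall trigger
    # to produce a simple per-grid-cell landslide nowcast.
    return [s and exceeds_id_threshold(i, d)
            for s, (i, d) in zip(susceptible, rain)]
```

Note the threshold's shape: long-duration storms trigger at lower intensities (b < 0), which is why both satellite rainfall intensity and storm duration enter the forecast.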
Waves and Wine: Advanced approaches for characterizing and exploiting micro-terroir
NASA Astrophysics Data System (ADS)
Hubbard, S. S.; Grote, K. R.; Freese, P.; Peterson, J. E.; Rubin, Y.
2012-12-01
This study uses a combination of advanced characterization techniques (including airborne imagery, microclimate, and surface geophysical data) with statistical approaches to identify vineyard zones that have fairly uniform soil, vegetation, and micrometeorological parameters. The obtained information is used in simple water balance models that can be used to design block-specific irrigation parameters. This effort has illustrated how straightforward numerical techniques and commercially available characterization approaches can be used to optimize block layout and to guide precision irrigation strategies, leading to optimized and uniform vegetation and winegrape characteristics within vineyard blocks. Recognition and incorporation of information on small-scale variabilities into vineyard development and management practices could lead to winegrapes that better reflect the micro-terroir of the area. Advanced approaches, such as those described here, are expected to become increasingly important as available land and water resources continue to decrease, as spatially extensive datasets become less costly to collect and interpret, and as the public demand for high-quality wine produced in an environmentally friendly manner continues to increase.
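The "simple water balance models" mentioned above are typically bucket models: soil water storage is topped up by rain, drawn down by evapotranspiration, capped at the soil's holding capacity, and the surplus drains. A hedged sketch, with capacity and irrigation-target values that are illustrative assumptions rather than vineyard-specific numbers:

```python
def water_balance(storage_mm, rain_mm, et_mm, capacity_mm=150.0):
    # One time step of a bucket water-balance model for a vineyard
    # block.  Returns (new_storage, drainage); capacity is illustrative.
    s = storage_mm + rain_mm - et_mm
    drainage = max(0.0, s - capacity_mm)
    s = min(max(s, 0.0), capacity_mm)
    return s, drainage

def irrigation_need(storage_mm, target_mm=60.0):
    # Irrigate only the deficit below a target storage, so blocks with
    # different soils receive block-specific amounts.
    return max(0.0, target_mm - storage_mm)
```

Running the balance separately for each characterized block is what turns the zone maps into block-specific irrigation schedules.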
NASA Astrophysics Data System (ADS)
von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin
2016-04-01
Handling high-dimensional data sets, such as those that occur in turbulent flows or in certain types of multiscale behaviour in the geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modeling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References [1] I
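Since vector-valued auto-regressive models form the basis of the surrogate modelling above, the scalar core is worth making concrete: the least-squares estimate of the AR(1) coefficient is the ratio of the lag-1 cross-moment to the lagged second moment. A minimal sketch (the project's models are vector-valued with external influences; this is only the one-dimensional skeleton):

```python
def fit_ar1(series):
    # Least-squares estimate of phi in x_t = phi * x_{t-1} + eps_t:
    # phi_hat = sum(x_t * x_{t-1}) / sum(x_{t-1}^2).
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def simulate_ar1(phi, noise, x0=1.0):
    # Generate an AR(1) path driven by a given noise sequence.
    xs = [x0]
    for e in noise:
        xs.append(phi * xs[-1] + e)
    return xs
```

On a noiseless path the estimator recovers the coefficient exactly, which is a convenient correctness check before fitting real flux-fluctuation data.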
Advanced fractal approach for unsupervised classification of SAR images
NASA Astrophysics Data System (ADS)
Pant, Triloki; Singh, Dharmendra; Srivastava, Tanuja
2010-06-01
Unsupervised classification of Synthetic Aperture Radar (SAR) images is the alternative approach when no or minimal a priori information about the image is available. Therefore, an attempt has been made in the present paper to develop an unsupervised classification scheme for SAR images based on textural information. For extraction of textural features, two properties are used, viz. the fractal dimension D and Moran's I. Using these indices, an algorithm is proposed for contextual classification of SAR images. The novelty of the algorithm is that it implements the textural information available in a SAR image with the help of two texture measures, viz. D and I. For estimation of D, the Two Dimensional Variation Method (2DVM) has been revised and implemented, and its performance is compared with another method, i.e., the Triangular Prism Surface Area Method (TPSAM). It is also necessary to check the classification accuracy for various window sizes and to optimize the window size for the best classification. This exercise has been carried out to determine the effect of window size on classification accuracy. The algorithm is applied to four SAR images of the Hardwar region, India, and the classification accuracy has been computed. A comparison of the proposed algorithm using both fractal dimension estimation methods with the K-Means algorithm is discussed. The maximum overall classification accuracy with K-Means is 53.26%, whereas the overall classification accuracy with the proposed algorithm is 66.16% for TPSAM and 61.26% for 2DVM.
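Of the two texture measures used above, Moran's I has a compact definition that is easy to implement directly: I = (n/W) · ΣΣ wᵢⱼ(xᵢ-x̄)(xⱼ-x̄) / Σ(xᵢ-x̄)², where W is the sum of the spatial weights. The sketch below uses rook (4-neighbour) contiguity with binary weights, one common choice and an assumption here, since the paper does not fix the weighting in the abstract.

```python
def morans_i(grid):
    # Moran's I spatial autocorrelation for a 2-D grid, with binary
    # rook-contiguity weights (each cell linked to its 4 neighbours).
    rows, cols = len(grid), len(grid[0])
    vals = [v for row in grid for v in row]
    n = len(vals)
    mean = sum(vals) / n
    dev = [[v - mean for v in row] for row in grid]
    cross = 0.0   # sum of w_ij * dev_i * dev_j over ordered pairs
    w_sum = 0     # W, the total weight
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    cross += dev[r][c] * dev[rr][cc]
                    w_sum += 1
    ss = sum(d * d for row in dev for d in row)
    return (n / w_sum) * (cross / ss)
```

A perfect checkerboard gives I = -1 (maximal negative spatial autocorrelation), while smooth, clumped textures push I toward +1; that contrast is what makes I useful as a per-window texture feature alongside D.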
Recent Advances in Treatment Approaches of Mucopolysaccharidosis VI.
Giugliani, Roberto; Carvalho, Clarissa Gutiérrez; Herber, Silvani; de Camargo Pinto, Louise Lapagesse
2011-06-01
Mucopolysaccharidosis VI is caused by accumulation of the glycosaminoglycan dermatan sulfate in all tissues due to decreased activity of the enzyme arylsulfatase B. Patients exhibit multisystemic signs and symptoms in a chronic and progressive manner, especially changes in the skeleton, cardiopulmonary system, cornea, skin, liver, spleen and meninges. Patients usually have normal intelligence. In the past, treatment of mucopolysaccharidoses was limited to palliative medical care. The outcome for affected patients improved with the introduction of new technologies such as hematopoietic stem cell transplantation, now relegated to specific situations after enzyme replacement therapy (ERT) became available. The specific ERT for MPS VI, galsulfase (Naglazyme®, Biomarin Pharmaceutical), was approved in 2005 by the FDA and in 2006 by the EMEA, and three clinical studies including 56 patients have evaluated its efficacy and safety. Long-term follow-up data for patients treated for up to 5 years showed that ERT is well tolerated and associated with sustained improvements in the patients' clinical condition. Intrathecal ERT may be considered in situations of high neurosurgical risk, but it is still experimental in humans, as is intra-articular ERT. It is possible that the full impact of this therapy will only be demonstrated when patients are identified and treated soon after birth, as it was shown that early introduction of ERT produced immune tolerance and improved enzyme effectiveness in the cat model. New insights into the pathophysiology of MPS disorders are leading to alternative therapeutic approaches, such as gene therapy, inflammatory response modulators and substrate reduction therapy. PMID:21506914
Fracture and electric current in the crust: a q-statistical approach
NASA Astrophysics Data System (ADS)
Cartwright-Taylor, A. L.; Vallianatos, F.; Sammonds, P. R.
2013-12-01
We have conducted room-temperature, triaxial compression experiments on samples of Carrara marble, recording concurrently acoustic and electric current signals emitted during deformation as well as mechanical loading information and ultrasonic wave velocities. Our results reveal that, in a non-piezoelectric rock under simulated crustal conditions, a measurable and increasing electric current (nA) is generated within the stressed sample in the region beyond (quasi-)linear elastic deformation; i.e. in the region of permanent deformation beyond the yield point of the material and in the presence of microcracking. This has implications for the earthquake preparation process. Our results extend to shallow crustal conditions previous observations of electric current signals in quartz-free rocks undergoing uniaxial deformation, supporting the idea of a universal electrification mechanism related to deformation; a number of which have been proposed. Confining pressure conditions of our slow strain rate experiments range from the purely brittle regime to the semi-brittle transition where cataclastic flow is the dominant deformation mechanism. Electric current evolution under these two confining pressures shows some markedly different features, implying the existence of a current-producing mechanism during both microfracture and frictional sliding, possibly related to crack localisation. In order to analyse these 'pressure-stimulated' electric currents, we adopt an entropy-based non-extensive statistical physics approach that is particularly suited to the study of fracture-related phenomena. In the presence of a long timescale (hours) external driving force (i.e. loading), the measured electric current exhibits transient, nonstationary behaviour with strong fluctuations over short timescales (seconds); calmer periods punctuated by bursts of strong activity. We find that the probability distribution of normalised electric current fluctuations over short time intervals (0.5s
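The non-extensive (Tsallis) statistics invoked above replace the exponential with the q-exponential, expq(x) = [1 + (1-q)x]₊^(1/(1-q)), which recovers the ordinary exponential as q → 1 and produces the fat-tailed q-Gaussian fluctuation distributions typical of fracture phenomena. A minimal implementation (the specific q value for the measured current fluctuations is not quoted in the text, so none is assumed):

```python
import math

def q_exp(x, q):
    # Tsallis q-exponential: exp_q(x) = [1 + (1-q)x]_+^{1/(1-q)},
    # with the ordinary exponential recovered in the limit q -> 1.
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0  # the [.]_+ cutoff
    return base ** (1.0 / (1.0 - q))
```

A q-Gaussian fit of normalised fluctuations would use a density proportional to q_exp(-beta * x**2, q); for q > 1 the tails decay as a power law rather than exponentially, which is the signature of long-range correlated, bursty activity.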
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.
2013-04-27
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the
• number of samples required to achieve a specified confidence in characterization and clearance decisions
• confidence in making characterization and clearance decisions for a specified number of samples
for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
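The basic all-negative-samples calculation behind such confidence statements solves 1 - (1 - p_eff)ⁿ ≥ C for n, with the per-sample detection probability discounted by the false negative rate. This sketch is the textbook form only; the report's hotspot and CJR formulas are more elaborate, and the p_detect value below is an illustrative assumption.

```python
import math

def n_for_confidence(p_detect, confidence, fnr=0.0):
    # Samples needed so that, with the stated confidence, at least one
    # sample would be positive if contamination were present:
    # solve 1 - (1 - p_eff)^n >= confidence, with the per-sample hit
    # probability discounted by the false negative rate (FNR).
    p_eff = p_detect * (1.0 - fnr)
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_eff))

def confidence_from_n(n, p_detect, fnr=0.0):
    # Confidence achieved when all n collected samples are negative.
    p_eff = p_detect * (1.0 - fnr)
    return 1.0 - (1.0 - p_eff) ** n
```

For example, with a 10% per-sample detection probability and a perfect assay, 29 negative samples support a 95% confidence statement; a 50% FNR roughly doubles the requirement.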
NASA Astrophysics Data System (ADS)
Zhou, Ying; Cheng, Shuiyuan; Chen, Dongsheng; Lang, Jianlei; Zhao, Beibei; Wei, Wei
2014-09-01
This paper, which addresses the primary gaseous air pollutants (i.e., SO2, NOx, VOCs and CO), is the third in a series of papers published in Atmospheric Environment developing new emission estimation models by the regression method. A group of regression models for various industrial and non-industrial sectors was proposed based on an emission investigation case study of the Handan region in northern China. The main data requirements of the regression models for industrial sectors were coal consumption, oil consumption, gaseous fuel consumption and annual industrial output. The data requirements for non-industrial sector emission estimations were the population, the number of resident population households, the vehicle population, the area of construction sites, the forestland area, and the orchard area. The models were then applied to the Tangshan region in northern China. The results showed that the developed regression models had relatively satisfactory performance. The modeling errors at the regional level for SO2, NOx, VOCs and CO were -16.5%, -10.6%, -11.8% and -22.6%, respectively. The corresponding modeling errors at the county level were 39.9%, 33.9%, 46.3% and 46.9%, respectively. The models were also applied to other regions in northern China. The results revealed that the new models could develop emission inventories with generally lower error than found in previous emission inventory studies. The developed models have the advantage of using only publicly available statistical information to develop high-accuracy, high-resolution emission inventories, without requiring the detailed data investigation that is necessary in the conventional “bottom-up” emission inventory development approach.
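The sector regressions described above, emissions as a function of activity data such as coal consumption, reduce in their simplest form to ordinary least squares, and the quoted modeling errors are relative percentage errors of model against observation. A one-predictor sketch (the paper's models are multi-input; the data below are invented):

```python
def ols(x, y):
    # Ordinary least squares for a one-predictor emission model
    # E = b0 + b1 * activity; a minimal stand-in for the paper's
    # multi-input sector regressions.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((a - mx) * (b - my) for a, b in zip(x, y))
          / sum((a - mx) ** 2 for a in x))
    b0 = my - b1 * mx
    return b0, b1

def pct_error(model, observed):
    # Modeling error as reported in the abstract: (model - obs) / obs.
    return 100.0 * (model - observed) / observed
```

Aggregating pct_error over counties versus over the whole region explains why the county-level errors quoted above are larger: local deviations partially cancel at the regional level.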
Ström, Peter; Støer, Nathalie; Borthwick, Nicola; Dong, Tao; Hanke, Tomáš; Reilly, Marie
2016-08-01
To investigate in detail the effect of infection or vaccination on the human immune system, ELISpot assays are used to simultaneously test the immune response to a large number of peptides of interest. Scientists commonly use "peptide pools", where, instead of an individual peptide, a test well contains a group of peptides. Since the response from a well may be due to any or many of the peptides in the pool, pooled assays usually need to be followed by confirmatory assays of a number of individual peptides. We present a statistical method that enables estimation of individual peptide responses from pool responses using the Expectation Maximization (EM) algorithm for "incomplete data". We demonstrate the accuracy and precision of these estimates in simulation studies of ELISpot plates with 90 pools of 6 or 7 peptides arranged in three dimensions and three Mock wells for the estimation of background. In analysis of real pooled data from 6 subjects in an HIV-1 vaccine trial, where 199 peptides were arranged in 80 pools of size 9 or 10, our estimates were in very good agreement with the results from individual-peptide confirmatory assays. Compared to the classical approach, we could identify almost all the same peptides with high or moderate response, with less than half the number of confirmatory tests. Our method facilitates efficient use of the information available in pooled ELISpot data to avoid or reduce the need for confirmatory testing. We provide an easy-to-use free online application for implementing the method, where on uploading two spreadsheets with the pool design and pool responses, the user obtains the estimates of the individual peptide responses. PMID:27196788
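The EM idea for pooled responses can be sketched under a strong simplifying assumption: each pool's spot count is Poisson with mean equal to the sum of its member peptides' rates (the authors' model additionally handles background via Mock wells; the function name and data below are invented):

```python
import numpy as np

def em_pool_rates(pools, counts, n_peptides, iters=500):
    # EM for per-peptide response rates from pooled counts, assuming each pool
    # count is Poisson with mean equal to the sum of its members' rates.
    lam = np.ones(n_peptides)
    for _ in range(iters):
        attributed = np.zeros(n_peptides)
        membership = np.zeros(n_peptides)
        for pool, n in zip(pools, counts):
            share = lam[pool] / lam[pool].sum()  # E-step: split the pool count
            attributed[pool] += n * share        # among member peptides
            membership[pool] += 1
        lam = attributed / membership            # M-step: average attributions
    return lam

# Three peptides in three overlapping pools; the exact solution is (10, 0, 5)
rates = em_pool_rates([[0, 1], [1, 2], [0, 2]], [10, 5, 15], 3)
```

Because the pools overlap, the individual rates are identifiable from the pooled counts alone, which is what lets the method cut down on confirmatory assays.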
NASA Astrophysics Data System (ADS)
Crucifix, Michel; Wilkinson, Richard; Carson, Jake; Preston, Simon; Alemeida, Carlos; Rougier, Jonathan
2013-04-01
The existence of an action of astronomical forcing on the Pleistocene climate is almost undisputed. However, quantifying this action is not straightforward. In particular, the phenomenon of deglaciation is generally interpreted as a manifestation of instability, which is typical of non-linear systems. As a consequence, explaining the Pleistocene climate record as the addition of an astronomical contribution and noise (as is often done using harmonic analysis tools) is potentially deceptive. Rather, we advocate a methodology in which non-linear stochastic dynamical systems are calibrated on the Pleistocene climate record. The exercise, though, requires careful statistical reasoning and state-of-the-art techniques. In fact, the problem has been judged to be mathematically 'intractable and unsolved' and some pragmatism is justified. In order to illustrate the methodology, we consider one dynamical system that potentially captures four dynamical features of the Pleistocene climate: the existence of a saddle-node bifurcation in at least one of its slow components, a time-scale separation between a slow and a fast component, the action of astronomical forcing, and the existence of a stochastic contribution to the system dynamics. This model is obviously not the only possible representation of Pleistocene dynamics, but it encapsulates our theoretical and empirical knowledge well enough, in a very simple form, to constitute a valid starting point. The purpose of this poster is to outline the practical challenges in calibrating such a model on paleoclimate observations. Just as in time series analysis, there is no single, universal test or criterion that would demonstrate the validity of an approach. Several methods exist to calibrate the model, and judgement develops by confronting the results of the different methods. In particular, we consider here Kalman filter variants, particle Markov chain Monte Carlo, and two other variants of Sequential Monte
NASA Astrophysics Data System (ADS)
Otero, Noelia; Butler, Tim; Sillmann, Jana
2015-04-01
Air pollution has become a serious problem in many industrialized and densely populated urban areas due to its negative effects on human health, agricultural crops, and ecosystems. The concentration of air pollutants is the result of several factors, including emission sources, lifetime and spatial distribution of the pollutants, atmospheric properties and interactions, wind speed and direction, and topographic features. Episodes of air pollution are often associated with stationary or slowly migrating anticyclonic (high-pressure) systems that reduce advection, diffusion, and deposition of atmospheric pollutants. Certain weather conditions facilitate the concentration of pollutants, such as light winds, which contribute to more frequent stagnation episodes affecting air quality. Therefore, the atmospheric circulation plays an important role in air quality conditions, which are affected by both synoptic- and local-scale processes. This study assesses the influence of the large-scale circulation along with meteorological conditions on tropospheric ozone in Europe. The frequency of weather types (WTs) is examined under a novel approach, which is based on an automated version of the Lamb Weather Types catalog (Jenkinson and Collison, 1977). Here, we present an implementation of such a classification point-by-point over the European domain. Moreover, the analysis uses a new grid-averaged climatology (1°x1°) of daily surface ozone concentrations from observations of individual sites that matches the resolution of global models (Schnell et al., 2014). Daily frequencies of WTs and meteorological conditions are combined in a multiple regression approach to investigate their influence on ozone concentrations. Different subsets of predictors are examined within multiple linear regression models (MLRs) for each grid cell in order to identify the best regression model. Several statistical metrics are applied for estimating the robustness of the
Yang, Jinzhong; Woodward, Wendy A.; Reed, Valerie K.; Strom, Eric A.; Perkins, George H.; Tereffe, Welela; Buchholz, Thomas A.; Zhang, Lifei; Balter, Peter; Court, Laurence E.; Li, X. Allen; Dong, Lei
2014-05-01
Purpose: To develop a new approach for interobserver variability analysis. Methods and Materials: Eight radiation oncologists specializing in breast cancer radiation therapy delineated a patient's left breast “from scratch” and from a template that was generated using deformable image registration. Three of the radiation oncologists had previously received training in the Radiation Therapy Oncology Group (RTOG) consensus contouring atlas for breast cancer. The simultaneous truth and performance level estimation algorithm was applied to the 8 contours delineated “from scratch” to produce a group consensus contour. Individual Jaccard scores were fitted to a beta distribution model. We also applied this analysis to 2 additional patients, who were contoured by 9 breast radiation oncologists from 8 institutions. Results: The beta distribution model had a mean of 86.2%, standard deviation (SD) of ±5.9%, a skewness of −0.7, and excess kurtosis of 0.55, exemplifying broad interobserver variability. The 3 RTOG-trained physicians had higher agreement scores than average, indicating that their contours were close to the group consensus contour. One physician had high sensitivity but lower specificity than the others, which implies that this physician tended to contour a structure larger than those of the others. Two other physicians had low sensitivity but specificity similar to the others, which implies that they tended to contour a structure smaller than the others. With this information, they could adjust their contouring practice to be more consistent with others if desired. When contouring from the template, the beta distribution model had a mean of 92.3%, SD of ±3.4%, skewness of −0.79, and excess kurtosis of 0.83, which indicated much better consistency among individual contours. Similar results were obtained for the analysis of the 2 additional patients. Conclusions: The proposed statistical approach was able to measure interobserver variability quantitatively and to
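Fitting agreement scores to a beta distribution, as described above, can be sketched with a method-of-moments estimator (a simpler estimator than the authors may have used; the scores below are invented, not the study's data):

```python
import statistics

def beta_fit_moments(scores):
    # Method-of-moments estimates of the beta distribution's alpha and beta
    # parameters from scores in (0, 1).
    m = statistics.mean(scores)
    v = statistics.variance(scores)
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common

# Hypothetical Jaccard agreement scores for eight observers
scores = [0.86, 0.80, 0.91, 0.84, 0.88, 0.79, 0.90, 0.83]
alpha, beta = beta_fit_moments(scores)
```

The fitted mean alpha/(alpha+beta) reproduces the sample mean by construction, and a smaller fitted variance (as in the template-based contours) corresponds to larger alpha and beta.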
A Constructivist Approach in a Blended E-Learning Environment for Statistics
ERIC Educational Resources Information Center
Poelmans, Stephan; Wessa, Patrick
2015-01-01
In this study, we report on the students' evaluation of a self-constructed constructivist e-learning environment for statistics, the compendium platform (CP). The system was built to endorse deeper learning with the incorporation of statistical reproducibility and peer review practices. The deployment of the CP, with interactive workshops and…
A Workshop Approach Using Spreadsheets for the Teaching of Statistics and Probability.
ERIC Educational Resources Information Center
Hall, A. G.
1995-01-01
Describes an introductory course on data, statistics, and probability given to first-year electronic engineering students at the University of Hull (United Kingdom); it is taught via workshops using spreadsheets. The four components are data and graphs, random data and statistics, probability distributions, and probability and events. (AEF)
NASA Astrophysics Data System (ADS)
Mazzitello, Karina I.; Candia, Julián
2012-12-01
In every country, public and private agencies allocate extensive funding to collect large-scale statistical data, which in turn are studied and analyzed in order to determine local, regional, national, and international policies regarding all aspects relevant to the welfare of society. One important aspect of that process is the visualization of statistical data with embedded geographical information, which most often relies on archaic methods such as maps colored according to graded scales. In this work, we apply nonstandard visualization techniques based on physical principles. We illustrate the method with recent statistics on homicide rates in Brazil and their correlation to other publicly available data. This physics-based approach provides a novel tool that can be used by interdisciplinary teams investigating statistics and model projections in a variety of fields such as economics and gross domestic product research, public health and epidemiology, sociodemographics, political science, business and marketing, and many others.
NASA Astrophysics Data System (ADS)
Salman, Ahmad; Lapidot, Itshak; Pomerantz, Ami; Tsror, Leah; Shufan, Elad; Moreh, Raymond; Mordechai, Shaul; Huleihel, Mahmoud
2012-01-01
The early diagnosis of phytopathogens is of great importance; it could prevent large economic losses due to crops damaged by fungal diseases, and prevent unnecessary soil fumigation or the use of fungicides and bactericides, thus avoiding considerable environmental pollution. In this study, 18 isolates of three different fungal genera were investigated: six isolates of Colletotrichum coccodes, six isolates of Verticillium dahliae and six isolates of Fusarium oxysporum. Our main goal was to differentiate these fungal samples at the level of isolates, based on their infrared absorption spectra obtained using the Fourier transform infrared-attenuated total reflection (FTIR-ATR) sampling technique. Advanced statistical and mathematical methods, namely principal component analysis (PCA), linear discriminant analysis (LDA), and k-means clustering, were applied to the spectra after preprocessing. Our results showed significant spectral differences between the various fungal genera examined. The use of k-means enabled classification between the genera with 94.5% accuracy, whereas the use of PCA [3 principal components (PCs)] and LDA achieved a 99.7% success rate. However, at the level of isolates, the best differentiation results were obtained using PCA (9 PCs) and LDA for the lower wavenumber region (800-1775 cm-1), with identification success rates of 87%, 85.5%, and 94.5% for Colletotrichum, Fusarium, and Verticillium strains, respectively.
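A minimal version of the k-means step used for genus-level grouping can be sketched on synthetic 2-D points standing in for FTIR spectra (the initialization and data are illustrative, not the study's pipeline):

```python
import numpy as np

def kmeans(X, k, iters=50):
    # Plain k-means with a simple first-k-points initialization (illustrative).
    centers = X[:k].astype(float).copy()
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance)
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        # Recompute centers, keeping the old one if a cluster empties
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

# Two well-separated synthetic "spectra" clusters
X = np.array([[0.0, 0.0], [10.0, 10.0], [0.1, 0.0],
              [10.0, 10.1], [0.0, 0.2], [9.9, 10.0]])
labels = kmeans(X, 2)
```

In the study, each spectrum would be a high-dimensional vector and k the number of genera; PCA before clustering reduces that dimensionality.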
This paper presents a new method, based on a statistical approach, for estimating the uncertainty in simulating the transport and dispersion of atmospheric pollutants. The application of the method has been demonstrated by using observations and modeling results from a tracer experi...
Using statistical equivalence testing logic and mixed model theory, an approach has been developed that extends the work of Stork et al. (JABES, 2008) to define sufficient similarity in dose-response for chemical mixtures containing the same chemicals in different ratios ...
ERIC Educational Resources Information Center
Remsburg, Alysa J.; Harris, Michelle A.; Batzli, Janet M.
2014-01-01
How can science instructors prepare students for the statistics needed in authentic inquiry labs? We designed and assessed four instructional modules with the goals of increasing student confidence, appreciation, and performance in both experimental design and data analysis. Using extensions from a just-in-time teaching approach, we introduced…
NASA Astrophysics Data System (ADS)
Stein, Thorwald; Hogan, Robin; Hanley, Kirsty; Clark, Peter; Halliwell, Carol; Lean, Humphrey; Nicol, John; Plant, Robert
2016-04-01
National weather services increasingly use convection-permitting simulations to assist in their operational forecasts. The skill in forecasting rainfall from convection is much improved in such simulations compared to global models that rely on parameterisation schemes, but it is less obvious if and how increased model resolution or more advanced mixing and microphysics schemes improve the physical representation of convective storms. Here, we present a novel statistical approach using high-resolution radar data to evaluate the morphology, dynamics, and evolution of convective storms over southern England. In the DYMECS project (Dynamical and Microphysical Evolution of Convective Storms) we have used an innovative track-and-scan approach to target individual storms with the Chilbolton radar, which measures cloud and precipitation at scales less than 300 m out to 100 km. These radar observations provide three-dimensional storm volumes and estimates of updraft core strength and size at adequate scales to test high-resolution models. For two days of interest, we have run the Met Office forecast model at its operational configuration (1.5 km grid length) and at grid lengths of 500 m, 200 m, and 100 m. Radar reflectivity and Doppler winds were simulated from the model cloud and wind output for a like-with-like comparison against the radar observations. Our results show that although the 1.5 km simulation produces domain-averaged rainfall similar to that of the other simulations, the majority of its rainfall is produced by storms that are a factor of 1.5-2 larger than observed, as well as longer lived, while the updrafts of these storms are an order of magnitude stronger than estimated from observations. We generally find improvements as model resolution increases, although our results depend strongly on the mixing-length parameter in the model turbulence scheme. Our findings highlight the promising role of high-resolution radar data and observational strategies targeting individual storms
Scheffer, Hester J; Melenhorst, Marleen C A M; Vogel, Jantien A; van Tilborg, Aukje A J M; Nielsen, Karin; Kazemier, Geert; Meijerink, Martijn R
2015-06-01
Irreversible electroporation (IRE) is a novel image-guided ablation technique that is increasingly used to treat locally advanced pancreatic carcinoma (LAPC). We describe a 67-year-old male patient with a 5 cm stage III pancreatic tumor who was referred for IRE. Because the ventral approach for electrode placement was considered dangerous due to vicinity of the tumor to collateral vessels and duodenum, the dorsal approach was chosen. Under CT-guidance, six electrodes were advanced in the tumor, approaching paravertebrally alongside the aorta and inferior vena cava. Ablation was performed without complications. This case describes that when ventral electrode placement for pancreatic IRE is impaired, the dorsal approach could be considered alternatively. PMID:25288173
NASA Astrophysics Data System (ADS)
Souza-Filho, C. A.; Macedo-Junior, A. F.; Macêdo, A. M. S.
2014-03-01
We apply the method of multivariate hypergeometric generating functions to study cumulants of the charge counting statistics (CCS) of a ballistic chaotic cavity coupled ideally to two electron reservoirs via perfect conducting leads with an arbitrary number of scattering channels. The underlying chaotic dynamics causes each cumulant of CCS, denoted a charge transfer cumulant (CTC), to behave like a random variable, which we describe via exact calculations of its statistical moments and cumulants. Besides reproducing several known results from the literature for the first few statistical cumulants of conductance and shot-noise power, which are the first and second CTCs respectively, we obtain new exact results for the first four statistical cumulants of the third and fourth CTCs. All analytical results are supported by numerical simulations of the circular ensembles.
Teaching Statistics Using Classic Psychology Research: An Activities-Based Approach
ERIC Educational Resources Information Center
Holmes, Karen Y.; Dodd, Brett A.
2012-01-01
In this article, we discuss a collection of active learning activities derived from classic psychology studies that illustrate the appropriate use of descriptive and inferential statistics. (Contains 2 tables.)
NASA Astrophysics Data System (ADS)
Shech, Elay
2015-09-01
This paper looks at the nature of idealizations and representational structures appealed to in the context of the fractional quantum Hall effect, specifically, with respect to the emergence of anyons and fractional statistics. Drawing on an analogy with the Aharonov-Bohm effect, it is suggested that the standard approach to the effects—(what we may call) the topological approach to fractional statistics—relies essentially on problematic idealizations that need to be revised in order for the theory to be explanatory. An alternative geometric approach is outlined and endorsed. Roles for idealizations in science, as well as consequences for the debate revolving around so-called essential idealizations, are discussed.
NASA Astrophysics Data System (ADS)
Sherwood, S. C.; Fuchs, D.; Bony, S.; Jean-Louis, D.
2014-12-01
We describe two avenues for constraining the sensitivity of the climate system to external perturbations, using present-day observations. The first is physically motivated, based on recently published work showing that differences in the simulated strength of convective mixing between the lower and middle tropical troposphere explain about half of the variance in climate sensitivity estimated by 43 climate models. The apparent mechanism is that such mixing dehydrates the low-cloud layer at a rate that increases as the climate warms, and this rate of increase depends on the initial mixing strength, linking the mixing to cloud feedback. The mixing inferred from observations appears to be sufficiently strong to imply a climate sensitivity of more than 3 degrees for a doubling of carbon dioxide. This is significantly higher than the currently accepted lower bound of 1.5 degrees, thereby constraining model projections towards relatively severe future warming. However, this result would be wrong if there were an important feedback in the real world that was missing from all the models. The second approach is based on application of the fluctuation-dissipation theorem to climate models, to predict the three-dimensional equilibrium response to heating perturbations via a statistical model of the system fitted to data from a control run. We expand on previous applications of this technique for such problems by considering multivariate state vectors, showing that this improves skill and makes it possible to train skillful operators on data records of comparable length to what is available from satellite observations. We also present a new methodology for treating non-stationary processes, in particular the existence of a seasonal cycle, and show that we can obtain similar results with a realistic seasonal cycle as with an idealised non-seasonally-varying case. We focus specifically on the ability to predict how clouds in the model will respond to a forced climate change
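The second approach (fitting a linear statistical operator to a control run, then inverting it to predict the equilibrium response to a steady forcing) can be sketched on a toy two-variable system. The dynamics, noise level, and forcing below are invented for illustration, not a climate model:

```python
import numpy as np

rng = np.random.default_rng(1)
A_true = np.array([[0.9, 0.05],
                   [0.0, 0.8]])

# Generate a "control run" of the stochastic linear system x_{t+1} = A x_t + noise
x = np.zeros(2)
states = []
for _ in range(20000):
    x = A_true @ x + rng.normal(0.0, 0.1, 2)
    states.append(x.copy())
S = np.array(states)

# Fit the propagator A from lagged states by least squares
A_est = np.linalg.lstsq(S[:-1], S[1:], rcond=None)[0].T

# Equilibrium response to a steady forcing f: x* = (I - A)^(-1) f
f = np.array([1.0, 0.0])
response = np.linalg.solve(np.eye(2) - A_est, f)
```

The multivariate state vector matters here: fitting each component separately would miss the cross-coupling term that shapes the equilibrium response.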
ERIC Educational Resources Information Center
Schoenborn, Charlotte A.
This report is based on data from the 1988 National Health Interview Survey on Alcohol (NHIS-Alcohol), part of the ongoing National Health Interview Survey conducted by the National Center for Health Statistics. Interviews for the NHIS are conducted in person by staff of the United States Bureau of the Census. Information is collected on each…
Exploring Advanced Piano Students' Approaches to Sight-Reading
ERIC Educational Resources Information Center
Zhukov, Katie
2014-01-01
The ability to read music fluently is fundamental for undergraduate music study yet the training of sight-reading is often neglected. This study compares approaches to sight-reading and accompanying by students with extensive sight-reading experience to those with limited experience, and evaluates the importance of this skill to advanced pianists…
Evaluating New Approaches to Teaching of Sight-Reading Skills to Advanced Pianists
ERIC Educational Resources Information Center
Zhukov, Katie
2014-01-01
This paper evaluates three teaching approaches to improving sight-reading skills against a control in a large-scale study of advanced pianists. One hundred pianists in four equal groups participated in newly developed training programmes (accompanying, rhythm, musical style and control), with pre- and post-sight-reading tests analysed using…
Sengupta, S.K.; Boyle, J.S.
1993-05-01
Variables describing atmospheric circulation and other climate parameters derived from various GCMs and obtained from observations can be represented on a spatio-temporal grid (lattice) structure. The primary objective of this paper is to explore existing as well as some new statistical methods to analyze such data structures for the purpose of model diagnostics and intercomparison from a statistical perspective. Among the several statistical methods considered here, a new method based on common principal components appears most promising for the purpose of intercomparison of spatio-temporal data structures arising in the task of model/model and model/data intercomparison. A complete strategy for such an intercomparison is outlined. The strategy includes two steps. First, the commonality of spatial structures in two (or more) fields is captured in the common principal vectors. Second, the corresponding principal components obtained as time series are then compared on the basis of similarities in their temporal evolution.
Statistics of beam-driven waves in plasmas with ambient fluctuations: Reduced-parameter approach
Tyshetskiy, Yu.; Cairns, I. H.; Robinson, P. A.
2008-09-15
A reduced-parameter (RP) model of quasilinear wave-plasma interactions is used to analyze statistical properties of beam-driven waves in plasmas with ambient density fluctuations. The probability distribution of wave energies in such a system is shown to have a relatively narrow peak just above the thermal wave level, and a power-law tail at high energies, the latter becoming progressively more evident for increasing characteristic amplitude of the ambient fluctuations. To better understand the physics behind these statistical features of the waves, a simplified model of stochastically driven thermal waves is developed on the basis of the RP model. An approximate analytic solution for stationary statistical distribution of wave energies W is constructed, showing a good agreement with that of the original RP model. The 'peak' and 'tail' features of the wave energy distribution are shown to be a result of contributions of two groups of wave clumps: those subject to either very slow or very fast random variations of total wave growth rate (due to fluctuations of ambient plasma density), respectively. In the case of significant ambient plasma fluctuations, the overall wave energy distribution is shown to have a clear power-law tail at high energies, P(W) ∝ W^(−α), with nontrivial exponent 1 < α < 2, while for weak fluctuations it is close to the lognormal distribution predicted by pure stochastic growth theory. The model's wave statistics resemble the statistics of plasma waves observed by the Ulysses spacecraft in some interplanetary type III burst sources. This resemblance is discussed qualitatively, and it is suggested that the stochastically driven thermal waves might be a candidate for explaining the power-law tails in the observed wave statistics without invoking mechanisms such as self-organized criticality or nonlinear wave collapse.
A matrix lie group approach to statistical shape analysis of bones.
Hefny, Mohamed S; Rudan, John F; Ellis, Randy E
2014-01-01
Statistical shape models based on principal-component analysis are inadequate for studying shapes that lie on non-linear manifolds. Principal tangent components use a matrix Lie group that maps a non-linear manifold to a corresponding linear tangent space. Computations performed on the tangent space of the manifold use linear statistics to analyze non-linear shape spaces. The method was tested on bone surfaces from proximal femurs. Using only three components, the new model recovered 94% of the medical dataset, whereas a conventional method that used linear principal components needed 24 components to achieve the same reconstruction accuracy. PMID:24732500
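The tangent-space idea can be shown on the simplest matrix Lie group, SO(2): map each rotation to the tangent space with the log map, do linear statistics there, and map the result back to the group. This is a toy analogue of the approach, not the authors' bone-shape model:

```python
import numpy as np

def rot(theta):
    # 2-D rotation matrix, an element of the Lie group SO(2)
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def log_so2(R):
    # Log map for SO(2): recover the rotation angle (tangent-space coordinate)
    return np.arctan2(R[1, 0], R[0, 0])

# Average three rotations by averaging in the (linear) tangent space,
# then mapping back to the group with the exponential map (rot)
Rs = [rot(0.1), rot(0.2), rot(0.3)]
mean_R = rot(np.mean([log_so2(R) for R in Rs]))
```

Averaging the matrices entrywise would generally leave the group (the result is not a rotation); averaging in the tangent space keeps the statistics on the manifold.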
Advanced subsonic transport approach noise: The relative contribution of airframe noise
NASA Technical Reports Server (NTRS)
Willshire, William L., Jr.; Garber, Donald P.
1992-01-01
With current engine technology, airframe noise is a contributing source for large commercial aircraft on approach, but not the major contributor. With the promise of much quieter jet engines in the planned new generation of high-bypass turbofan engines, airframe noise has become a topic of interest in the advanced subsonic transport research program. The objective of this paper is to assess the contribution of airframe noise relative to the other aircraft noise sources on approach. The assessment will be made for a current-technology large commercial transport aircraft and for an envisioned advanced-technology aircraft. NASA's Aircraft Noise Prediction Program (ANOPP) will be used to make total aircraft noise predictions for these two aircraft types. Predicted noise levels and areas of noise contours will be used to determine the relative importance of the contributing approach noise sources. The actual set-up decks used to make the ANOPP runs for the two aircraft types are included in appendixes.
NASA Astrophysics Data System (ADS)
Cao, X. C.; Wu, P. T.; Wang, Y. B.; Zhao, X. N.
2014-01-01
The aim of this study is to estimate the green and blue water footprint of wheat, distinguishing irrigated and rain-fed crops, from a production perspective. The assessment focuses on China and improves upon earlier research by taking a crop-model-coupled statistics approach to estimate the water footprint of the crop in 30 provinces. We calculated the water footprint at the regional scale based on actual data collected from 442 typical irrigation districts. Crop evapotranspiration and water conveyance loss are both considered in calculating the irrigated water footprint at the regional scale. We also compared the water footprint per unit product between irrigated and rain-fed crops and analyzed the relationship between promoting yield and saving water resources. National wheat production in the year 2010 consumed about 142.5 billion cubic meters of water. The major portion of the water footprint (WF) (80.9%) comes from irrigated farmland, and the remaining 19.1% from rain-fed farmland. Green water (50.3%) and blue water (49.7%) carry almost equal shares of the total cropland WF. Green water dominates south of the Yangtze River, whereas low green water proportions characterize the provinces of northern China, especially northwest China. Approximately 38.5% of the water footprint related to the production of wheat is consumed not as crop evapotranspiration but as conveyance loss during the irrigation process. Proportions of blue water for conveyance loss (BWCL) in the arid Xinjiang, Ningxia and Neimenggu (Inner Mongolia) exceed 40% due to low irrigation efficiency. The national average water footprint of wheat per unit of crop (WFP) is 1.237 m3 kg-1 in 2010. WFP differs widely among provinces. Compared to rain-fed cultivation (with no irrigation), irrigation has promoted crop yield both provincially and nationally, by about 170% at the national level. As a result, more water resources are demanded in irrigated
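In essence, the per-unit water footprint described above is total consumed water (green and blue evapotranspiration plus conveyance loss) divided by crop yield. A minimal sketch with invented numbers chosen only to land near the reported national average:

```python
def wfp(green_et_m3, blue_et_m3, conveyance_loss_m3, yield_kg):
    # Water footprint per unit of crop (m3 per kg): all consumed water over yield
    return (green_et_m3 + blue_et_m3 + conveyance_loss_m3) / yield_kg

# Invented example: 600 + 400 m3 of evapotranspiration, 237 m3 conveyance loss,
# and 1000 kg of wheat give 1.237 m3/kg, the order of the national average WFP
example = wfp(600.0, 400.0, 237.0, 1000.0)
```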
Integrating Real-Life Data Analysis in Teaching Descriptive Statistics: A Constructivist Approach
ERIC Educational Resources Information Center
Libman, Zipora
2010-01-01
This article looks at a process of integrating real-life data investigation in a course on descriptive statistics. Referring to constructivist perspectives, this article suggests a look at the potential of inculcating alternative teaching methods that encourage students to take a more active role in their own learning and participate in the…
Piloting a Blended Approach to Teaching Statistics in a College of Education: Lessons Learned
ERIC Educational Resources Information Center
Xu, Yonghong Jade; Meyer, Katrina A.; Morgan, Dianne
2008-01-01
This study investigated the performance of graduate students enrolled in introductory statistics courses. The course in Fall 2005 was delivered in a traditional face-to-face manner and the same course in Fall 2006 was blended by using an online commercial tutoring system (ALEKS) and making attendance of several face-to-face classes optional. There…
ERIC Educational Resources Information Center
König, Johannes
2015-01-01
The study aims at developing and exploring a novel video-based assessment that captures classroom management expertise (CME) of teachers and for which statistical results are provided. CME measurement is conceptualized by using four video clips that refer to typical classroom management situations in which teachers are heavily challenged…
A Powerful Statistical Approach for Large-Scale Differential Transcription Analysis
Tan, Yuan-De; Chandler, Anita M.; Chaudhury, Arindam; Neilson, Joel R.
2015-01-01
Next generation sequencing (NGS) is increasingly being used for transcriptome-wide analysis of differential gene expression. NGS data are multidimensional count data, so most of the statistical methods that work well for microarray data analysis are not applicable to transcriptomic data. For this reason, a variety of new statistical methods based on count data of transcript reads have been proposed. But due to high cost and limited biological resources, current NGS data are still generated from only a few replicate libraries, and some existing methods do not always perform well on such count data. We developed a powerful and robust statistical method based on the beta and binomial distributions. Our method (the mBeta t-test) is specifically applicable to sequence count data from small samples. Both simulated and real transcriptomic data showed that the mBeta t-test significantly outperformed the existing top statistical methods in all 12 scenarios considered, and performed with high efficiency and high stability. The differentially expressed genes found by our method in real transcriptomic data were validated by qPCR experiments. Our method shows high power in finding truly differential expression, conservative estimation of FDR, and high stability in RNA sequence count data derived from small samples. It can also be extended to genome-wide detection of differential splicing events. PMID:25894390
The broad topic of biomarker research has an often-overlooked component: the documentation and interpretation of the surrounding chemical environment and other meta-data, especially from visualization, analytical, and statistical perspectives (Pleil et al. 2014; Sobus et al. 2011...
NOVEL STATISTICAL APPROACH TO EVALUATE SPATIAL DISTRIBUTION OF PM FROM SPECIFIC SOURCE CATEGORIES
This task addresses aspects of NRC recommendations 10A and 10B. Positive matrix factorization (PMF) is a new statistical technique for determining the daily contribution to PM mass of specific source categories (auto exhaust, smelters, suspended soil, secondary sulfate, etc.). I...
NASA Astrophysics Data System (ADS)
Liu, Zhonghua; Wang, Jingyan; Li, Yongping; Zhang, Ying; Wang, Chao
2011-06-01
The statistical distribution of image patch exemplars has been shown to be an effective approach to texture classification. In this paper, the joint distribution of pairs of patches for texture classification from single images is investigated. We developed a statistical method of examining texture that considers the spatial relationship of image patches, called the quantized patches co-occurrence matrix (QPCM). In our method, images are first split into small patches, which are then quantized to the closest patch cluster centers (textons) learned from training images. By counting how often pairs of patches with specific quantized values (texton labels) occur in a specified spatial relationship in an image, we create the QPCM for image representation. Moreover, we developed a fusion framework for texture classification, called the QPCM-SVM classifier, that fuses four QPCM functions with specified neighboring spatial relationships and three other statistical representations of image patches. The effectiveness of the proposed texture classification methodology is demonstrated in an extensive, consistent evaluation on standard benchmarks that clearly shows better performance than state-of-the-art statistical approaches using image patch exemplars.
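A minimal sketch of the quantized patch co-occurrence idea, under assumed details (non-overlapping patches, a tiny hand-rolled k-means, a single (0, 1) offset); the paper's actual patch sampling, texton learning, and SVM fusion are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(img, p):
    """Split an image into non-overlapping p x p patches, flattened as rows."""
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(0, H - p + 1, p)
                     for j in range(0, W - p + 1, p)])

def kmeans(X, k, iters=20):
    """Tiny k-means: cluster centers play the role of textons."""
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(0)
    return centers, labels

def qpcm(label_grid, k, offset):
    """Normalized co-occurrence of texton labels at a fixed spatial offset."""
    di, dj = offset
    M = np.zeros((k, k))
    H, W = label_grid.shape
    for i in range(H):
        for j in range(W):
            ii, jj = i + di, j + dj
            if 0 <= ii < H and 0 <= jj < W:
                M[label_grid[i, j], label_grid[ii, jj]] += 1
    return M / M.sum()

img = rng.random((32, 32))
patches = extract_patches(img, 4)            # an 8 x 8 grid of 4x4 patches
_, labels = kmeans(patches, k=4)
M = qpcm(labels.reshape(8, 8), k=4, offset=(0, 1))
```

The resulting matrix M is one of the QPCM features; the classifier in the paper fuses several such matrices for different offsets.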
Bruni, Aline Thaís; Velho, Jesus Antonio; Ferreira, Arthur Serra Lopes; Tasso, Maria Júlia; Ferrari, Raíssa Santos; Yoshida, Ricardo Luís; Dias, Marcos Salvador; Leite, Vitor Barbanti Pereira
2014-08-01
This study uses statistical techniques to evaluate reports on suicide scenes; it utilizes 80 reports from different locations in Brazil, randomly collected from both federal and state jurisdictions. We aimed to assess a heterogeneous group of cases in order to obtain an overall perspective of the problem. We evaluated variables regarding the characteristics of the crime scene, such as the traces that were detected (blood, instruments, and clothing), and we addressed the methodology employed by the experts. A qualitative approach using basic statistics revealed a wide variation in how the issue was addressed in the documents. We examined a quantitative approach involving an empirical equation, and we used multivariate procedures to validate the quantitative methodology proposed for this empirical equation. The methodology successfully identified the main differences in the information presented in the reports, showing that there is no standardized method of analyzing evidence. PMID:25066170
Rodrigo, C.; Rodrigo, M.; Dunne, K.; Morgan, L.
1998-07-01
Typical wetland creations rely on sizable surface water inputs, provided by stream diversion or large surface run-on, to enhance the success of establishing the wetland hydrology. However, not all landscapes provide sizable hydrological inputs from these sources. This paper presents a case history and the statistical approach adopted to model groundwater for a wetland created in a landscape position that does not allow for the use of surface water inputs.
ATWS Analysis with an Advanced Boiling Curve Approach within COBRA 3-CP
Gensler, A.; Knoll, A.; Kuehnel, K.
2007-07-01
In 2005 the German Reactor Safety Commission issued specific requirements on core coolability demonstration for PWR ATWS (anticipated transients without scram). Thereupon AREVA NP performed detailed analyses for all German PWRs. For a German KONVOI plant the results of an ATWS licensing analysis are presented. The plant dynamic behavior is calculated with NLOOP, while the hot channel analysis is performed with the thermal hydraulic computer code COBRA 3-CP. The application of the fuel rod model included in COBRA 3-CP is essential for this type of analysis. Since DNB (departure from nucleate boiling) occurs, the advanced post DNB model (advanced boiling curve approach) of COBRA 3-CP is used. The results are compared with those gained with the standard BEEST model. The analyzed ATWS case is the emergency power case 'loss of main heat sink with station service power supply unavailable'. Due to the decreasing coolant flow rate during the transient the core attains film boiling conditions. The results of the hot channel analysis strongly depend on the performance of the boiling curve model. The BEEST model is based on pool boiling conditions whereas typical PWR conditions - even in most transients - are characterized by forced flow for which the advanced boiling curve approach is particularly suitable. Compared with the BEEST model the advanced boiling curve approach in COBRA 3-CP yields earlier rewetting, i.e. a shorter period in film boiling. Consequently, the fuel rod cladding temperatures, that increase significantly due to film boiling, drop back earlier and the high temperature oxidation is significantly diminished. The Baker-Just-Correlation was used to calculate the value of equivalent cladding reacted (ECR), i.e. the reduction of cladding thickness due to corrosion throughout the transient. Based on the BEEST model the ECR value amounts to 0.4% whereas the advanced boiling curve only leads to an ECR value of 0.2%. Both values provide large margins to the 17
Harrison, Jay M; Breeze, Matthew L; Berman, Kristina H; Harrigan, George G
2013-03-01
Bayesian approaches to evaluation of crop composition data allow simpler interpretations than traditional statistical significance tests. An important advantage of Bayesian approaches is that they allow formal incorporation of previously generated data through prior distributions in the analysis steps. This manuscript describes key steps to ensure meaningful and transparent selection and application of informative prior distributions. These include (i) review of previous data in the scientific literature to form the prior distributions, (ii) proper statistical model specification and documentation, (iii) graphical analyses to evaluate the fit of the statistical model to new study data, and (iv) sensitivity analyses to evaluate the robustness of results to the choice of prior distribution. The validity of the prior distribution for any crop component is critical to acceptance of Bayesian approaches to compositional analyses and would be essential for studies conducted in a regulatory setting. Selection and validation of prior distributions for three soybean isoflavones (daidzein, genistein, and glycitein) and two oligosaccharides (raffinose and stachyose) are illustrated in a comparative assessment of data obtained on GM and non-GM soybean seed harvested from replicated field sites at multiple locations in the US during the 2009 growing season. PMID:23261475
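Step (iv) above, the sensitivity of the posterior to the choice of prior, can be illustrated with a conjugate normal-normal update; the assay values, variances, and the daidzein label are hypothetical:

```python
def normal_posterior(prior_mean, prior_var, data, lik_var):
    """Conjugate normal-normal update for the mean of a crop component."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / lik_var)
    post_mean = post_var * (prior_mean / prior_var + sum(data) / lik_var)
    return post_mean, post_var

data = [1.10, 1.25, 1.18, 1.22, 1.15]   # hypothetical daidzein assays (mg/g)

# sensitivity analysis: vague prior vs informative literature-based prior
vague = normal_posterior(0.0, 100.0, data, 0.01)
inform = normal_posterior(1.0, 0.01, data, 0.01)
```

With a vague prior the posterior mean essentially equals the sample mean (1.18 here), while the informative prior pulls it toward the literature value and tightens the posterior variance; reporting both is one way to make the prior's influence transparent.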
A Systematic Approach to Conducting a Non-statistical Meta-analysis of Research Literature.
ERIC Educational Resources Information Center
Bland, Carole J.; And Others
1995-01-01
A rigorous approach to conducting nonstatistical meta-analyses of research literature is presented and illustrated in a study of literature on primary care medical specialty choice. The approach described includes model development, literature retrieval and coding, quality rating, annotation of high-quality references, and synthesizing the subset…
Current approaches to the treatment of advanced-stage Hodgkin's disease.
Rusthoven, J J
1986-01-01
Combination chemotherapy (CT) has been the mainstay of treatment of advanced-stage Hodgkin's disease since the late 1960s. Although treatment with MOPP (nitrogen mustard, vincristine sulfate [Oncovin], procarbazine and prednisone) has resulted in long-term disease-free survival rates exceeding 50%, newer approaches have been studied to improve on this success rate and to reduce the toxic effects associated with MOPP. Prognostic factors have now been defined that identify patients who may require more aggressive treatment; they include age greater than 40 years, presence of B symptoms and more advanced (especially extranodal) disease. A small number of patients with pathological stage III disease may still be successfully treated with extensive radiotherapy (RT) alone. Among patients with advanced-stage disease, significantly better therapeutic results are being obtained with newer treatment approaches than with MOPP, particularly in patients with factors that predict a poor outcome. These newer approaches include combination CT plus RT, alternating cycles of two non-cross-resistant CT regimens and hybrid regimens, which combine agents from two different CT regimens in one cycle. The prognosis of patients who suffer relapse after combination CT remains poor, even with newer drug regimens. The newer treatment approaches may well lead to better cure rates and fewer short-term and long-term toxic effects. PMID:2427176
Multivariate statistical approach to a data set of dioxin and furan contaminations in human milk
Lindstrom, G.U.M.; Sjostrom, M.; Swanson, S.E.; Furst, P.; Kruger, C.; Meemken, H.A.; Groebel, W.
1988-05-01
The levels of chlorinated dibenzodioxins, PCDDs, and dibenzofurans, PCDFs, in human milk have been of great concern after the discovery of the toxic 2,3,7,8-substituted isomers in milk of European origin. As knowledge of environmental contamination of human breast milk increases, questions will continue to be asked about possible risks from breast feeding. Before any recommendations can be made, there must be knowledge of contaminant levels in mothers' breast milk. Researchers have measured PCB and 17 different dioxins and furans in human breast milk samples. To date the data has only been analyzed by univariate and bivariate statistical methods. However to extract as much information as possible from this data set, multivariate statistical methods must be used. Here the authors present a multivariate analysis where the relationships between the polychlorinated compounds and the personalia of the mothers have been studied. For the data analysis partial least squares (PLS) modelling has been used.
A statistical approach to the extraction of the seismic propagating wavelet
Angeleri, G.P.
1983-10-01
A model of the seismic trace is generally given as a convolution between the propagating wavelet and the reflectivity series of the earth, and it is normally assumed that white noise is added to the trace. Knowledge of the propagating wavelet is the key to estimating the reflectivity series from the seismic trace. In this paper a statistical method of wavelet extraction from several seismic traces, assuming the wavelet to be unique, is discussed. This method allows one to obtain the propagating wavelet without the classical limiting assumptions on the phase spectrum. Furthermore, a phase unwrapping method is suggested and some statistical properties of the phase spectrum of the reflectivity traces are examined.
Statistics of voltage drop in distribution circuits: a dynamic programming approach
Turitsyn, Konstantin S
2010-01-01
We analyze a power distribution line with high penetration of distributed generation and strong variations of power consumption and generation levels. In the presence of uncertainty the statistical description of the system is required to assess the risks of power outages. In order to find the probability of exceeding the constraints for voltage levels we introduce the probability distribution of maximal voltage drop and propose an algorithm for finding this distribution. The algorithm is based on the assumption of random but statistically independent distribution of loads on buses. Linear complexity in the number of buses is achieved through the dynamic programming technique. We illustrate the performance of the algorithm by analyzing a simple 4-bus system with high variations of load levels.
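The dynamic-programming idea can be sketched by tracking the joint distribution of (cumulative drop, running maximum) bus by bus; the two-point increment distribution below is an invented discretization, and for a bounded set of drop levels the pass over buses stays linear in their number:

```python
from collections import defaultdict

def max_drop_distribution(n_buses, increments):
    """DP over buses on the joint pmf of (cumulative drop S, running max M),
    assuming independent per-segment drop increments given as (value, prob)."""
    state = {(0.0, 0.0): 1.0}
    for _ in range(n_buses):
        nxt = defaultdict(float)
        for (s, m), p in state.items():
            for v, pv in increments:
                s2 = s + v
                nxt[(s2, max(m, s2))] += p * pv
        state = nxt
    dist = defaultdict(float)           # marginalize out S, keep the maximum M
    for (_, m), p in state.items():
        dist[m] += p
    return dict(dist)

# per-segment drop: +1 unit with prob 0.7 (net load), -1 with prob 0.3 (net generation)
dist = max_drop_distribution(6, [(1.0, 0.7), (-1.0, 0.3)])
```

The probability of exceeding a voltage constraint is then the tail mass of `dist` above the allowed drop. With only net loads (no generation) the maximum is attained at the feeder end, and the distribution collapses to the total drop.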
NASA Astrophysics Data System (ADS)
Liu, Wenjia; Schmittmann, Beate; Zia, R. K. P.
2012-02-01
Network studies have played a central role in understanding many systems in nature - e.g., physical, biological, and social. So far, much of the focus has been on the statistics of networks in isolation, yet many networks in the world are coupled to each other. Recently, we considered this issue in the context of two interacting social networks. In particular, we studied networks with two different preferred degrees, modeling, say, introverts vs. extroverts, with a variety of ``rules for engagement.'' As a first step towards an analytically accessible theory, we restrict our attention to an ``extreme scenario'': the introverts prefer zero contacts while the extroverts like to befriend everyone in the society. In this ``maximally frustrated'' system, the degree distributions, as well as the statistics of cross-links (between the two groups), can depend sensitively on how a node (individual) creates/breaks its connections. The simulation results can be reasonably well understood in terms of an approximate theory.
Statistical shape analysis using 3D Poisson equation-A quantitatively validated approach.
Gao, Yi; Bouix, Sylvain
2016-05-01
Statistical shape analysis has been an important area of research with applications in biology, anatomy, neuroscience, agriculture, paleontology, etc. Unfortunately, the proposed methods are rarely quantitatively evaluated, and as shown in recent studies, when they are evaluated, significant discrepancies exist in their outputs. In this work, we concentrate on the problem of finding the consistent location of deformation between two populations of shapes. We propose a new shape analysis algorithm along with a framework to perform a quantitative evaluation of its performance. Specifically, the algorithm constructs a Signed Poisson Map (SPoM) by solving two Poisson equations on the volumetric shapes of arbitrary topology, and statistical analysis is then carried out on the SPoMs. The method is quantitatively evaluated on synthetic shapes and applied on real shape data sets in brain structures. PMID:26874288
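The core building block, a Poisson solve on a volumetric shape, can be sketched in 2-D with a Jacobi iteration on a binary mask. The paper works with 3-D shapes and builds signed maps from two Poisson solves; this toy shows a single solve on a disk:

```python
import numpy as np

def solve_poisson(inside, rhs=1.0, iters=4000):
    """Jacobi iteration for -laplacian(u) = rhs on a 2-D mask, u = 0 outside."""
    u = np.zeros(inside.shape)
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(inside, avg + 0.25 * rhs, 0.0)   # enforce zero boundary
    return u

n = 41
yy, xx = np.mgrid[:n, :n]
disk = (xx - n // 2) ** 2 + (yy - n // 2) ** 2 <= 15 ** 2
u = solve_poisson(disk)
```

The solution grows toward the interior (for a disk of radius R the continuum value at the center is R^2/4), so level sets of u encode depth-within-shape, which is the kind of quantity such shape maps are built from.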
NASA Astrophysics Data System (ADS)
Tasaki, Hal
2016-04-01
Based on quantum statistical mechanics and microscopic quantum dynamics, we prove Planck's and Kelvin's principles for macroscopic systems in a general and realistic setting. We consider a hybrid quantum system that consists of the thermodynamic system, which is initially in thermal equilibrium, and the "apparatus" which operates on the former, and assume that the whole system evolves autonomously. This provides a satisfactory derivation of the second law for macroscopic systems.
Changes in Wave Climate from a Multi-model Global Statistical projection approach.
NASA Astrophysics Data System (ADS)
Camus, Paula; Menendez, Melisa; Perez, Jorge; Losada, Inigo
2016-04-01
Despite their outstanding relevance to coastal impacts related to climate change (i.e. inundation, global beach erosion), ensemble products of global wave climate projections from the new Representative Concentration Pathways (RCPs) described by the IPCC are rather limited. This work shows a global study of changes in wave climate under several scenarios in which a new statistical method is applied. The method is based on the statistical relationship between meteorological conditions over the geographical area of wave generation (predictor) and the resulting wave characteristics at a particular location (predictand). The atmospheric input variables used in the statistical method are sea level pressure anomalies and gradients over the spatial and time scales characterized by ESTELA maps (Perez et al. 2014). ESTELA characterizes the area of wave influence of any particular ocean location worldwide, including contour lines of wave energy and isochrones of travel time in that area. Principal component analysis is then applied to the sea level pressure information of the ESTELA region in order to define a multi-regression statistical model based on several data mining techniques. Once the multi-regression technique is defined and validated on historical information from atmospheric reanalysis (predictor) and wave hindcast (predictand), the method is applied to more than 35 Global Climate Models from CMIP5 to estimate changes in several sea-state parameters (e.g. significant wave height, peak period) at seasonal and annual scales during the last decades of the 21st century. The uncertainty of the estimated wave climate changes in the ensemble is also provided and discussed.
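The predictor-to-predictand regression can be sketched on synthetic data: PCA (via SVD) of sea-level-pressure-like anomaly fields, followed by multiple linear regression of a wave parameter on the leading principal components. The field sizes, modes, and noise levels are invented, and none of the ESTELA weighting is included:

```python
import numpy as np

rng = np.random.default_rng(1)

t, g = 500, 40
modes = rng.normal(size=(2, g))                      # two large-scale pressure patterns
amps = rng.normal(size=(t, 2))                       # their daily amplitudes
slp = amps @ modes + 0.1 * rng.normal(size=(t, g))   # SLP-like anomaly fields (predictor)
hs = 2.0 + 0.8 * amps[:, 0] - 0.5 * amps[:, 1] + 0.05 * rng.normal(size=t)  # predictand

X = slp - slp.mean(0)                                # PCA of the predictor via SVD
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pcs = U[:, :2] * S[:2]

A = np.column_stack([np.ones(t), pcs])               # regression on leading PCs
coef, *_ = np.linalg.lstsq(A, hs, rcond=None)
r = float(np.corrcoef(A @ coef, hs)[0, 1])
```

In the real application the fitted regression is fed GCM-simulated pressure fields instead of the reanalysis to project future wave statistics.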
NASA Astrophysics Data System (ADS)
Cañón, Julio; Domínguez, Francina; Valdés, Juan B.
2011-02-01
A statistical method is introduced to downscale hydroclimatic variables while incorporating the variability associated with quasi-periodic global climate signals. The method extracts statistical information on distributed variables from historical time series available at high resolution and uses Multichannel Singular Spectrum Analysis (MSSA) to reconstruct, on a cell-by-cell basis, specific frequency signatures associated with both the variable at a coarse scale and the global climate signals. Historical information is divided into two sets: a reconstruction set to identify the dominant modes of variability of the series for each cell, and a validation set to compare the downscaling against the observed patterns. After validation, coarse projections from Global Climate Models (GCMs) are disaggregated to higher spatial resolutions by using an iterative gap-filling MSSA algorithm to downscale the projected values of the variable, using the distributed series statistics and the MSSA analysis. The method is data adaptive and useful for downscaling short-term forecasts as well as long-term climate projections. It is applied to the downscaling of temperature and precipitation from observed records and GCM projections over a region in the US Southwest, taking into account the seasonal variability associated with ENSO.
A statistical approach to estimate O3 uptake of ponderosa pine in a mediterranean climate.
Grulke, N E; Preisler, H K; Fan, C C; Retzlaff, W A
2002-01-01
In highly polluted sites, stomatal behavior is sluggish with respect to light, vapor pressure deficit, and internal CO2 concentration (Ci), and is poorly described by existing models. Statistical models were developed to estimate stomatal conductance (gs) of 40-year-old ponderosa pine at three sites differing in pollutant exposure, for the purpose of calculating O3 uptake. Gs was estimated using Julian day, hour of day, pre-dawn xylem potential, and photosynthetic photon flux density (PPFD). The median difference between estimated and observed field gs did not exceed 10 mmol H2O m(-2) s(-1), and estimated gs fell within 95% confidence intervals. O3 uptake was calculated from hourly estimated gs, hourly O3 concentration, and a constant to correct for the difference in diffusivity between water vapor and O3. The simulation model TREGRO was also used to calculate the cumulative O3 uptake at all three sites. O3 uptake estimated by the statistical model was higher than that simulated by TREGRO because gas exchange rates were proportionally higher. O3 exposure and uptake were significantly correlated (r2>0.92), because O3 exposure and gs were highly correlated in both the statistical and simulation models. PMID:12152824
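The uptake calculation itself is a small arithmetic step: hourly flux is the product of conductance, O3 mole fraction, and a water-vapor-to-O3 diffusivity correction (0.613 is a commonly used value for that ratio; the hourly gs and O3 series here are invented):

```python
def o3_flux(gs_mol, o3_ppb, ratio=0.613):
    """Stomatal O3 flux (nmol m-2 s-1) from conductance to water vapour
    (mol H2O m-2 s-1) and O3 mole fraction (ppb = nmol mol-1)."""
    return ratio * gs_mol * o3_ppb

# hypothetical hourly series
gs = [0.05, 0.10, 0.12, 0.08]    # mol m-2 s-1
o3 = [40.0, 55.0, 70.0, 60.0]    # ppb

hourly = [o3_flux(g, c) for g, c in zip(gs, o3)]          # nmol m-2 s-1
cumulative_umol = sum(hourly) * 3600 / 1000               # summed over the hours
```

Summing such hourly fluxes over a season gives the cumulative uptake compared across sites in the study.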
DEVELOPMENT OF AN ADVANCED APPROACH FOR NEXT-GENERATION INTEGRATED RESERVOIR CHARACTERIZATION
Scott R. Reeves
2005-04-01
Accurate, high-resolution, three-dimensional (3D) reservoir characterization can provide substantial benefits for effective oilfield management. By doing so, the predictive reliability of reservoir flow models, which are routinely used as the basis for investment decisions involving hundreds of millions of dollars and designed to recover millions of barrels of oil, can be significantly improved. Even a small improvement in incremental recovery for high-value assets can result in important contributions to bottom-line profitability. Today's standard practice for developing a 3D reservoir description is to use seismic inversion techniques. These techniques make use of geostatistics and other stochastic methods to solve the inverse problem, i.e., to iteratively construct a likely geologic model and then upscale and compare its acoustic response to that actually observed in the field. This method has several inherent flaws: (1) the resulting models are highly non-unique, since multiple equiprobable realizations are produced; (2) the results define a distribution of possible outcomes, so the best they can do is quantify the uncertainty inherent in the modeling process; (3) each realization must be run through a flow simulator and history matched to assess its appropriateness; and therefore (4) the method is labor intensive and requires significant time to complete a field study, so it is applied to only a small percentage of oil and gas producing assets. A new approach to achieve this objective was first examined in a Department of Energy (DOE) study performed by Advanced Resources International (ARI) in 2000/2001. The goal of that study was to evaluate whether robust relationships between data at vastly different scales of measurement could be established using virtual intelligence (VI) methods. The proposed workflow required that three specific relationships be established through use of artificial neural networks (ANN's): core-to-log, log
Karim, Mohammad Ehsanul; Gustafson, Paul; Petkau, John; Tremlett, Helen
2016-08-15
In time-to-event analyses of observational studies of drug effectiveness, incorrect handling of the period between cohort entry and first treatment exposure during follow-up may result in immortal time bias. This bias can be eliminated by acknowledging a change in treatment exposure status with time-dependent analyses, such as fitting a time-dependent Cox model. The prescription time-distribution matching (PTDM) method has been proposed as a simpler approach for controlling immortal time bias. Using simulation studies and theoretical quantification of bias, we compared the performance of the PTDM approach with that of the time-dependent Cox model in the presence of immortal time. Both assessments revealed that the PTDM approach did not adequately address immortal time bias. Based on our simulation results, another recently proposed observational data analysis technique, the sequential Cox approach, was found to be more useful than the PTDM approach (Cox: bias = -0.002, mean squared error = 0.025; PTDM: bias = -1.411, mean squared error = 2.011). We applied these approaches to investigate the association of β-interferon treatment with delaying disability progression in a multiple sclerosis cohort in British Columbia, Canada (Long-Term Benefits and Adverse Effects of Beta-Interferon for Multiple Sclerosis (BeAMS) Study, 1995-2008). PMID:27455963
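The bias can be demonstrated with a small null simulation in which treatment has no effect: a naive "ever-treated" split credits pre-treatment (immortal) person-time to the treated group and yields a spuriously protective rate ratio, while splitting person-time at treatment start does not. Constant-hazard incidence rates stand in for the Cox models here, and all rates and sample sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n, rate, cens = 200_000, 0.2, 5.0
event_t = rng.exponential(1 / rate, n)      # outcome unaffected by treatment (null)
treat_t = rng.exponential(1 / 0.5, n)       # waiting time to first prescription
follow = np.minimum(event_t, cens)
event = event_t < cens
ever = treat_t < follow                     # treated at some point during follow-up

# naive "ever-treated" analysis: immortal time credited to the treated group
naive_rr = (event[ever].sum() / follow[ever].sum()) / (
    event[~ever].sum() / follow[~ever].sum())

# time-dependent analysis: person-time split at treatment start
pt_treated = (follow[ever] - treat_t[ever]).sum()
pt_untreated = treat_t[ever].sum() + follow[~ever].sum()
ev_treated = event[ever].sum()              # their events all occur after treatment start
ev_untreated = event[~ever].sum()
td_rr = (ev_treated / pt_treated) / (ev_untreated / pt_untreated)
```

Under this null the time-dependent rate ratio hovers around 1 while the naive one is well below it, which is the qualitative pattern the simulation studies in the paper quantify.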
A statistical approach to develop a detailed soot growth model using PAH characteristics
Raj, Abhijeet; Celnik, Matthew; Shirley, Raphael; Sander, Markus; Patterson, Robert; West, Richard; Kraft, Markus
2009-04-15
A detailed PAH growth model is developed, which is solved using a kinetic Monte Carlo algorithm. The model describes the structure and growth of planar PAH molecules, and is referred to as the kinetic Monte Carlo-aromatic site (KMC-ARS) model. A detailed PAH growth mechanism based on reactions at radical sites available in the literature, and additional reactions obtained from quantum chemistry calculations are used to model the PAH growth processes. New rates for the reactions involved in the cyclodehydrogenation process for the formation of 6-member rings on PAHs are calculated in this work based on density functional theory simulations. The KMC-ARS model is validated by comparing experimentally observed ensembles on PAHs with the computed ensembles for a C2H2 and a C6H6 flame at different heights above the burner. The motivation for this model is the development of a detailed soot particle population balance model which describes the evolution of an ensemble of soot particles based on their PAH structure. However, at present incorporating such a detailed model into a population balance is computationally unfeasible. Therefore, a simpler model referred to as the site-counting model has been developed, which replaces the structural information of the PAH molecules by their functional groups augmented with statistical closure expressions. This closure is obtained from the KMC-ARS model, which is used to develop correlations and statistics in different flame environments which describe such PAH structural information. These correlations and statistics are implemented in the site-counting model, and results from the site-counting model and the KMC-ARS model are in good agreement. Additionally the effect of steric hindrance in large PAH structures is investigated and correlations for sites unavailable for reaction are presented.
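The kinetic Monte Carlo machinery that drives such a model can be sketched with a generic Gillespie loop on a toy two-state site population (A = B interconversion); the real KMC-ARS chemistry, rates, and site types are not reproduced here:

```python
import random

random.seed(1)

def gillespie(n_a, n_b, k1, k2, n_steps):
    """Generic kinetic Monte Carlo loop: A -> B (rate k1*nA), B -> A (rate k2*nB).
    Returns the average fraction of A sites after a burn-in period."""
    t, frac_sum, samples = 0.0, 0.0, 0
    for step in range(n_steps):
        r_ab, r_ba = k1 * n_a, k2 * n_b
        total = r_ab + r_ba
        t += random.expovariate(total)        # waiting time to the next event
        if random.random() < r_ab / total:    # pick the event proportional to its rate
            n_a, n_b = n_a - 1, n_b + 1
        else:
            n_a, n_b = n_a + 1, n_b - 1
        if step >= n_steps // 2:              # sample after burn-in
            frac_sum += n_a / (n_a + n_b)
            samples += 1
    return frac_sum / samples

frac_a = gillespie(1000, 0, k1=1.0, k2=2.0, n_steps=60_000)
```

At steady state the A fraction settles near k2/(k1+k2) = 2/3, the detailed-balance value; the KMC-ARS model runs the same event loop over chemically detailed site reactions.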
Power-law distributions in economics: a nonextensive statistical approach (Invited Paper)
NASA Astrophysics Data System (ADS)
Duarte Queiros, Silvio M.; Anteneodo, Celia; Tsallis, Constantino
2005-05-01
The cornerstone of Boltzmann-Gibbs (BG) statistical mechanics is the Boltzmann-Gibbs-Jaynes-Shannon entropy S_BG ≡ -k ∫ dx f(x) ln f(x), where k is a positive constant and f(x) a probability density function. This theory has exhibited, over more than one century, great success in the treatment of systems where short spatio-temporal correlations dominate. There are, however, anomalous natural and artificial systems that violate the basic requirements for its applicability. Different physical entropies, other than the standard one, appear to be necessary in order to satisfactorily deal with such anomalies. One such entropy is S_q ≡ k (1 - ∫ dx [f(x)]^q)/(q - 1) (with S_1 = S_BG), where the entropic index q is a real parameter. It has been proposed as the basis for a generalization, referred to as nonextensive statistical mechanics, of the BG theory. S_q shares with S_BG four remarkable properties, namely concavity (for all q > 0), Lesche-stability (for all q > 0), finiteness of the entropy production per unit time, and additivity (for at least a compact support of q including q = 1). The simultaneous validity of these properties suggests that S_q is appropriate for bridging, at a macroscopic level, with classical thermodynamics itself. In the same natural way that exponential probability functions arise in the standard context, power-law tailed distributions, even with exponents outside the Lévy range, arise in the nonextensive framework. In this review, we intend to show that many processes of interest in economics, for which fat-tailed probability functions are empirically observed, can be described in terms of the statistical mechanisms that underlie the nonextensive theory.
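In standard notation, the two entropies and the q → 1 limit read:

```latex
S_{BG} \equiv -k \int \! dx \, f(x)\,\ln f(x),
\qquad
S_q \equiv k\,\frac{1 - \int \! dx\,[f(x)]^{q}}{q-1},
\qquad
S_1 = S_{BG}.
% The limit: [f(x)]^{q} = f(x)\, e^{(q-1)\ln f(x)}
%          \approx f(x) + (q-1)\, f(x)\ln f(x) \quad (q \to 1),
% so (1 - \int dx\, [f(x)]^q)/(q-1) \to -\int dx\, f(x)\ln f(x),
% recovering S_q \to S_{BG}.
```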
Uniting Statistical and Individual-Based Approaches for Animal Movement Modelling
Latombe, Guillaume; Parrott, Lael; Basille, Mathieu; Fortin, Daniel
2014-01-01
The dynamic nature of their internal states and the environment directly shapes animals' spatial behaviours and gives rise to emergent properties at broader scales in natural systems. However, integrating these dynamic features into habitat selection studies remains challenging, because field work to access internal states is practically impossible and current statistical models cannot produce dynamic outputs. To address these issues, we developed a robust method that combines statistical and individual-based modelling (IBM). Using a statistical technique for forward modelling of the IBM is faster to parameterize than a pure inverse modelling technique and allows for robust selection of parameters. Using GPS locations from caribou monitored in Québec, caribou movements were modelled based on generative mechanisms accounting for dynamic variables at a low level of emergence. These variables were accessed by replicating real individuals' movements in parallel sub-models, and movement parameters were then empirically parameterized using Step Selection Functions. The final IBM was validated using both k-fold cross-validation and emergent-pattern validation, and was tested for two scenarios with varying hardwood encroachment. Our results highlighted a functional response in habitat selection, which suggests that our method captured the complexity of the natural system and adequately provided projections of future possible states of the system in response to different management plans. This is especially relevant for testing the long-term impact of scenarios corresponding to environmental configurations that have yet to be observed in real systems. PMID:24979047
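A step-selection-style movement rule of the general kind used to drive such an individual-based model can be sketched as a weighted choice among candidate steps; the quadratic habitat surface, step-length distribution, and selection coefficient below are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def habitat(xy):
    """Toy habitat quality surface, peaking at the origin."""
    return -0.05 * (xy[..., 0] ** 2 + xy[..., 1] ** 2)

def ssf_walk(start, beta=1.0, n_steps=200, k=20):
    """At each step, draw k candidate steps and pick one with probability
    proportional to exp(beta * habitat), as in a step selection function."""
    pos = np.array(start, float)
    for _ in range(n_steps):
        ang = rng.uniform(0, 2 * np.pi, k)
        step = rng.exponential(1.0, k)
        cand = pos + np.stack([step * np.cos(ang), step * np.sin(ang)], 1)
        w = np.exp(beta * habitat(cand))
        pos = cand[rng.choice(k, p=w / w.sum())]
    return pos

finals = np.array([ssf_walk((10.0, 10.0)) for _ in range(10)])
mean_radius = float(np.hypot(finals[:, 0], finals[:, 1]).mean())
```

Walkers started far from the habitat peak drift toward it and settle nearby, the kind of emergent pattern such IBMs are validated against.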
Magnitude-frequency relations for earthquakes using a statistical mechanical approach
Rundle, J.B.
1993-12-10
At very small magnitudes, observations indicate that the frequency of occurrence of earthquakes is significantly smaller than the frequency predicted by simple Gutenberg-Richter statistics. Previously, it has been suggested that the dearth of small events is related to a rapid rise in scattering and attenuation at high frequencies and the consequent inability to detect these events with standard arrays of seismometers. However, several recent studies have suggested that instrumentation cannot account for the entire effect and that the decline in frequency may be real. Working from this hypothesis, we derive a magnitude-frequency relation for very small earthquakes that is based upon the postulate that the system of moving plates can be treated as a system not too far removed from equilibrium. As a result, it is assumed that in the steady state, the probability P[E] that a segment of fault has a free energy E is proportional to the exponential of the free energy, P ∝ exp[-E / E_N]. In equilibrium statistical mechanics this distribution is called the Boltzmann distribution. The probability weight E_N is the space-time steady state average of the free energy of the segment. Earthquakes are then treated as fluctuations in the free energy of the segments. With these assumptions, it is shown that magnitude-frequency relations can be obtained. For example, previous results obtained by the author can be recovered under the same assumptions as before, for intermediate and large events, the distinction being whether the event is of a linear dimension sufficient to extend the entire width of the brittle zone. Additionally, a magnitude-frequency relation is obtained that is in satisfactory agreement with the data at very small magnitudes. At these magnitudes, departures from frequencies predicted by Gutenberg-Richter statistics are found using a model that accounts for the finite thickness of the inelastic part of the fault zone.
Firing statistics and correlations in spiking neurons: A level-crossing approach
NASA Astrophysics Data System (ADS)
Badel, Laurent
2011-10-01
We present a time-dependent level-crossing theory for linear dynamical systems perturbed by colored Gaussian noise. We apply these results to approximate the firing statistics of conductance-based integrate-and-fire neurons receiving excitatory and inhibitory Poissonian inputs. Analytical expressions are obtained for three key quantities characterizing the neuronal response to time-varying inputs: the mean firing rate, the linear response to sinusoidally modulated inputs, and the pairwise spike correlation for neurons receiving correlated inputs. The theory yields tractable results that are shown to accurately match numerical simulations and provides useful tools for the analysis of interconnected neuronal populations.
Large-scale statistics of the Kuramoto-Sivashinsky equation: A wavelet-based approach
Elezgaray, J.; Berkooz, G.; Holmes, P.
1996-07-01
We show that the statistical properties of the large scales of the Kuramoto-Sivashinsky equation in the extended system limit can be understood in terms of the dynamical behavior of the same equation in a small finite domain. Our method relies on the description of the solutions of this equation in terms of wavelets, and allows us to model the energy transfer between small and large scales. We show that the effective equation obtained in this way can be consistently approximated by a forced Burgers equation only for scales far from the cutoff between small and large wavelengths. © 1996 The American Physical Society.
Ness, Robert O; Sachs, Karen; Vitek, Olga
2016-03-01
Causal inference, the task of uncovering regulatory relationships between components of biomolecular pathways and networks, is a primary goal of many high-throughput investigations. Statistical associations between observed protein concentrations can suggest an enticing number of hypotheses regarding the underlying causal interactions, but when do such associations reflect the underlying causal biomolecular mechanisms? The goal of this perspective is to provide suggestions for causal inference in large-scale experiments, which utilize high-throughput technologies such as mass-spectrometry-based proteomics. We describe in nontechnical terms the pitfalls of inference in large data sets and suggest methods to overcome these pitfalls and reliably find regulatory associations. PMID:26731284
Advances in Proteomics Data Analysis and Display Using an Accurate Mass and Time Tag Approach
Zimmer, Jennifer S.D.; Monroe, Matthew E.; Qian, Wei-Jun; Smith, Richard D.
2007-01-01
Proteomics has recently demonstrated utility in understanding cellular processes on the molecular level as a component of systems biology approaches and for identifying potential biomarkers of various disease states. The large amount of data generated by utilizing high efficiency (e.g., chromatographic) separations coupled to high mass accuracy mass spectrometry for high-throughput proteomics analyses presents challenges related to data processing, analysis, and display. This review focuses on recent advances in nanoLC-FTICR-MS-based proteomics approaches and the accompanying data processing tools that have been developed to display and interpret the large volumes of data being produced. PMID:16429408
Swiecicki, Paul L; Malloy, Kelly M; Worden, Francis P
2016-01-01
Oropharyngeal cancer accounts for approximately 2.8% of newly diagnosed cancer cases. Although classically a tobacco-related disease, most cases today are related to infection with human papilloma virus (HPV) and present as locally advanced tumors. HPV-related tumors have been recognized as a molecularly distinct entity with higher response rates to therapy, lower rates of relapse, and improved overall survival. Treatment of oropharyngeal cancer entails a multidisciplinary approach with concomitant chemoradiation. The role of induction chemotherapy in locally advanced tumors continues to be controversial; however, large studies have demonstrated no difference in survival or time to treatment failure. Surgical approaches may be employed for low-volume oropharyngeal cancers, and with the development of new endoscopic tools, more tumors can be resected via an endoscopic approach. Given advances in the understanding of HPV-related oropharyngeal cancer, ongoing research is looking at ways to minimize toxicities via de-intensification of therapy. Unfortunately, some patients develop recurrent or metastatic disease. Novel therapeutics, including immunotherapeutics, are currently being investigated for this patient population. This review discusses the current understanding of the pathogenesis and treatment of oropharyngeal cancer. We also discuss emerging areas of research as they pertain to de-intensification of therapy as well as novel therapeutics for the management of metastatic disease. PMID:26862488
Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.
Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona
2016-01-01
Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually the robustness of such approaches is not addressed or investigated. The goal of this paper is to show how to robustify the floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach would allow a building-independent estimation and a lower computing power at the mobile side. Four robustified algorithms are to be presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We will show that robustification can indeed increase the performance of the RSS-based floor detection algorithms. PMID:27258279
Multivariate statistical approach to estimate mixing proportions for unknown end members
Valder, Joshua F.; Long, Andrew J.; Davis, Arden D.; Kenner, Scott J.
2012-01-01
A multivariate statistical method is presented, which includes principal components analysis (PCA) and an end-member mixing model to estimate unknown end-member hydrochemical compositions and the relative mixing proportions of those end members in mixed waters. PCA, together with the Hotelling T2 statistic and a conceptual model of groundwater flow and mixing, was used in selecting samples that best approximate end members, which then were used as initial values in optimization of the end-member mixing model. This method was tested on controlled datasets (i.e., true values of estimates were known a priori) and found effective in estimating these end members and mixing proportions. The controlled datasets included synthetically generated hydrochemical data, synthetically generated mixing proportions, and laboratory analyses of sample mixtures, which were used in an evaluation of the effectiveness of this method for potential use in actual hydrological settings. For three different scenarios tested, correlation coefficients (R2) for linear regression between the estimated and known values ranged from 0.968 to 0.993 for mixing proportions and from 0.839 to 0.998 for end-member compositions. The method also was applied to field data from a study of end-member mixing in groundwater as a field example and partial method validation.
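The optimization step in this kind of end-member mixing analysis can be sketched as a bounded least-squares problem: given candidate end-member compositions, find the proportions that best reproduce a mixed sample. The sketch below is illustrative only, with hypothetical end-member compositions and proportions; it is not the authors' actual PCA-plus-optimization pipeline.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical end-member compositions: rows = end members, cols = chemical species
E = np.array([[100.0, 10.0,  1.0],   # end member A
              [ 20.0, 50.0,  5.0],   # end member B
              [  5.0,  5.0, 40.0]])  # end member C

true_f = np.array([0.6, 0.3, 0.1])   # assumed mixing proportions
c = true_f @ E                        # composition of the resulting mixed sample

# Recover the proportions by solving E^T f = c with 0 <= f <= 1,
# then renormalizing so the proportions sum to one
res = lsq_linear(E.T, c, bounds=(0.0, 1.0))
f_hat = res.x / res.x.sum()
```

With full-rank end members the proportions are recovered exactly; with noisy field data the same bounded fit returns the closest feasible mixture.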
F.R. Carrillo-Pedroza; A. Davalos Sanchez; M. Soria-Aguilar; E.T. Pecina Trevino
2009-07-15
The removal of pyritic sulfur from a Mexican sub-bituminous coal in nitric, sulfuric, and hydrochloric acid solutions was investigated. The effect of the type and concentration of acid, in the presence of hydrogen peroxide and ozone as oxidants, in a temperature range of 20–60 °C, was studied. The relevant factors in pyrite dissolution were determined by means of statistical analysis of variance and optimized by the response surface method. Kinetic models were also evaluated, showing that the dissolution of pyritic sulfur follows the shrinking-core kinetic model, with diffusion through the solid product of the reaction as the controlling stage. The results of the statistical analysis indicate that the use of ozone as an oxidant improves pyrite dissolution because, at 0.25 M HNO₃ or H₂SO₄ at 20 °C and 0.33 g/h O₃, the dissolution obtained is similar to that at 1 M H₂O₂ and 1 M HNO₃ or H₂SO₄ at 40 °C. 42 refs., 9 figs., 3 tabs.
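The diffusion-controlled shrinking-core model referred to above has the standard closed form 1 − 3(1 − X)^(2/3) + 2(1 − X) = k·t, where X is the fraction dissolved. As a rough illustration (synthetic data and an assumed rate constant, not the paper's measurements), the rate constant can be fitted as the zero-intercept slope of g(X) against time:

```python
import numpy as np
from scipy.optimize import brentq

def g_diffusion(X):
    # Shrinking-core model, diffusion through the product layer controlling
    return 1.0 - 3.0 * (1.0 - X) ** (2.0 / 3.0) + 2.0 * (1.0 - X)

# Synthetic conversion-time data generated from an assumed k = 0.002 1/min
k_true = 0.002
t = np.array([30.0, 60.0, 120.0, 240.0, 360.0])
X = np.array([brentq(lambda x, ti=ti: g_diffusion(x) - k_true * ti, 0.0, 1.0 - 1e-12)
              for ti in t])

# Fit k as the slope through the origin of g(X) versus t
k_fit = np.sum(g_diffusion(X) * t) / np.sum(t ** 2)
```

A straight g(X)-versus-t line through the origin supports product-layer diffusion control; systematic curvature would point to a different rate-limiting step.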
A statistical and experimental approach for assessing the preservation of plant lipids in soil
NASA Astrophysics Data System (ADS)
Mueller, K. E.; Eissenstat, D. M.; Oleksyn, J.; Freeman, K. H.
2011-12-01
Plant-derived lipids contribute to stable soil organic matter, but further interpretations of their abundance in soils are limited because the factors that control lipid preservation are poorly understood. Using data from a long-term field experiment and simple statistical models, we provide novel constraints on several predictors of the concentration of hydrolyzable lipids in forest mineral soils. Focal lipids included common monomers of cutin, suberin, and plant waxes present in tree leaves and roots. Soil lipid concentrations were most strongly influenced by the concentrations of lipids in leaves and roots of the overlying trees, but were also affected by the type of lipid (e.g. alcohols vs. acids), lipid chain length, and whether lipids originated in leaves or roots. Collectively, these factors explained ~80% of the variation in soil lipid concentrations beneath 11 different tree species. In order to use soil lipid analyses to test and improve conceptual models of soil organic matter stabilization, additional studies that provide experimental and quantitative (i.e. statistical) constraints on plant lipid preservation are needed.
A statistical approach to the brittle fracture of a multi-phase solid
NASA Technical Reports Server (NTRS)
Liu, W. K.; Lua, Y. I.; Belytschko, T.
1991-01-01
A stochastic damage model is proposed to quantify the inherent statistical distribution of the fracture toughness of a brittle, multi-phase solid. The model, based on the macrocrack-microcrack interaction, incorporates uncertainties in locations and orientations of microcracks. Due to the high concentration of microcracks near the macro-tip, a higher order analysis based on traction boundary integral equations is formulated first for an arbitrary array of cracks. The effects of uncertainties in locations and orientations of microcracks at a macro-tip are analyzed quantitatively by using the boundary integral equations method in conjunction with computer simulation of the random microcrack array. The short range interactions resulting from the surrounding microcracks closest to the main crack tip are investigated. The effects of the microcrack density parameter are also explored in the present study. The validity of the present model is demonstrated by comparing its statistical output with the Neville distribution function, which gives correct fits to sets of experimental data from multi-phase solids.
Statistical approach of chemistry and topography effect on human osteoblast adhesion.
Giljean, S; Ponche, A; Bigerelle, M; Anselme, K
2010-09-15
Our objective in this work was to determine statistically the relative influence of the surface topography and surface chemistry of metallic substrates on the long-term adhesion of human bone cells, quantified by the adhesion power (AP). Pure titanium, titanium alloy, and stainless steel substrates were processed by electro-erosion, sandblasting, or polishing, giving various morphologies and amplitudes. The surface chemistry was characterized by X-ray photoelectron spectroscopy (XPS), associated with an extensive analysis of surface topography. The statistical analysis demonstrated that the effect of the material composition on AP was not significant. Moreover, no correlation was found between AP and the surface element concentrations determined by XPS, demonstrating that surface chemistry was not an influential parameter for long-term adhesion. Likewise, the roughness amplitude, independently of the process, had no influence on AP, meaning that roughness amplitude is not an intrinsic parameter of long-term adhesion. In contrast, the elaboration process alone had a significant effect on AP. For a given surface elaboration process, the number of inflexion points, or G parameter, was the most pertinent roughness parameter for describing the influence of topography on long-term adhesion: the more inflexion points, and hence discontinuities, the higher the long-term adhesion. PMID:20694978
NASA Astrophysics Data System (ADS)
Chandrasekaran, A.; Ravisankar, R.; Harikrishnan, N.; Satapathy, K. K.; Prasad, M. V. R.; Kanagasabapathy, K. V.
2015-02-01
Anthropogenic activities increase the accumulation of heavy metals in the soil environment. Soil pollution significantly reduces environmental quality and affects human health. In the present study, soil samples were collected at different locations of Yelagiri Hills, Tamilnadu, India for heavy metal analysis. The samples were analyzed for twelve selected heavy metals (Mg, Al, K, Ca, Ti, Fe, V, Cr, Mn, Co, Ni and Zn) using energy dispersive X-ray fluorescence (EDXRF) spectroscopy. Heavy metal concentrations in soil were investigated using the enrichment factor (EF), geo-accumulation index (Igeo), contamination factor (CF) and pollution load index (PLI) to determine metal accumulation, distribution and pollution status. Heavy metal toxicity risk was assessed using the soil quality guidelines (SQGs) given by the target and intervention values of the Dutch soil standards. The concentrations of Ni, Co, Zn, Cr, Mn, Fe, Ti, K, Al and Mg were mainly controlled by natural sources. Multivariate statistical methods such as the correlation matrix, principal component analysis and cluster analysis were applied to identify heavy metal sources (anthropogenic/natural origin). Geo-statistical methods such as kriging identified hot spots of metal contamination in road areas, influenced mainly by the presence of natural rocks.
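The contamination indices named in this abstract follow standard formulas: CF = C_sample/C_background, Igeo = log₂(C_sample/(1.5·C_background)), and PLI as the geometric mean of the CFs. A minimal sketch with hypothetical concentrations and background values (not this study's data):

```python
import numpy as np

def contamination_factor(c_sample, c_background):
    return c_sample / c_background

def geoaccumulation_index(c_sample, c_background):
    # Igeo = log2( C_n / (1.5 * B_n) ); the 1.5 buffers natural background variation
    return np.log2(c_sample / (1.5 * c_background))

def pollution_load_index(cf_values):
    # geometric mean of the contamination factors of all metals considered
    return np.prod(cf_values) ** (1.0 / len(cf_values))

# Hypothetical soil concentrations (mg/kg) and background values for three metals
c  = np.array([45.0, 30.0, 90.0])
bg = np.array([90.0, 68.0, 95.0])

cf  = contamination_factor(c, bg)
pli = pollution_load_index(cf)
```

PLI > 1 is conventionally read as overall pollution relative to background; here all CFs are below one, so the site would be classed as unpolluted.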
Solar granulation and statistical crystallography: A modeling approach using size-shape relations
NASA Technical Reports Server (NTRS)
Noever, D. A.
1994-01-01
The irregular polygonal pattern of solar granulation is analyzed for size-shape relations using statistical crystallography. In contrast to previous work which has assumed perfectly hexagonal patterns for granulation, more realistic accounting of cell (granule) shapes reveals a broader basis for quantitative analysis. Several features emerge as noteworthy: (1) a linear correlation between number of cell-sides and neighboring shapes (called Aboav-Weaire's law); (2) a linear correlation between both average cell area and perimeter and the number of cell-sides (called Lewis's law and a perimeter law, respectively) and (3) a linear correlation between cell area and squared perimeter (called convolution index). This statistical picture of granulation is consistent with a finding of no correlation in cell shapes beyond nearest neighbors. A comparative calculation between existing model predictions taken from luminosity data and the present analysis shows substantial agreements for cell-size distributions. A model for understanding grain lifetimes is proposed which links convective times to cell shape using crystallographic results.
Kulesz, Paulina A.; Tian, Siva; Juranek, Jenifer; Fletcher, Jack M.; Francis, David J.
2015-01-01
Objective: Weak structure-function relations for brain and behavior may stem from problems in estimating these relations in small clinical samples with frequently occurring outliers. In the current project, we focused on the utility of using alternative statistics to estimate these relations. Method: Fifty-four children with spina bifida meningomyelocele performed attention tasks and received MRI of the brain. Using a bootstrap sampling process, the Pearson product moment correlation was compared with four robust correlations: the percentage bend correlation, the Winsorized correlation, the skipped correlation using the Donoho-Gasko median, and the skipped correlation using the minimum volume ellipsoid estimator. Results: All methods yielded similar estimates of the relations between measures of brain volume and attention performance. The similarity of estimates across correlation methods suggested that the weak structure-function relations previously found in many studies are not readily attributable to the presence of outlying observations and other factors that violate the assumptions behind the Pearson correlation. Conclusions: Given the difficulty of assembling large samples for brain-behavior studies, estimating correlations using multiple, robust methods may enhance the statistical conclusion validity of studies yielding small, but often clinically significant, correlations. PMID:25495830
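A Winsorized correlation of the kind compared in this study can be sketched in a few lines. The data below are synthetic, and the 20% Winsorizing proportion is a common default rather than necessarily the one the authors used:

```python
import numpy as np
from scipy import stats

def winsorized_correlation(x, y, prop=0.2):
    """Pearson correlation after Winsorizing each variable: the lowest and
    highest `prop` fraction of values are pulled in to the remaining
    extremes, damping the influence of outliers."""
    xw = np.asarray(stats.mstats.winsorize(x, limits=(prop, prop)))
    yw = np.asarray(stats.mstats.winsorize(y, limits=(prop, prop)))
    return np.corrcoef(xw, yw)[0, 1]

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = x + 0.5 * rng.normal(size=50)
y[0] = 40.0                       # one gross outlier

r_pearson = np.corrcoef(x, y)[0, 1]   # degraded by the outlier
r_winsor = winsorized_correlation(x, y)
```

On data like these, the Winsorized estimate stays close to the outlier-free correlation while the ordinary Pearson estimate collapses, which is the robustness property the study exploits.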
Zaman, A.A.; McNally, T.W.; Fricke, A.L.
1998-01-01
Vapor-liquid equilibria and boiling point elevation of slash pine kraft black liquors over a wide range of solids concentrations (up to 85% solids) have been studied. The liquors are from a statistically designed pulping experiment for pulping slash pine in a pilot scale digester with four cooking variables of effective alkali, sulfidity, cooking time, and cooking temperature. It was found that the boiling point elevation of black liquors is pressure dependent, and this dependency is more significant at higher solids concentrations. The boiling point elevation data at different solids contents (at a fixed pressure) were correlated to the dissolved solids (S/(1 − S)) in black liquor. Due to the solubility limit of some of the salts in black liquor, a change in the slope of the boiling point elevation as a function of the dissolved solids was observed at a concentration of around 65% solids. An empirical method was developed to describe the boiling point elevation of each liquor as a function of pressure and solids mass fraction. The boiling point elevation of slash pine black liquors was correlated quantitatively to the pulping variables, using different statistical procedures. These predictive models can be applied to determine the boiling point rise (and boiling point) of slash pine black liquors at processing conditions from knowledge of the pulping variables. The results are presented, and their utility is discussed.
Multivariate statistical approach to estimate mixing proportions for unknown end members
NASA Astrophysics Data System (ADS)
Valder, Joshua F.; Long, Andrew J.; Davis, Arden D.; Kenner, Scott J.
2012-08-01
A multivariate statistical method is presented, which includes principal components analysis (PCA) and an end-member mixing model to estimate unknown end-member hydrochemical compositions and the relative mixing proportions of those end members in mixed waters. PCA, together with the Hotelling T2 statistic and a conceptual model of groundwater flow and mixing, was used in selecting samples that best approximate end members, which then were used as initial values in optimization of the end-member mixing model. This method was tested on controlled datasets (i.e., true values of estimates were known a priori) and found effective in estimating these end members and mixing proportions. The controlled datasets included synthetically generated hydrochemical data, synthetically generated mixing proportions, and laboratory analyses of sample mixtures, which were used in an evaluation of the effectiveness of this method for potential use in actual hydrological settings. For three different scenarios tested, correlation coefficients (R2) for linear regression between the estimated and known values ranged from 0.968 to 0.993 for mixing proportions and from 0.839 to 0.998 for end-member compositions. The method also was applied to field data from a study of end-member mixing in groundwater as a field example and partial method validation.
Statistical approaches to nonstationary EEGs for the detection of slow vertex responses.
Fujikake, M; Ninomija, S P; Fujita, H
1989-06-01
A slow vertex response (SVR) is an electric auditory evoked response used for an objective hearing power test. One aim of an objective hearing power test is to find infants whose hearing is poorer than that of normal infants. Early medical treatment is important for infants with hearing loss so that their development is not delayed. To measure SVRs, the averaged summation method of an electroencephalogram (EEG) is generally used, because the signal-to-noise ratio (SVR relative to the background EEG) is very poor. To increase the reliability and stability of measured SVRs, and at the same time to lighten the burden of testing, it is necessary to devise an effective SVR measurement method. Two factors must be considered: (1) SVR waveforms change following the changes in EEGs caused by sleep, and (2) EEGs are considered nonstationary data in prolonged measurement. In this paper, five statistical methods are used on two different models, a stationary model and a nonstationary model. Through a comparison of the waves obtained by each method, we clarify the statistical characteristics of the original data (EEGs including SVRs) and consider the conditions that affect the measurement method of an SVR. PMID:2794816
NASA Astrophysics Data System (ADS)
Reese, Erik D.; Kawahara, H.; Kitayama, T.; Sasaki, S.; Suto, Y.
2009-01-01
Motivated by cosmological hydrodynamic simulations, the intracluster medium (ICM) inhomogeneity of galaxy clusters is modeled statistically with a lognormal model for density inhomogeneity. Through mock observations of synthetic clusters, the relationship between density inhomogeneities and those of the X-ray surface brightness has been developed. This enables one to infer the statistical properties of the fluctuations of the underlying three-dimensional density distribution of real galaxy clusters from X-ray observations. We explore inhomogeneity in the intracluster medium by applying the above methodology to Chandra observations of a sample of nearby galaxy clusters. We also consider extensions of the model, including Poissonian effects, and compare this hybrid lognormal-Poisson model to the nearby cluster Chandra data. EDR gratefully acknowledges support from JSPS (Japan Society for the Promotion of Science) Postdoctoral Fellowship for Foreign Researchers award P07030. HK is supported by Grants-in-Aid for JSPS Fellows. This work is also supported by Grants-in-Aid for Scientific Research of the Japanese Ministry of Education, Culture, Sports, Science and Technology (Nos. 20.10466, 19.07030, 16340053, 20340041, and 20540235) and by the JSPS Core-to-Core Program "International Research Network for Dark Energy".
NASA Astrophysics Data System (ADS)
Huang, Chao-Guang; Wang, Jingbo
2016-08-01
It is shown in this paper that the symplectic form for the system consisting of D-dimensional bulk Palatini gravity and SO(1, 1) BF theory on an isolated horizon as a boundary just contains the bulk term. An alternative quantization procedure for the boundary BF theory is presented. The area entropy is determined by the degree of freedom of the bulk spin network states which satisfy a suitable boundary condition. The gauge-fixing condition in the approach and the advantages of the approach are also discussed.
Probability, Information and Statistical Physics
NASA Astrophysics Data System (ADS)
Kuzemsky, A. L.
2016-03-01
In this short survey review we discuss foundational issues of the probabilistic approach to information theory and statistical mechanics from a unified standpoint. Emphasis is on the inter-relations between the theories. The basic aim is tutorial, i.e. to provide a basic introduction to the analysis and applications of probabilistic concepts in the description of various aspects of complexity and stochasticity. We consider probability as a foundational concept in statistical mechanics and review selected advances in the theoretical understanding of the interrelation of probability, information and statistical description with regard to basic notions of statistical mechanics of complex systems. The review also includes a synthesis of past and present research and a survey of methodology. The purpose of this terse overview is to discuss and partially describe those probabilistic methods and approaches that are used in statistical mechanics, with the aim of making these ideas easier to understand and to apply.
A Statistical Ontology-Based Approach to Ranking for Multiword Search
ERIC Educational Resources Information Center
Kim, Jinwoo
2013-01-01
Keyword search is a prominent data retrieval method for the Web, largely because the simple and efficient nature of keyword processing allows a large amount of information to be searched with fast response. However, keyword search approaches do not formally capture the clear meaning of a keyword query and fail to address the semantic relationships…
Slicing and Dicing the Genome: A Statistical Physics Approach to Population Genetics
NASA Astrophysics Data System (ADS)
Maruvka, Yosef E.; Shnerb, Nadav M.; Solomon, Sorin; Yaari, Gur; Kessler, David A.
2011-04-01
The inference of past demographic parameters from current genetic polymorphism is a fundamental problem in population genetics. The standard techniques utilize a reconstruction of the gene-genealogy, a cumbersome process that may be applied only to small numbers of sequences. We present a method that compares the total number of haplotypes (distinct sequences) with the model prediction. By chopping the DNA sequence into pieces we condense the immense information hidden in sequence space into a function for the number of haplotypes versus subsequence size. The details of this curve are robust to statistical fluctuations and are seen to reflect the process parameters. This procedure allows for a clear visualization of the quality of the fit and, crucially, the numerical complexity grows only linearly with the number of sequences. Our procedure is tested against both simulated data as well as empirical mtDNA data from China and provides excellent fits in both cases.
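The haplotype-counting idea is simple to sketch: chop aligned sequences into windows of increasing size and count the distinct pieces per window. The toy illustration below uses made-up sequences and is not the paper's data or its demographic estimator:

```python
def haplotype_curve(sequences, window_sizes):
    """For each window size L, chop every aligned sequence into consecutive
    length-L pieces and count the distinct pieces (haplotypes) per window,
    averaged over all complete windows."""
    curve = {}
    for L in window_sizes:
        n_windows = len(sequences[0]) // L
        counts = [len({seq[w * L:(w + 1) * L] for seq in sequences})
                  for w in range(n_windows)]
        curve[L] = sum(counts) / len(counts)
    return curve

# Four hypothetical aligned 8-base sequences
seqs = ["AAAATTTT", "AAAATTTA", "AAACTTTT", "AAAATTTT"]
curve = haplotype_curve(seqs, [4, 8])
```

The haplotype count grows with window size as more distinguishing mutations fall inside each piece; it is the shape of this curve that the method fits to model predictions, and the cost scales linearly with the number of sequences.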
Quantum-statistical T-matrix approach to line broadening of hydrogen in dense plasmas
Lorenzen, Sonja; Wierling, August; Roepke, Gerd; Reinholz, Heidi; Zammit, Mark C.; Fursa, Dmitry V.; Bray, Igor
2010-10-29
The electronic self-energy Σ^e is an important input in a quantum-statistical theory for spectral line profile calculations. It describes the influence of plasma electrons on bound-state properties. In dense plasmas, the effect of strong, i.e. close, electron-emitter collisions can be considered via three-particle T-matrix diagrams. These diagrams are approximated with the help of an effective two-particle T-matrix, which is obtained from convergent close-coupling calculations with Debye screening. A comparison with other theories is carried out for the 2p level of hydrogen at k_BT = 1 eV and n_e = 2·10²³ m⁻³, and results are given for n_e = 1·10²⁵ m⁻³.
Mohajeri, Leila; Aziz, Hamidi Abdul; Isa, Mohamed Hasnain; Zahed, Mohammad Ali
2010-02-01
This work studied the bioremediation of weathered crude oil (WCO) in coastal sediment samples using a central composite face-centered design (CCFD) under response surface methodology (RSM). Initial oil concentration, biomass, nitrogen and phosphorus concentrations were used as independent variables (factors) and oil removal as the dependent variable (response) in a 60-day trial. A statistically significant model for WCO removal was obtained. The coefficient of determination (R² = 0.9732) and probability value (P < 0.0001) demonstrated the significance of the regression model. Numerical optimization based on a desirability function was carried out for initial oil concentrations of 2, 16 and 30 g per kg sediment; removals of 83.13, 78.06 and 69.92 percent, respectively, were observed, compared to 77.13, 74.17 and 69.87 percent for un-optimized conditions. PMID:19773160
Application of J-Q theory to the local approach statistical model of cleavage fracture
Yan, C.; Wu, S.X.; Mai, Y.W.
1997-12-31
A statistical model has been established to predict the fracture toughness in the lower-shelf and lower transition regions. It considers the in-plane constraint effect in terms of the two-parameter J-Q stress field. This model has been applied to predict the effect of crack depth and specimen geometry on fracture toughness and there is good agreement with experimental data. The specimens with lower in-plane constraints have a large toughness scatter due to the significant constraint loss during the loading process. The lower-bound toughness is not sensitive to crack depth and specimen geometry and this is attributed to the fact that all specimens have a similar in-plane constraint at small loads.
A statistical approach to estimate nitrogen sectorial contribution to total load.
Grizzetti, B; Bouraoui, F; de Marsily, G; Bidoglio, G
2005-01-01
This study describes a source apportionment methodology for nitrogen river transport. A statistical model has been developed to determine the contribution of each source (point and diffuse) of nitrogen to river-mouth transport. A non-linear regression equation was developed, relating measured nitrogen transport rates in streams to spatially referenced nitrogen sources and basin characteristics. The model considers applied fertilizer, atmospheric deposition and point discharges as sources, and winter rainfall, average air temperature, topographic wetness index and dry season flow as basin characteristics. The model was calibrated in an area of 8913 km2 in East Anglia (UK). In the studied area, the average contribution of agriculture to the nitrogen load is estimated at around 71%. Point sources and atmospheric deposition account for 24% and 5% of the exported nitrogen, respectively. The model allowed the estimation of the contribution of each source to nitrogen emissions and of the nitrogen retention in soils and waters as influenced by basin factors. PMID:15850177
NASA Astrophysics Data System (ADS)
Taylor, Bettina B.; Taylor, Marc H.; Dinter, Tilman; Bracher, Astrid
2013-06-01
Phycobiliproteins are a family of water-soluble pigment proteins that play an important role as accessory or antenna pigments and absorb in the green part of the light spectrum poorly used by chlorophyll a. The phycoerythrins (PEs) are one of four types of phycobiliproteins that are generally distinguished based on their absorption properties. As PEs are water soluble, they are generally not captured with conventional pigment analysis. Here we present a statistical model based on in situ measurements of three transatlantic cruises which allows us to derive relative PE concentration from standardized hyperspectral underwater radiance measurements (Lu). The model relies on Empirical Orthogonal Function (EOF) analysis of Lu spectra and, subsequently, a Generalized Linear Model with measured PE concentrations as the response variable and EOF loadings as predictor variables. The method is used to predict relative PE concentrations throughout the water column and to calculate integrated PE estimates based on those profiles.
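The EOF-then-regression pipeline described here can be sketched as an SVD of the spectral anomaly matrix followed by a regression of pigment concentration on the leading loadings. The sketch below uses synthetic spectra, and an ordinary least-squares fit stands in for the paper's Generalized Linear Model; all quantities are invented for illustration.

```python
import numpy as np

# Synthetic standardized radiance spectra: rows = stations, cols = wavelengths
rng = np.random.default_rng(1)
n_stations, n_wavelengths = 40, 60
mode = np.sin(np.linspace(0, np.pi, n_wavelengths))   # one dominant spectral mode
scores = rng.normal(size=n_stations)
spectra = np.outer(scores, mode) + 0.05 * rng.normal(size=(n_stations, n_wavelengths))

# EOF analysis = SVD of the anomaly matrix
anomalies = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
loadings = U[:, :3] * s[:3]       # station loadings on the leading 3 EOFs

# Regress a synthetic pigment concentration on the EOF loadings
pe = 2.0 + 1.5 * scores + 0.1 * rng.normal(size=n_stations)
X = np.column_stack([np.ones(n_stations), loadings])
beta, *_ = np.linalg.lstsq(X, pe, rcond=None)
pe_hat = X @ beta
r2 = 1 - np.sum((pe - pe_hat) ** 2) / np.sum((pe - pe.mean()) ** 2)
```

Because the pigment signal projects onto the leading EOF, the regression recovers it almost perfectly here; real radiance data would of course need more modes and a link function appropriate to the response.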
Defect-phase-dynamics approach to statistical domain-growth problem of clock models
NASA Technical Reports Server (NTRS)
Kawasaki, K.
1985-01-01
The growth of statistical domains in quenched Ising-like p-state clock models with p = 3 or more is investigated theoretically, reformulating the analysis of Ohta et al. (1982) in terms of a phase variable and studying the dynamics of defects introduced into the phase field when the phase variable becomes multivalued. The resulting defect/phase domain-growth equation is applied to the interpretation of Monte Carlo simulations in two dimensions (Kaski and Gunton, 1983; Grest and Srolovitz, 1984), and problems encountered in the analysis of related Potts models are discussed. In the two-dimensional case, the problem is essentially that of a purely dissipative Coulomb gas, with a √t growth law complicated by vertex-pinning effects at small t.
Osborn, Ronald E
2010-01-01
Les Mêmes Droits Pour Tous (MDT) is a human rights NGO in Guinea, West Africa that focuses on the rights of prisoners in Maison Centrale, the country's largest prison located in the capital city of Conakry. In 2007, MDT completed a survey of the prison population to assess basic legal and human rights conditions. This article uses statistical tools to explore MDT's survey results in greater depth, shedding light on human rights violations in Guinea. It contributes to human rights literature that argues for greater use of econometric tools in rights reporting, and demonstrates how human rights practitioners and academics can work together to construct an etiology of violence and torture by state actors, as physical violence is perhaps the most extreme violation of the individual's right to health. PMID:21178191
Examining rainfall and cholera dynamics in Haiti using statistical and dynamic modeling approaches.
Eisenberg, Marisa C; Kujbida, Gregory; Tuite, Ashleigh R; Fisman, David N; Tien, Joseph H
2013-12-01
Haiti has been in the midst of a cholera epidemic since October 2010. Rainfall is thought to be associated with cholera here, but this relationship has only begun to be quantitatively examined. In this paper, we quantitatively examine the link between rainfall and cholera in Haiti for several different settings (including urban, rural, and displaced person camps) and spatial scales, using a combination of statistical and dynamic models. Statistical analysis of the lagged relationship between rainfall and cholera incidence was conducted using case crossover analysis and distributed lag nonlinear models. Dynamic models consisted of compartmental differential equation models including direct (fast) and indirect (delayed) disease transmission, where indirect transmission was forced by empirical rainfall data. Data sources include cholera case and hospitalization time series from the Haitian Ministry of Public Health, the United Nations Water, Sanitation and Health Cluster, International Organization for Migration, and Hôpital Albert Schweitzer. Rainfall data was obtained from rain gauges from the U.S. Geological Survey and Haiti Regeneration Initiative, and remote sensing rainfall data from the National Aeronautics and Space Administration Tropical Rainfall Measuring Mission. A strong relationship between rainfall and cholera was found for all spatial scales and locations examined. Increased rainfall was significantly correlated with increased cholera incidence 4-7 days later. Forcing the dynamic models with rainfall data resulted in good fits to the cholera case data, and rainfall-based predictions from the dynamic models closely matched observed cholera cases. These models provide a tool for planning and managing the epidemic as it continues. PMID:24267876
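The dynamic side of the study, a compartmental model with a rainfall-forced indirect (waterborne) transmission pathway, can be illustrated with a minimal SIWR-type sketch. Everything here is an assumption for illustration: the parameter values, the sinusoidal "rainfall" series, and the Euler time stepping are not the authors' calibrated model or the Haitian data.

```python
import math

# Minimal SIWR-style sketch: direct (fast) person-to-person transmission plus
# indirect (delayed) transmission through a water reservoir W whose
# contribution is scaled by a hypothetical daily rainfall series.
def simulate(days=120, dt=1.0):
    N = 1e6
    S, I, W, cases = N - 10, 10.0, 0.0, 0.0
    beta_i, beta_w = 0.15, 0.2       # direct / indirect transmission (assumed)
    gamma, xi, decay = 0.25, 1e-6, 0.1  # recovery, shedding, reservoir decay
    incidence = []
    for t in range(days):
        rain = 10.0 * (1 + math.sin(2 * math.pi * t / 30))  # synthetic rainfall (mm)
        lam = beta_i * I / N + beta_w * W * (rain / 10.0)   # rainfall-forced force of infection
        new_inf = min(lam * S * dt, S)   # clamp so S cannot go negative
        S -= new_inf
        I += new_inf - gamma * I * dt
        W += xi * I * dt - decay * W * dt  # shedding into / decay of reservoir
        cases += new_inf
        incidence.append(new_inf)
    return cases, incidence

total, incidence = simulate()
```

In the actual study the forcing term would be replaced by the gauge or TRMM rainfall series and the parameters fitted to the reported case data.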
Mapping permeability in low-resolution micro-CT images: A multiscale statistical approach
NASA Astrophysics Data System (ADS)
Botha, Pieter W. S. K.; Sheppard, Adrian P.
2016-06-01
We investigate the possibility of predicting permeability in low-resolution X-ray microcomputed tomography (µCT). Lower-resolution whole core images give greater sample coverage and are therefore more representative of heterogeneous systems; however, the lower resolution causes connecting pore throats to be represented by intermediate gray scale values and limits information on pore system geometry, rendering such images inadequate for direct permeability simulation. We present an imaging and computation workflow aimed at predicting absolute permeability for sample volumes that are too large to allow direct computation. The workflow involves computing permeability from high-resolution µCT images, along with a series of rock characteristics (notably open pore fraction, pore size, and formation factor) from spatially registered low-resolution images. Multiple linear regression models correlating permeability to rock characteristics provide a means of predicting and mapping permeability variations in larger scale low-resolution images. Results show excellent agreement between permeability predictions made from 16 and 64 µm/voxel images of 25 mm diameter 80 mm tall core samples of heterogeneous sandstone for which 5 µm/voxel resolution is required to compute permeability directly. The statistical model used at the lowest resolution of 64 µm/voxel (similar to typical whole core image resolutions) includes open pore fraction and formation factor as predictor characteristics. Although binarized images at this resolution do not completely capture the pore system, we infer that these characteristics implicitly contain information about the critical fluid flow pathways. Three-dimensional permeability mapping in larger-scale lower resolution images by means of statistical predictions provides input data for subsequent permeability upscaling and the computation of effective permeability at the core scale.
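The core statistical step, a multiple linear regression of permeability on characteristics computed from low-resolution images, can be sketched as below. The data are synthetic: the power-law ground truth linking permeability to open pore fraction and formation factor, and all coefficient values, are assumptions for illustration, not results from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-subvolume characteristics measured on low-resolution
# images: open pore fraction phi and formation factor F (synthetic values).
n = 50
phi = rng.uniform(0.05, 0.25, n)
F = rng.uniform(10, 200, n)

# Assumed ground truth for the synthetic data:
# log k = a + b*log(phi) + c*log(F) + noise  (k in arbitrary units).
log_k = 2.0 + 3.0 * np.log(phi) - 1.2 * np.log(F) \
    + 0.1 * rng.standard_normal(n)

# Multiple linear regression of log-permeability on the log-characteristics;
# the fitted model can then map permeability in larger low-resolution images.
X = np.column_stack([np.ones(n), np.log(phi), np.log(F)])
coef, *_ = np.linalg.lstsq(X, log_k, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((log_k - pred) ** 2) / np.sum((log_k - log_k.mean()) ** 2)
```

Working in log space is a common choice because permeability spans orders of magnitude; the paper's actual predictor set also includes pore size at the higher resolutions.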
Whole vertebral bone segmentation method with a statistical intensity-shape model based approach
NASA Astrophysics Data System (ADS)
Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer
2011-03-01
An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing 4 different statistical intensity-shape combined models for the cervical, upper / lower thoracic and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as pre-processing to detect the position and orientation of each vertebra, which determine the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After principal component analysis (PCA) of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, and 0.939 mm for the cervical, upper thoracic, lower thoracic, and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed fair performance for cervical, thoracic and lumbar vertebrae.
NASA Astrophysics Data System (ADS)
Fang, N. Z.; Gao, S.
2015-12-01
Challenges in fully accounting for the complexity of spatially and temporally varied rainfall always exist in flood frequency analysis. Conventional approaches that simplify the complexity of spatiotemporal interactions generally underestimate their impacts on flood risks. A previously developed stochastic storm generator called Dynamic Moving Storms (DMS) aims to address the highly dependent nature of the precipitation field: spatial variability, temporal variability, and movement of the storm. The authors utilize a multivariate statistical approach based on DMS to estimate the occurrence probability or frequency of extreme storm events. Fifteen years of radar rainfall data are used to generate a large number of synthetic storms as the basis for statistical assessment. Two parametric retrieval algorithms are developed to recognize rain cells and track storm motions, respectively. The resulting parameters are then used to establish probability density functions (PDFs), which are fitted to parametric distribution functions for further Monte Carlo simulations. Consequently, over 1,000,000 synthetic storms are generated based on twelve retrieved parameters for integrated risk assessment and ensemble forecasts. Furthermore, the PDFs of the parameters are used to calculate joint probabilities based on two-dimensional Archimedean copula functions to determine the occurrence probabilities of extreme events. The approach is validated on the Upper Trinity River watershed and the generated results are compared with those from traditional rainfall frequency studies (i.e., Intensity-Duration-Frequency curves and Areal Reduction Factors).
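The copula step can be made concrete with a small sketch. The abstract does not say which Archimedean family or parameters are used, so the Clayton copula and the marginal probabilities below are assumptions chosen purely to show the mechanics of turning two marginal non-exceedance probabilities into a joint exceedance probability.

```python
# Clayton copula, one member of the Archimedean family:
# C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0.
def clayton_cdf(u, v, theta):
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

# Hypothetical marginals: 10-year storm intensity and 10-year duration,
# i.e. non-exceedance probabilities u = v = 0.9; dependence theta = 2
# (both values assumed for illustration).
u = v = 0.9
theta = 2.0
joint_cdf = clayton_cdf(u, v, theta)

# P(X > x and Y > y) by inclusion-exclusion on the copula.
p_joint_exceed = 1 - u - v + joint_cdf
```

Under independence the joint exceedance would be 0.1 * 0.1 = 0.01; the positive dependence encoded by theta = 2 makes the joint extreme event noticeably more likely, which is exactly why copula-based joint probabilities matter for storm risk.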
NASA Astrophysics Data System (ADS)
Seçgin, Abdullah
2013-01-01
Statistical energy analysis (SEA) parameters such as average modal spacing, coupling loss factor and input power are numerically determined for point connected, directly coupled symmetrically laminated composite plates using a modal-based approach. The approach is an enhancement of classical wave transmission formula. Unlike most of the existing numerical or experimental techniques, the approach uses uncoupled plate modal information and treats substructure by means of averaged modal impedances. The procedure introduced here is verified using analytical definitions of infinite orthotropic plates which physically resemble to laminated plates for (under) specific conditions, and is tested by performing experimental power injection method (PIM) for an actual, right-angled composite structure. In the development process, force and moment transmissions are individually considered in order to be consistent with analytical formulations. Modal information of composite plates is statistically evaluated by the discrete singular convolution method with random boundary conditions. Proposed methodology not only provides an efficient use of SEA method in high frequency vibration analysis of composite structures, but also enhances SEA accuracy in mid frequency region in which conventional SEA fails. Furthermore, the effect of orientation angles of laminations on SEA parameters are also discussed in mid and high frequency regions.
NASA Astrophysics Data System (ADS)
Panthou, G.; Vischel, T.; Lebel, T.; Quantin, G.; Favre, A.; Blanchet, J.; Ali, A.
2012-12-01
Studying trends in rainfall extremes at regional scale is required to provide a reference climatology to evaluate General Circulation Model global predictions as well as to help manage and design hydraulic works. The present study compares three methods to detect trends (linear and change-point) in series of daily rainfall annual maxima: (i) the first approach is widely used and consists of applying statistical stationarity tests (linear trend and change-point) to the point-wise maxima series; (ii) the second approach compares the performances of a constant and a time-dependent Generalized Extreme Value (GEV) distribution fitted to the point-wise maxima series; (iii) the last method uses an original regional statistical model based on a space-time GEV distribution, which is used to detect changes in rainfall extremes directly at regional scale. The three methods are applied to detect trends in extreme daily rainfall over the Sahel during the period 1950-1990, for which a network of 128 daily rain gauges is available. This region has experienced an intense drought since the end of the 1960s; it is thus an interesting case study to illustrate how a regional climate change can affect the extreme rainfall distributions. One major result is that the statistical stationarity tests rarely detect non-stationarities in the series, while the two GEV-based models converge to show that the extreme rainfall series have a negative break point around 1970. The study points out the limit of the widely used classical stationarity tests in detecting trends in noisy series affected by sampling errors. The use of parametric time-dependent GEV models seems to reduce this effect, especially when a regional approach is used. From a climatological point of view, the results show that the great Sahelian drought has been accompanied by a decrease of extreme rainfall events, both in magnitude and occurrence.
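As an example of the first family of methods, a point-wise change-point test, here is a sketch of the nonparametric Pettitt test. The abstract does not name the specific tests used, so Pettitt is an assumed, commonly used choice, and the series below is synthetic, built with a downward break mimicking the post-1970 shift rather than taken from the Sahel records.

```python
import math
import random

# Pettitt change-point test: U_t sums rank-like sign comparisons between
# the samples before and after each candidate break point t; the maximum
# |U_t| gives the break location and an approximate p-value.
def pettitt(x):
    n = len(x)
    best_k, best_t = 0, 0
    for t in range(1, n):
        u = sum((xj > xi) - (xj < xi) for xi in x[:t] for xj in x[t:])
        if abs(u) > best_k:
            best_k, best_t = abs(u), t
    p = 2.0 * math.exp(-6.0 * best_k ** 2 / (n ** 3 + n ** 2))
    return best_t, min(p, 1.0)

random.seed(0)
# Synthetic annual-maximum series with a downward break after year 20.
x = [50 + random.gauss(0, 5) for _ in range(20)] + \
    [35 + random.gauss(0, 5) for _ in range(20)]
t_break, p_value = pettitt(x)
```

On a clean break like this the test is decisive; the study's point is that on noisy real maxima series such point-wise tests often lack power, which is where the time-dependent and regional GEV models help.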
Prediction of free air space in initial composting mixtures by a statistical design approach.
Soares, Micaela A R; Quina, Margarida J; Quinta-Ferreira, Rosa
2013-10-15
Free air space (FAS) is a physical parameter that can play an important role in composting processes to maintain favourable aerobic conditions. Aiming to predict the FAS of initial composting mixtures, specific material proportions ranging from 0 to 1 were tested for a case study comprising industrial potato peel, which is characterized by low air void volume, thus requiring additional components for its composting. The characterization and prediction of FAS for initial mixtures involving potato peel, grass clippings and rice husks (set A) or sawdust (set B) was accomplished by means of an augmented simplex-centroid mixture design approach. The experimental data were fitted to second-order Scheffé polynomials. Synergistic or antagonistic effects of mixture proportions on the FAS response were identified from the surface and response trace plots. Moreover, good agreement was achieved between the model predictions and supplementary experimental data. In addition, theoretical and empirical approaches for estimating FAS available in the literature were compared with the predictions generated by the mixture design approach. This study demonstrated that the mixture design methodology can be a valuable tool to predict the initial FAS of composting mixtures, specifically in making adjustments to improve composting processes containing primarily potato peel. PMID:23722176
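Fitting a second-order Scheffé polynomial to simplex-centroid design points can be sketched as follows. The design points are the standard simplex-centroid layout, but the FAS responses and the 40/30/30 candidate blend are hypothetical values invented for illustration, not the paper's measurements.

```python
import numpy as np

# Simplex-centroid design points for a 3-component mixture
# (x1 = potato peel, x2 = grass clippings, x3 = bulking agent; illustrative).
pts = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],
    [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
    [1/3, 1/3, 1/3],
])

# Hypothetical FAS responses (%) at the design points; not the paper's data.
fas = np.array([20., 55., 70., 45., 50., 66., 52.])

# Second-order Scheffe polynomial: linear blending terms plus two-component
# interaction terms whose signs flag synergy (+) or antagonism (-).
def design_matrix(X):
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1*x2, x1*x3, x2*x3])

coef, *_ = np.linalg.lstsq(design_matrix(pts), fas, rcond=None)

# Predict FAS for a candidate 40/30/30 mixture.
pred = design_matrix(np.array([[.4, .3, .3]]))[0] @ coef
```

With the simplex constraint x1 + x2 + x3 = 1 the Scheffé form needs no intercept or squared terms; the interaction coefficients are what the surface and trace plots in the paper visualize.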
Anazaw, K; Ohmori, L H
2001-11-01
Many hydrochemical studies have reported the chemical formation of shallow ground water as the result of water-rock interaction and of contamination by paleo-brine or human activities, whereas the initial formation of the precipitation-fed source in the recharge region has not been established yet. The purpose of this research work is to clarify the geochemical process of water formation from a water source unpolluted by seawater or human activity. Norikura volcano, located in the western part of central Japan, provided a suitable source for this research purpose, and hence the chemical compositions of water samples from the summit and the mountainside area of Norikura volcano were determined. Most samples in the summit area showed very low electrical conductivity, lower than 12 µS/cm. On the basis of the chemical compositions, two multivariate statistical techniques, principal component analysis (PCA) and factor analysis (FA), were used to extract the geochemical factors affecting the hydrochemical process. As a result, three factors were extracted. The first factor showed high loadings on K+, Ca2+, SO42- and SiO2, and this factor was interpreted as the influence of the chemical interaction between acidic precipitated water and rocks. The second factor showed high loadings on Na+ and Cl-, and it was assumed to be an influence of seawater salt. The third factor showed a loading on NO3-, and it was interpreted to be caused by the biochemical effect of vegetation. The proportionate contributions of these factors to the evolution of the water chemical composition were found to be 45%, 20%, and 10% for factors 1, 2 and 3, respectively. The same exploration at the mountainside of Norikura volcano revealed that the chemical variances of the non-geothermal water samples were highly influenced by water-rock interactions. The silicate dissolution showed a 45% contribution for all chemical variances, while the adsorption of Ca2+ and Mg2+ by precipitation or ion exchange showed 20
Comparison of a Traditional Probabilistic Risk Assessment Approach with Advanced Safety Analysis
Smith, Curtis L; Mandelli, Diego; Zhegang Ma
2014-11-01
As part of the Light Water Reactor Sustainability (LWRS) Program [1], the purpose of the Risk Informed Safety Margin Characterization (RISMC) [2] Pathway research and development (R&D) is to support plant decisions for risk-informed margin management, with the aim of improving the economics and reliability, and sustaining the safety, of current NPPs. In this paper, we describe the RISMC analysis process, illustrating how mechanistic and probabilistic approaches are combined in order to estimate a safety margin. We use the scenario of a “station blackout” (SBO), wherein offsite and onsite power are lost, thereby causing a challenge to plant safety systems. We describe the RISMC approach, illustrate the station blackout modeling, and contrast this with traditional risk analysis modeling for this type of accident scenario. We also describe the approach we are using to represent advanced flooding analysis.
Spatio-statistical analysis of temperature fluctuation using Mann-Kendall and Sen's slope approach
NASA Astrophysics Data System (ADS)
Atta-ur-Rahman; Dawood, Muhammad
2016-04-01
This article deals with the spatio-statistical analysis of temperature trends using the Mann-Kendall trend model (MKTM) and Sen's slope estimator (SSE) in the eastern Hindu Kush, north Pakistan. Climate change has a strong relationship with the trend in temperature and the resultant changes in rainfall pattern and river discharge. In the present study, temperature is selected as the meteorological parameter for trend analysis and slope magnitude. In order to achieve the objectives of the study, temperature data were collected from the Pakistan Meteorological Department for all seven meteorological stations that fall within the eastern Hindu Kush region. The temperature data were analysed and simulated using the MKTM, whereas the SSE method was applied to determine the slope magnitude of the trend and exhibit the type of fluctuations. The analysis reveals that a positive (increasing) trend in mean maximum temperature has been detected for the Chitral, Dir and Saidu Sharif met stations, whereas a negative (decreasing) trend in mean minimum temperature has been recorded for the Saidu Sharif and Timergara met stations. The analysis further reveals that the observed variation in temperature trend and slope magnitude is attributed to the climate change phenomenon in the region.
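Both statistics used here are simple enough to implement directly. The sketch below gives the Mann-Kendall S statistic with the normal approximation (no tie correction, an assumption for brevity) and the Sen slope as the median of all pairwise slopes; the temperature series is synthetic, with an assumed +0.05 °C/yr trend, not the Pakistan Meteorological Department data.

```python
import math

# Mann-Kendall trend test: S counts concordant minus discordant pairs;
# Z uses the normal approximation with continuity correction (no ties).
def mann_kendall(x):
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - (s > 0) + (s < 0)) / math.sqrt(var)
    return s, z

# Sen's slope: median of all pairwise slopes, robust to outliers.
def sens_slope(x):
    slopes = sorted((x[j] - x[i]) / (j - i)
                    for i in range(len(x)) for j in range(i + 1, len(x)))
    m = len(slopes)
    return slopes[m // 2] if m % 2 else 0.5 * (slopes[m // 2 - 1] + slopes[m // 2])

# Synthetic mean-maximum temperature series: +0.05 deg C/yr trend plus
# deterministic pseudo-noise (illustration only).
temps = [30 + 0.05 * t + 0.4 * ((t * 37) % 11 - 5) / 5 for t in range(40)]
s, z = mann_kendall(temps)
slope = sens_slope(temps)
```

A |Z| above 1.96 indicates a significant trend at the 5% level, and the sign of the Sen slope gives its direction, which is how the increasing/decreasing classifications per station are obtained.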
Mouser, Paula J; Rizzo, Donna M; Röling, Wilfred F M; Van Breukelen, Boris M
2005-10-01
Managers of landfill sites are faced with enormous challenges when attempting to detect and delineate leachate plumes with a limited number of monitoring wells, assess spatial and temporal trends for hundreds of contaminants, and design long-term monitoring (LTM) strategies. Subsurface microbial ecology is a unique source of data that has historically been underutilized in LTM groundwater designs. This paper provides a methodology for utilizing qualitative and quantitative information (specifically, multiple water quality measurements and genome-based data) from a landfill leachate contaminated aquifer in Banisveld, The Netherlands, to improve the estimation of parameters of concern. We used principal component analysis (PCA) to reduce nonindependent hydrochemistry data and Bacteria and Archaea community profiles from 16S rDNA denaturing gradient gel electrophoresis (DGGE) to six statistically independent variables representing the majority of the original dataset variance. The PCA scores grouped samples based on the degree or class of contamination and were similar over considerable horizontal distances. Incorporation of the principal component scores with traditional subsurface information using cokriging improved the understanding of the contaminated area by reducing error variances and increasing detection efficiency. Combining these multiple types of data (e.g., genome-based information, hydrochemistry, borings) may be extremely useful at landfill or other LTM sites for designing cost-effective strategies to detect and monitor contaminants. PMID:16245827
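The PCA reduction step can be sketched directly from the covariance eigendecomposition. The three "hydrochemistry" columns below are synthetic stand-ins driven by a single latent contamination axis (the variable names in the comments are hypothetical examples); the real study also folded DGGE band profiles into the same reduction and fed the scores into cokriging, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic correlated hydrochemistry: one latent "degree of contamination"
# drives three measured variables (illustrative, not the Banisveld data).
n = 30
redox = rng.standard_normal(n)
data = np.column_stack([
    5.0 + 2.0 * redox + 0.3 * rng.standard_normal(n),   # e.g. DOC
    1.0 + 1.5 * redox + 0.3 * rng.standard_normal(n),   # e.g. NH4+
    0.5 - 1.2 * redox + 0.3 * rng.standard_normal(n),   # e.g. SO4(2-), depleted in plume
])

# Standardize, then eigendecompose the correlation matrix.
z = (data - data.mean(axis=0)) / data.std(axis=0)
evals, evecs = np.linalg.eigh(np.cov(z.T))
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

scores = z @ evecs                     # statistically independent PC scores
explained = evals[0] / evals.sum()     # PC1 acts as the contamination axis
```

Because the scores are uncorrelated by construction, they make well-behaved secondary variables for cokriging against sparse primary measurements.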
A statistical approach to estimate the Lyapunov spectrum in disc brake squeal
NASA Astrophysics Data System (ADS)
Oberst, S.; Lai, J. C. S.
2015-01-01
The estimation of the squeal propensity of a brake system from the prediction of unstable vibration modes using linear complex eigenvalue analysis (CEA) in the frequency domain has had its fair share of successes and failures. While the CEA is almost standard practice in the automotive industry, time domain methods and the estimation of Lyapunov spectra have not received much attention in brake squeal analyses. One reason is the challenge of estimating the true Lyapunov exponents and discriminating them from spurious ones in experimental data. A novel method based on the application of the Eckmann-Ruelle matrices is proposed here to estimate Lyapunov exponents by using noise in a statistical procedure. It is validated with respect to parameter variations and dimension estimates. By counting the number of non-overlapping confidence intervals for Lyapunov exponent distributions obtained by moving a window of increasing size over bootstrapped same-length estimates of an observation function, a dispersion measure's width is calculated and fed into a Bayesian beta-binomial model. Results obtained using this method for benchmark models of white and pink noise as well as the classical Hénon map indicate that true Lyapunov exponents can be isolated from spurious ones with high confidence. The method is then applied to accelerometer and microphone data obtained from brake squeal tests. Estimated Lyapunov exponents indicate that the pad's out-of-plane vibration behaves quasi-periodically on the brink of chaos while the microphone's squeal signal remains periodic.
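For the Hénon benchmark mentioned above, the largest Lyapunov exponent can be computed directly when the map is known, by iterating the Jacobian on a tangent vector with renormalization. This is only the clean reference case: the paper's contribution, the bootstrap/Bayesian discrimination of true from spurious exponents in noisy measured data, is not reproduced in this sketch.

```python
import math

# Largest Lyapunov exponent of the Henon map (a = 1.4, b = 0.3) via
# tangent-vector iteration; the accepted value is about 0.42 (natural log).
a, b = 1.4, 0.3
x, y = 0.1, 0.1
vx, vy = 1.0, 0.0
lam_sum, n_steps = 0.0, 20000

for _ in range(500):                    # discard transient onto the attractor
    x, y = 1 - a * x * x + y, b * x

for _ in range(n_steps):
    # Jacobian of (x, y) -> (1 - a x^2 + y, b x) is [[-2 a x, 1], [b, 0]];
    # apply it at the current state before advancing the state.
    vx, vy = -2 * a * x * vx + vy, b * vx
    x, y = 1 - a * x * x + y, b * x
    norm = math.hypot(vx, vy)
    lam_sum += math.log(norm)
    vx, vy = vx / norm, vy / norm       # renormalize the tangent vector

lyapunov = lam_sum / n_steps
```

For measured squeal signals no Jacobian is available, which is exactly why reconstruction-based estimators (and the statistical safeguards the paper proposes) are needed instead.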
Zumberge, J.E.
1987-06-01
The distributions of eight tricyclic and eight pentacyclic terpanes were determined for 216 crude oils from worldwide locations, with subsequent simultaneous RQ-mode factor analysis and stepwise discriminant analysis for the purpose of predicting source rock features or depositional environments. Five categories of source beds are evident: nearshore marine; deeper-water marine; lacustrine; phosphate-rich source beds; and Ordovician age source rocks. The first two factors of the RQ-mode factor analysis describe 45 percent of the variation in the data set; the tricyclic terpanes appear to be twice as significant as the pentacyclic terpanes in determining the variation among samples. Lacustrine oils are characterized by greater relative abundances of the C21 diterpane and gammacerane; nearshore marine sources by the C19 and C20 diterpanes and oleanane; deeper-water marine facies by the C24 and C25 tricyclic and C31 plus C32 extended hopanes; and Ordovician age oils by the C27 and C29 pentacyclic terpanes. Although thermal maturity trends can be observed in factor space, the trends do not necessarily obscure the source rock interpretations. Also, since bacterial degradation of crude oils rarely affects the tricyclic terpanes, biodegraded oils can be used in predicting source rock features. The precision to which source rock depositional environments are determined might be increased with the addition of other biomarker and stable isotope data using multivariate statistical techniques.
Methodologic and statistical approaches to studying human fertility and environmental exposure.
Tingen, Candace; Stanford, Joseph B; Dunson, David B
2004-01-01
Although there has been growing concern about the effects of environmental exposures on human fertility, standard epidemiologic study designs may not collect sufficient data to identify subtle effects while properly adjusting for confounding. In particular, results from conventional time to pregnancy studies can be driven by the many sources of bias inherent in these studies. By prospectively collecting detailed records of menstrual bleeding, occurrences of intercourse, and a marker of ovulation day in each menstrual cycle, precise information on exposure effects can be obtained, adjusting for many of the primary sources of bias. This article provides an overview of the different types of study designs, focusing on the data required, the practical advantages and disadvantages of each design, and the statistical methods required to take full advantage of the available data. We conclude that detailed prospective studies allowing inferences on day-specific probabilities of conception should be considered as the gold standard for studying the effects of environmental exposures on fertility. PMID:14698936
Solar Wind Turbulence and Intermittency at 0.72 AU - Statistical Approach
NASA Astrophysics Data System (ADS)
Teodorescu, E.; Echim, M.; Munteanu, C.; Zhang, T.; Barabash, S. V.; Budnik, E.; Fedorov, A.
2014-12-01
In this analysis we characterize the turbulent magnetic fluctuations measured by the Venus Express magnetometer (VEX-MAG) in the solar wind during the last solar cycle minimum, at a distance of 0.72 AU from the Sun. We analyze data recorded between 2007 and 2009 with time resolutions of 1 Hz and 32 Hz. In correlation with plasma data from the ASPERA instrument (Analyser of Space Plasmas and Energetic Atoms), we identify 550 time intervals, at 1 Hz resolution, when VEX is in the solar wind and which satisfy selection criteria defined based on the amount and the continuity of the data. We identify 118 time intervals that correspond to fast solar wind. We compute the power spectral densities (PSD) for Bx, By, Bz, B, B², B∥ and B⊥. We perform a statistical analysis of the spectral indices computed for each of the PSDs and find a dependence of the spectral index on the solar wind velocity and a slight difference in power content between the parallel and perpendicular components of the magnetic field. We also estimate the scale invariance of the fluctuations by computing the probability distribution functions (PDFs) for the Bx, By, Bz, B and B² time series and discuss the implications for intermittent turbulence. Research supported by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 313038/STORM, and a grant of the Romanian Ministry of National Education, CNCS - UEFISCDI, project number PN-II-ID-PCE-2012-4-0418.
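The per-interval spectral index estimation amounts to fitting a power law to a PSD in log-log space. The sketch below does this on a synthetic Kolmogorov-like f^(-5/3) spectrum; the frequency range and noise level are assumptions, not VEX-MAG data, and the PSD itself would in practice come from an FFT or Welch estimate of the field components.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic PSD following f^(-5/3) with multiplicative scatter
# (frequency range assumed for illustration).
f = np.logspace(-3, -0.5, 200)                 # frequency (Hz)
psd = f ** (-5.0 / 3.0) * np.exp(0.1 * rng.standard_normal(f.size))

# Least-squares line in log-log space; the slope is the spectral index.
slope, intercept = np.polyfit(np.log10(f), np.log10(psd), 1)
spectral_index = slope                          # expect roughly -5/3
```

Repeating this fit over the 550 selected intervals and histogramming the indices is what produces the statistical dependence on solar wind velocity reported in the abstract.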
A statistical approach to optimization of alumina etching in a high density plasma
Li Xiao; Gupta, Subhadra; Highsmith, Alton; Paranjpe, Ajit; Rook, Katrina
2008-08-01
Inductively coupled plasma (ICP) reactive ion etching of Al2O3 with fluorine-based gas chemistry in a high density plasma reactor was carried out in an initial investigation aimed at data storage applications. A statistical design of experiments was implemented to optimize etch performance with respect to process variables such as ICP power, platen power, direct current (dc) bias, and pressure. Both soft photoresist masks and hard metal masks were investigated in terms of etch selectivity and surface properties. The inverse power-law dependence of the dc bias on the ratio of ICP to platen power was elucidated. Etch mechanisms in terms of physical and ion-enhanced chemical etching are discussed. The F-based chemistry greatly enhances the etch rate of alumina compared to purely physical processes such as ion milling. Etch rates as high as 150 nm/min were achieved using this process. A practical process window was developed for high etch rates, with reasonable selectivity to hard masks, with the desired profile, and with low substrate bias for minimal damage.
NASA Astrophysics Data System (ADS)
Um, Myoung-Jin; Kim, Hanbeen; Heo, Jun-Haeng
2016-08-01
A general circulation model (GCM) can be applied to project future climate factors, such as precipitation and atmospheric temperature, to study hydrological and environmental climate change. Although many improvements in GCMs have been proposed recently, projected climate data still need to be corrected for biases before the model output is applied in practice. In this study, a new hybrid process was proposed, and its ability to perform bias correction for the prediction of annual precipitation and annual daily maxima was tested. The hybrid process in this study was based on quantile mapping with the gamma and generalized extreme value (GEV) distributions and a spline technique to correct the bias of projected daily precipitation. The observed and projected daily precipitation values from the selected stations were analyzed using three bias correction methods, namely, linear scaling, quantile mapping, and hybrid methods. The performances of these methods were analyzed to find the optimal method for the prediction of annual precipitation and annual daily maxima. The linear scaling method yielded the best results for estimating the annual average precipitation, while the hybrid method was optimal for predicting the variation in annual precipitation. The hybrid method described the statistical characteristics of the annual maximum series (AMS) similarly to the observed data. In addition, this method demonstrated the lowest root mean squared error (RMSE) and the highest coefficient of determination (R²) for predicting the quantiles of the AMS for the extreme value analysis of precipitation.
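The quantile-mapping idea underlying all three methods can be sketched with an empirical version. Note the simplification: the study's hybrid method fits parametric gamma (bulk) and GEV (tail) distributions joined by a spline, whereas this sketch maps empirical quantiles directly, and both precipitation samples are synthetic gamma draws, not station data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic wet-day precipitation (mm): "observed" vs a dry-biased model.
obs = rng.gamma(shape=2.0, scale=5.0, size=3000)
mod = rng.gamma(shape=2.0, scale=3.0, size=3000)

# Empirical quantile mapping: each model value is replaced by the observed
# value at the same (empirical) quantile.
q = np.linspace(0.01, 0.99, 99)
mod_q = np.quantile(mod, q)
obs_q = np.quantile(obs, q)

def correct(x):
    # np.interp clamps beyond the outermost quantiles, which truncates the
    # extreme tail; this is exactly where a parametric GEV tail helps.
    return np.interp(x, mod_q, obs_q)

corrected = correct(mod)
bias_before = abs(mod.mean() - obs.mean())
bias_after = abs(corrected.mean() - obs.mean())
```

The empirical map removes most of the mean bias but handles values beyond the calibration range poorly, which motivates the paper's parametric treatment of the annual daily maxima.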
Turi, Christina E; Finley, Jamie; Shipley, Paul R; Murch, Susan J; Brown, Paula N
2015-04-24
Metabolomics is the qualitative and quantitative analysis of all of the small molecules in a biological sample at a specific time and under a specific influence. Technologies for metabolomics analysis have developed rapidly as new analytical tools for chemical separations, mass spectrometry, and NMR spectroscopy have emerged. Plants have one of the largest metabolomes, and it is estimated that the average plant leaf can contain upward of 30 000 phytochemicals. In the past decade, over 1200 papers on plant metabolomics have been published. A standard metabolomics data set contains vast amounts of information and can be used either to test or to generate hypotheses. The key factors in using plant metabolomics data most effectively are the experimental design, authentic standard availability, extract standardization, and statistical analysis. Using cranberry (Vaccinium macrocarpon) as a model system, this review will discuss and demonstrate strategies and tools for the analysis and interpretation of metabolomics data sets, including eliminating false discoveries and determining significance, metabolite clustering, and logical algorithms for the discovery of new metabolites and pathways. Together these metabolomics tools represent an entirely new pipeline for phytochemical discovery. PMID:25751407
Relationship between skin color and sun exposure history: a statistical classification approach.
Rubegni, P; Cevenini, G; Flori, M L; Fimiani, M; Stanghellini, E; Molinu, A; Barbini, P; Andreassi, L
1997-02-01
In this study our aim was to determine the biophysical values of constitutive skin color in Caucasians and to define the correlation between skin color and phototype assessed according to the Fitzpatrick method. Constitutive skin color was measured on the buttock, with a Minolta CR-200 colorimeter, in a population of 557 consecutive subjects belonging to phototype categories I, II, III and IV. The colorimeter expresses the results in five different color systems. We used the "Yxy" and L*a*b* systems, which are the most widespread in dermatology. Statistical analysis of the data showed that the "Yxy" system is even more discriminant than the L*a*b* system when the Fitzpatrick classification scheme is adopted as the reference, but it shows a poor ability to correctly classify the intermediate phototypes (II and III). By contrast, the "Yxy" system performs well in distinguishing phototypes I and IV. To establish whether this low discriminating capacity for phototypes II and III is related to a low discriminating capacity of the method suggested by Fitzpatrick or of our procedure, an objective technique (minimal erythemal dose) should be used to evaluate the percentage classification errors of both the Fitzpatrick method and the instrumental measurement of skin color. The results of such a study would be extremely important because the evaluation of skin color is objective, simple and has potential applications in dermatology and cosmetology. PMID:9066310
A hybrid finite element - statistical energy analysis approach to robust sound transmission modeling
NASA Astrophysics Data System (ADS)
Reynders, Edwin; Langley, Robin S.; Dijckmans, Arne; Vermeir, Gerrit
2014-09-01
When considering the sound transmission through a wall in between two rooms, in an important part of the audio frequency range, the local response of the rooms is highly sensitive to uncertainty in spatial variations in geometry, material properties and boundary conditions, which have a wave scattering effect, while the local response of the wall is rather insensitive to such uncertainty. For this mid-frequency range, a computationally efficient modeling strategy is adopted that accounts for this uncertainty. The partitioning wall is modeled deterministically, e.g. with finite elements. The rooms are modeled in a very efficient, nonparametric stochastic way, as in statistical energy analysis. All components are coupled by means of a rigorous power balance. This hybrid strategy is extended so that the mean and variance of the sound transmission loss can be computed as well as the transition frequency that loosely marks the boundary between low- and high-frequency behavior of a vibro-acoustic component. The method is first validated in a simulation study, and then applied for predicting the airborne sound insulation of a series of partition walls of increasing complexity: a thin plastic plate, a wall consisting of gypsum blocks, a thicker masonry wall and a double glazing. It is found that the uncertainty caused by random scattering is important except at very high frequencies, where the modal overlap of the rooms is very high. The results are compared with laboratory measurements, and both are found to agree within the prediction uncertainty in the considered frequency range.
Akanksha, Karthik; Prasad, Arjun; Sukumaran, Rajeev K; Nampoothiri, Madhavan; Pandey, Ashok; Rao, S S; Parameswaran, Binod
2014-11-01
Sorghum is a commercially feasible lignocellulosic biomass with great potential as a sustainable feedstock for renewable energy. As with any lignocellulosic biomass, sorghum requires pretreatment to increase its susceptibility to enzymatic hydrolysis, generating sugars that can be further fermented to alcohol. In the present study, sorghum biomass was evaluated for deriving maximum fermentable sugars by optimizing various pretreatment parameters using statistical optimization methods. Pretreatment studies were done with H2SO4, followed by enzymatic saccharification. The efficiency of the process was evaluated on the basis of the total reducing sugars released during the process. Compositional analyses of the native and pretreated biomass were performed and compared. The biomass pretreated under the optimized conditions yielded 0.408 g of reducing sugars per g of pretreated biomass upon enzymatic hydrolysis. The cellulose content of the solid fraction obtained after pretreatment under the optimized conditions increased by 43.37%, with lower production of inhibitors in the acid-pretreated liquor. PMID:25434103
A statistical approach to determining criticality of residual host cell DNA.
Yang, Harry; Wei, Ziping; Schenerman, Mark
2015-01-01
We propose a method for determining the criticality of residual host cell DNA, which is characterized through two attributes, namely the size and amount of residual DNA in a biopharmaceutical product. By applying a mechanistic modeling approach to the problem, we establish the linkage between residual DNA and product safety measured in terms of immunogenicity, oncogenicity, and infectivity. Such a link makes it possible to establish acceptable ranges of residual DNA size and amount. Application of the method is illustrated through two real-life examples related to a vaccine manufactured in a Madin-Darby canine kidney (MDCK) cell line and a monoclonal antibody using a Chinese hamster ovary (CHO) cell line as host cells. PMID:25358029
A statistical modelling approach for the analysis of TMD chronic pain data.
D'Elia, Angela
2008-08-01
This paper presents a discrete mixture model as a suitable approach for the analysis of chronic pain data when they are expressed by means of ordered scores (ratings). The model is developed to allow for covariate effects; parameter estimation by maximum likelihood (using an E-M algorithm) and related inferential issues are discussed. A case study concerning the assessment of pain (for a given pathology) and the effect of patient covariates (e.g., gender, depression state) is illustrated. In addition, the same covariates are used to explain the disruption in everyday lifestyle due to the chronic pain condition. PMID:17698929
Febvre, G.
1994-10-01
The problem of lidar equation inversion lies in the fact that it requires a lidar calibration or else a reference value from the studied medium. This paper presents an approach to calibrating the lidar by calculating the constant Ak (the lidar constant A multiplied by the ratio k of the backscatter coefficient to the extinction coefficient). The approach is based on statistical analysis of in situ measurements, which demonstrates that the extinction coefficient has a typical probability distribution in cirrus clouds. A property of this distribution, with respect to the attenuation of the laser beam in the cloud, is used as a constraint to calculate the value of Ak. The validity of this method is discussed and the results are compared with two other inversion methods.
Snyder, Hannah R; Miyake, Akira; Hankin, Benjamin L
2015-01-01
Executive function (EF) is essential for successfully navigating nearly all of our daily activities. Of critical importance for clinical psychological science, EF impairments are associated with most forms of psychopathology. However, despite the proliferation of research on EF in clinical populations, with notable exceptions clinical and cognitive approaches to EF have remained largely independent, leading to failures to apply theoretical and methodological advances in one field to the other field and hindering progress. First, we review the current state of knowledge of EF impairments associated with psychopathology and limitations to the previous research in light of recent advances in understanding and measuring EF. Next, we offer concrete suggestions for improving EF assessment. Last, we suggest future directions, including integrating modern models of EF with state of the art, hierarchical models of dimensional psychopathology as well as translational implications of EF-informed research on clinical science. PMID:25859234
Application of the LBB regulatory approach to the steamlines of advanced WWER 1000 reactor
Kiselyov, V.A.; Sokov, L.M.
1997-04-01
The LBB regulatory approach, adopted in Russia in 1993 as an extra safety barrier, is described for the advanced WWER 1000 reactor steamline. The application of the LBB concept requires the following additional protections. First, the steamline should be a highly qualified piping, designed in accordance with the applicable regulations and guidelines and carefully screened to verify that it is not subject to any disqualifying failure mechanism. Second, a deterministic fracture mechanics analysis and leak rate evaluation have been performed to demonstrate that a postulated through-wall crack that yields 95 l/min at normal operating conditions is stable even under seismic loads. Finally, it has been verified that the leak detection systems are sufficiently reliable, diverse and sensitive, and that adequate margins exist to detect a through-wall crack smaller than the critical size. The obtained results are encouraging and show the feasibility of applying the LBB case to the steamline of the advanced WWER 1000 reactor.
Sleepwalking Into Infertility: The Need for a Public Health Approach Toward Advanced Maternal Age.
Lemoine, Marie-Eve; Ravitsky, Vardit
2015-01-01
In Western countries today, a growing number of women delay motherhood until their late 30s and even 40s, as they invest time in pursuing education and career goals before starting a family. This social trend results from greater gender equality and expanded opportunities for women and is influenced by the availability of contraception and assisted reproductive technologies (ART). However, advanced maternal age is associated with increased health risks, including infertility. While individual medical solutions such as ART and elective egg freezing can promote reproductive autonomy, they entail significant risks and limitations. We thus argue that women should be better informed regarding the risks of advanced maternal age and ART, and that these individual solutions need to be supplemented by a public health approach, including policy measures that provide women with the opportunity to start a family earlier in life without sacrificing personal career goals. PMID:26575814
Du, Tao; Duan, Yu; Li, Kaiwen; Zhao, Xiaomiao; Ni, Renmin; Li, Yu; Yang, Dongzi
2015-01-01
Background. Single-nucleotide polymorphisms (SNPs) in the follicle stimulating hormone receptor (FSHR) gene are associated with PCOS. However, their relationship to the polycystic ovary (PCO) morphology remains unknown. This study aimed to investigate whether PCOS-related SNPs in the FSHR gene are associated with PCO in women with PCOS. Methods. Patients were grouped into PCO (n = 384) and non-PCO (n = 63) groups. Genotypes were profiled using the Affymetrix Genome-Wide Human SNP Array 6.0. Two polymorphisms (rs2268361 and rs2349415) of FSHR were analyzed using a statistical approach. Results. Significant differences were found in the genotype distributions of rs2268361 between the PCO and non-PCO groups (27.6% GG, 53.4% GA, and 19.0% AA versus 33.3% GG, 36.5% GA, and 30.2% AA), while no significant differences were found in the genotype distributions of rs2349415. When rs2268361 was considered, there were statistically significant differences in serum follicle stimulating hormone, estradiol, and sex hormone binding globulin between genotypes in the PCO group. In the case of the rs2349415 SNP, only serum sex hormone binding globulin differed statistically between genotypes in the PCO group. Conclusions. Functional variants in the FSHR gene may contribute to PCO susceptibility in women with PCOS. PMID:26273622
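The genotype-frequency comparison above amounts to a standard chi-square test of independence on a 2 x 3 contingency table. The counts below are reconstructed from the percentages and group sizes reported in the abstract; this is an illustrative sketch, not the authors' actual analysis code.

```python
# Chi-square test of the rs2268361 genotype distribution between the PCO
# and non-PCO groups; counts derived from 27.6/53.4/19.0% of n = 384 and
# 33.3/36.5/30.2% of n = 63 as reported in the abstract.
from scipy.stats import chi2_contingency

#            GG   GA   AA
pco     = [106, 205,  73]   # n = 384
non_pco = [ 21,  23,  19]   # n = 63

chi2, p, dof, expected = chi2_contingency([pco, non_pco])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

A p-value below 0.05 here would be consistent with the "significant differences" the abstract reports for this SNP.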
NASA Astrophysics Data System (ADS)
Makino, Hironori; Minami, Nariyuki
2014-07-01
The theory of the quantal level statistics of a classically integrable system, developed by Makino et al. in order to investigate the non-Poissonian behaviors of level-spacing distribution (LSD) and level-number variance (LNV) [H. Makino and S. Tasaki, Phys. Rev. E 67, 066205 (2003); H. Makino and S. Tasaki, Prog. Theor. Phys. Suppl. 150, 376 (2003); H. Makino, N. Minami, and S. Tasaki, Phys. Rev. E 79, 036201 (2009); H. Makino and S. Tasaki, Prog. Theor. Phys. 114, 929 (2005)], is successfully extended to the study of the E(K,L) function, which constitutes a fundamental measure to determine most statistical observables of quantal levels in addition to LSD and LNV. In the theory of Makino et al., the eigenenergy level is regarded as a superposition of infinitely many components whose formation is supported by the Berry-Robnik approach in the far semiclassical limit [M. Robnik, Nonlinear Phenom. Complex Syst. 1, 1 (1998)]. We derive the limiting E(K,L) function in the limit of infinitely many components and elucidate its properties when energy levels show deviations from the Poisson statistics.
NASA Astrophysics Data System (ADS)
Capar, M. Ilk; Nar, A.; Ferrarini, A.; Frezza, E.; Greco, C.; Zakharov, A. V.; Vakulenko, A. A.
2013-03-01
The connection between the molecular structure of liquid crystals and their elastic properties, which control the director deformations relevant for electro-optic applications, remains a challenging objective for theories and computations. Here, we compare two methods that have been proposed to this purpose, both characterized by a detailed molecular level description. One is an integrated molecular dynamics-statistical mechanical approach, where the bulk elastic constants of nematics are calculated from the direct correlation function (DCFs) and the single molecule orientational distribution function [D. A. McQuarrie, Statistical Mechanics (Harper & Row, New York, 1973)]. The latter is obtained from atomistic molecular dynamics trajectories, together with the radial distribution function, from which the DCF is then determined by solving the Ornstein-Zernike equation. The other approach is based on a molecular field theory, where the potential of mean torque experienced by a mesogen in the liquid crystal phase is parameterized according to its molecular surface. In this case, the calculation of elastic constants is combined with the Monte Carlo sampling of single molecule conformations. Using these different approaches, but the same description, at the level of molecular geometry and torsional potentials, we have investigated the elastic properties of the nematic phase of two typical mesogens, 4'-n-pentyloxy-4-cyanobiphenyl and 4'-n-heptyloxy-4-cyanobiphenyl. Both methods yield K3(bend) >K1 (splay) >K2 (twist), although there are some discrepancies in the average elastic constants and in their anisotropy. These are interpreted in terms of the different approximations and the different ways of accounting for the structural properties of molecules in the two approaches. In general, the results point to the role of the molecular shape, which is modulated by the conformational freedom and cannot be fully accounted for by a single descriptor such as the aspect ratio.
Bindu, K R; Deepulal, P M; Gireeshkumar, T R; Chandramohanakumar, N
2015-08-01
Temporal and spatial variations of heavy metals in the Cochin estuary and its adjacent coastline were studied over three seasons to investigate the impact of anthropogenic heavy metal pollution. Total organic carbon, sand, silt, clay and 10 metals (Cd, Pb, Cr, Ni, Co, Cu, Zn, Mn, Mg and Fe) in the surface sediments were analysed. Multivariate statistical analyses such as canonical correspondence analysis, principal component analysis and cluster analysis were used for source identification, integration of geochemical data and clustering of stations based on similarities. The enrichment factor, contamination factor and geoaccumulation index were used to assess the contamination level. The study shows that the estuary and coast are highly polluted, especially with Cd, Zn, Pb and Ni. Anthropogenic input of heavy metals was evidenced by both the principal component analysis and the cluster analysis. Finer fractions (mud) of the sediment and the associated Fe oxyhydroxides might play a major role in the transport of heavy metals in the system. The very high enrichment factor values observed suggest strong anthropogenic pressure in the study area. All the stations in the northern part of the estuary showed very high enrichment factors, indicating a heavy load of Zn and Cd that might have reached this area from the industrial zone lying to the north of the Cochin estuary. Pollution indices suggested that both the estuary and its adjacent coast show low contamination with respect to Cr, Mg, Mn and Fe; all other metals cause low to extremely high levels of pollution in the study area. PMID:26205283
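The three pollution indices named above have standard textbook definitions. A minimal sketch follows; the concentrations and crustal background values are hypothetical, chosen purely for illustration and not taken from the study.

```python
import math

def enrichment_factor(c_metal, c_fe, bg_metal, bg_fe):
    """EF: the metal/Fe ratio in the sample normalised by the same ratio in
    background (crustal) material; EF >> 1 suggests anthropogenic input."""
    return (c_metal / c_fe) / (bg_metal / bg_fe)

def contamination_factor(c_metal, bg_metal):
    """CF: measured concentration divided by the background concentration."""
    return c_metal / bg_metal

def geoaccumulation_index(c_metal, bg_metal):
    """Igeo = log2(C / (1.5 * B)); the factor 1.5 buffers natural
    variability in the background values."""
    return math.log2(c_metal / (1.5 * bg_metal))

# Hypothetical sediment values (mg/kg) -- not data from the study.
cd, fe = 2.4, 35000.0          # measured Cd and Fe in one sample
cd_bg, fe_bg = 0.3, 47200.0    # assumed crustal background values

print(round(enrichment_factor(cd, fe, cd_bg, fe_bg), 2))
print(round(contamination_factor(cd, cd_bg), 2))
print(round(geoaccumulation_index(cd, cd_bg), 2))
```

Normalising EF to a conservative element such as Fe, as in the block above, is what lets the index separate grain-size effects from genuine anthropogenic enrichment.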
Statistical approach to the analysis of olive long-term pollen season trends in southern Spain.
García-Mozo, H; Yaezel, L; Oteros, J; Galán, C
2014-03-01
Analysis of long-term airborne pollen counts makes it possible not only to chart pollen-season trends but also to track changing patterns in flowering phenology. Changes in higher-plant response over a long interval are considered among the most valuable bioindicators of climate change impact. Phenological-trend models can also provide information regarding crop production and pollen-allergen emission. The value of this information makes the choice of statistical analysis for time-series study essential. We analysed trends and variations in the olive flowering season over a 30-year period (1982-2011) in southern Europe (Córdoba, Spain), focussing on: annual Pollen Index (PI); Pollen Season Start (PSS), Peak Date (PD), Pollen Season End (PSE) and Pollen Season Duration (PSD). Apart from traditional linear regression analysis, a Seasonal-Trend Decomposition procedure based on Loess (STL) and an ARIMA model were performed. Linear regression results indicated a trend toward delayed PSE and earlier PSS and PD, probably influenced by the rise in temperature. These changes are producing longer flowering periods in the study area. The STL technique provided a clearer picture of phenological behaviour: decomposing the pollination dynamics enabled the trend toward an alternate bearing cycle to be distinguished from the influence of other stochastic fluctuations. Results pointed to a rising trend in pollen production. With a view toward forecasting future phenological trends, ARIMA models were constructed to predict PSD, PSS and PI until 2016. Projections displayed a better goodness of fit than those derived from linear regression. Findings suggest that the olive reproductive cycle has changed considerably over the last 30 years due to climate change. Further conclusions are that STL improves the effectiveness of traditional linear regression in trend analysis, and ARIMA models can provide reliable trend projections for future years taking into
Identification of chilling and heat requirements of cherry trees—a statistical approach
NASA Astrophysics Data System (ADS)
Luedeling, Eike; Kunz, Achim; Blanke, Michael M.
2013-09-01
Most trees from temperate climates require the accumulation of winter chill and subsequent heat during their dormant phase to resume growth and initiate flowering in the following spring. Global warming could reduce chill and hence hamper the cultivation of high-chill species such as cherries. Yet determining chilling and heat requirements requires large-scale controlled-forcing experiments, and estimates are thus often unavailable. Where long-term phenology datasets exist, partial least squares (PLS) regression can be used as an alternative, to determine climatic requirements statistically. Bloom dates of cherry cv. `Schneiders späte Knorpelkirsche' trees in Klein-Altendorf, Germany, from 24 growing seasons were correlated with 11-day running means of daily mean temperature. Based on the output of the PLS regression, five candidate chilling periods ranging in length from 17 to 102 days, and one forcing phase of 66 days were delineated. Among three common chill models used to quantify chill, the Dynamic Model showed the lowest variation in chill, indicating that it may be more accurate than the Utah and Chilling Hours Models. Based on the longest candidate chilling phase with the earliest starting date, cv. `Schneiders späte Knorpelkirsche' cherries at Bonn exhibited a chilling requirement of 68.6 ± 5.7 chill portions (or 1,375 ± 178 chilling hours or 1,410 ± 238 Utah chill units) and a heat requirement of 3,473 ± 1,236 growing degree hours. Closer investigation of the distinct chilling phases detected by PLS regression could contribute to our understanding of dormancy processes and thus help fruit and nut growers identify suitable tree cultivars for a future in which static climatic conditions can no longer be assumed. All procedures used in this study were bundled in an R package (`chillR') and are provided as Supplementary materials. The procedure was also applied to leaf emergence dates of walnut (cv. `Payne') at Davis, California.
Garanin, S. F.; Kravets, E. M.; Mamyshev, V. I.; Tokarev, V. A.
2009-08-15
Radiation spectra from a plasma with multicharged ions, z >> N >> 1 (where z is the charge of an ion and N is the number of electrons in the ion), under coronal equilibrium conditions are considered in the quasiclassical approximation. In this case, the bremsstrahlung and recombination radiation can be described by simple quasiclassical formulas. The statistical model of an atom is used to study the high-frequency component of the line radiation spectra from ions (ħω > I, where I is the ionization energy) that is produced in collisions of free plasma electrons with the electrons at deep levels of an ion and during radiative filling of the forming hole by electrons from higher levels (X-ray terms, characteristic radiation). The intensity of this high-frequency spectral component of the characteristic radiation coincides in order of magnitude with the bremsstrahlung and recombination radiation intensities. One of the channels of collisions of free electrons with a multicharged ion is considered that results in the excitation of the ion and in its subsequent radiative relaxation, which contributes to the low-frequency component of the line spectrum (ħω < I). The total radiation intensity of this channel correlates fairly well with the results of calculating the radiation intensity from the multilevel coronal model. An analysis of the plasma behavior in the MAGO-IX experiment by two-dimensional MHD numerical simulations, and a description of the experimental data from a DANTE spectrometer by the spectra obtained in this study, shows that these experimental results cannot be explained if the D-T plasma is assumed to remain pure in the course of the experiment. The agreement can be made better, however, by assuming that the plasma is contaminated with impurities of copper and light elements from the wall.
A statistical approach to determining energetic outer radiation belt electron precipitation fluxes
NASA Astrophysics Data System (ADS)
Simon Wedlund, Mea; Clilverd, Mark A.; Rodger, Craig J.; Cresswell-Moorcock, Kathy; Cobbett, Neil; Breen, Paul; Danskin, Donald; Spanswick, Emma; Rodriguez, Juan V.
2014-05-01
Subionospheric radio wave data from an Antarctic-Arctic Radiation-Belt (Dynamic) Deposition VLF Atmospheric Research Konsortia (AARDDVARK) receiver located in Churchill, Canada, are analyzed to determine the characteristics of electron precipitation into the atmosphere over the range 3 < L < 7. The study advances previous work by combining signals from two U.S. transmitters from 20 July to 20 August 2010, allowing error estimates of derived electron precipitation fluxes to be calculated, including the application of time-varying electron energy spectral gradients. Electron precipitation observations from the NOAA POES satellites and a ground-based riometer provide intercomparison and context for the AARDDVARK measurements. AARDDVARK radio wave propagation data showed responses suggesting energetic electron precipitation from the outer radiation belt starting 27 July 2010 and lasting ~20 days. The uncertainty in >30 keV precipitation flux determined by the AARDDVARK technique was found to be ±10%. Peak AARDDVARK-derived >30 keV precipitation fluxes during the main and recovery phases of the largest geomagnetic storm, which started on 4 August 2010, were >10^5 el cm^-2 s^-1 sr^-1. The largest fluxes observed by AARDDVARK occurred on the dayside and were delayed by several days from the start of the geomagnetic disturbance. During the main phase of the disturbances, nightside fluxes were dominant. Significant differences in flux estimates between POES, AARDDVARK, and the riometer were found after the main phase of the largest disturbance, with evidence provided to suggest that >700 keV electron precipitation was occurring. Currently the presence of such relativistic electron precipitation introduces some uncertainty into the analysis of AARDDVARK data, given the assumption of a power-law electron precipitation spectrum.
Forecast of natural aquifer discharge using a data-driven, statistical approach.
Boggs, Kevin G; Van Kirk, Rob; Johnson, Gary S; Fairley, Jerry P
2014-01-01
In the Western United States, demand for water is often out of balance with limited water supplies. This has led to extensive water rights conflict and litigation. A tool that can reliably forecast natural aquifer discharge months ahead of peak water demand could help water practitioners and managers by providing advance knowledge of potential water-right mitigation requirements. The timing and magnitude of natural aquifer discharge from the Eastern Snake Plain Aquifer (ESPA) in southern Idaho is accurately forecast 4 months ahead of the peak water demand, which occurs annually in July. An ARIMA time-series model with exogenous predictors (ARIMAX model) was used to develop the forecast. The ARIMAX model fit to a set of training data was assessed using Akaike's information criterion (AIC) to select the optimal model for forecasting aquifer discharge, given the previous year's discharge and the values of the predictor variables. Model performance was assessed by applying the model to a validation subset of the data. The Nash-Sutcliffe efficiency for model predictions made on the validation set was 0.57. The predictor variables used in our forecast represent the major recharge and discharge components of the ESPA water budget, including variables that reflect overall water supply and important aspects of water administration and management. Coefficients of variation on the regression coefficients for streamflow and irrigation diversions were all much less than 0.5, indicating that these variables are strong predictors. The model with the highest AIC weight included streamflow, two irrigation diversion variables, and storage. PMID:24571388
Varekar, Vikas; Karmakar, Subhankar; Jha, Ramakar
2016-02-01
The design of surface water quality sampling locations is a crucial decision-making process in the rationalization of a monitoring network. The quantity, quality, and types of available data (watershed characteristics and water quality data) may affect the selection of an appropriate design methodology. The modified Sanders approach and multivariate statistical techniques [particularly factor analysis (FA)/principal component analysis (PCA)] are well-accepted and widely used techniques for the design of sampling locations. However, their performance may vary significantly with the quantity, quality, and types of available data. In this paper, an attempt has been made to evaluate the performance of these techniques while accounting for the effect of seasonal variation, under a situation of limited water quality data but extensive watershed characteristics information, as continuous and consistent river water quality data are usually difficult to obtain, whereas watershed information can be made available through the application of geospatial techniques. A case study of the Kali River, western Uttar Pradesh, India, is selected for the analysis. Monitoring was carried out at 16 sampling locations. The discrete and diffuse pollution loads at different sampling sites were estimated and accounted for using the modified Sanders approach, whereas the monitored physical and chemical water quality parameters were used as inputs for FA/PCA. The designed optimum numbers of sampling locations for the monsoon and non-monsoon seasons by the modified Sanders approach are eight and seven, while those for FA/PCA are eleven and nine, respectively. Little variation in the number and locations of designed sampling sites was obtained by the two techniques, which shows the stability of the results. A geospatial analysis has also been carried out to check the significance of the designed sampling locations with respect to river basin characteristics and land use of the study area. Both methods are equally efficient; however, modified Sanders
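The FA/PCA side of such a design can be sketched as follows. The 16 x 6 data matrix is hypothetical (16 sites, as in the study, by an assumed six water quality parameters), and the Kaiser eigenvalue-greater-than-one rule used here is one common retention criterion, not necessarily the authors' exact procedure.

```python
# PCA on standardized water quality data across sampling locations:
# components with eigenvalue > 1 summarize the dominant pollution patterns.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# 16 sampling locations x 6 parameters (e.g. BOD, DO, pH, ... -- stand-ins)
X = rng.normal(size=(16, 6))
X[:, 0] = 2.0 * X[:, 1] + rng.normal(0, 0.1, 16)  # two correlated parameters

Z = StandardScaler().fit_transform(X)  # z-score so units don't dominate
pca = PCA().fit(Z)
explained = pca.explained_variance_ratio_
eigenvalues = pca.explained_variance_
retained = (eigenvalues > 1).sum()     # Kaiser criterion
print(np.round(explained, 2), retained)
```

Sites loading strongly on the retained components are the candidates kept in the rationalized network; correlated parameters, like the engineered pair above, collapse onto a single component.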
NASA Astrophysics Data System (ADS)
Hein, H.; Mai, S.; Mayer, B.; Pohlmann, T.; Barjenbruch, U.
2012-04-01
The interactions of tides, external surges, storm surges and waves, with an additional role of the coastal bathymetry, define the probability of extreme water levels at the coast. Probabilistic analysis and process-based numerical models allow the estimation of future states. From the physical point of view, both deterministic processes and stochastic residuals are the fundamentals of high water statistics. This study uses a so-called model chain to reproduce historic statistics of tidal high water levels (Thw) as well as to predict future high water level statistics. The results of the numerical models are post-processed by a stochastic analysis. Recent studies show that for future extrapolation of extreme Thw, nonstationary parametric approaches are required. With the presented methods, a better prediction of time-dependent parameter sets seems possible. The investigation region of this study is the southern German Bight. The model chain is the representation of a downscaling process, which starts with an emissions scenario. Regional atmospheric and ocean models refine the results of global climate models. The concept of downscaling was chosen to resolve the coastal topography sufficiently. The North Sea and estuaries are modeled with the three-dimensional HAMburg Shelf Ocean Model. The simulated period spans 150 years (1950-2100). Results of four different hindcast runs and of one future prediction run are validated. Based on multi-scale analysis and the theory of entropy, we analyze whether any significant periodicities are represented numerically. Results show that hindcasting the climate of Thw with a model chain for the last 60 years is also a challenging task. For example, an additional modeling activity must be the inclusion of tides into regional climate ocean models. It is found that the statistics of climate variables derived from model results differ from the statistics derived from measurements. E.g. there are considerable shifts in
NASA Astrophysics Data System (ADS)
Hannequin, Pascal Paul
2015-06-01
Noise reduction in photon-counting images remains challenging, especially at low count levels. We have developed an original procedure which associates two complementary filters using a Wiener-derived approach. This approach combines two statistically adaptive filters into a dual-weighted (DW) filter. The first one, a statistically weighted adaptive (SWA) filter, replaces the central pixel of a sliding window with a statistically weighted sum of its neighbors. The second one, a statistical and heuristic noise extraction (extended) (SHINE-Ext) filter, performs a discrete cosine transformation (DCT) using sliding blocks. Each block is reconstructed using its significant components, which are selected using tests derived from multiple linear regression (MLR). The two filters are weighted according to Wiener theory. This approach has been validated using a numerical phantom and a real planar Jaszczak phantom. It has also been illustrated using planar bone scintigraphy and myocardial single-photon emission computed tomography (SPECT) data. The performance of the filters has been tested using the mean normalized absolute error (MNAE) between the filtered images and the reference noiseless or high-count images. Results show that the proposed filters quantitatively decrease the MNAE in the images and thus increase the signal-to-noise ratio (SNR). This allows one to work with lower-count images. The SHINE-Ext filter is well suited to large images and low-variance areas. DW filtering is efficient for small images and in high-variance areas. The relative proportion of eliminated noise generally decreases as the count level increases. In practice, SHINE filtering alone is recommended when pixel spacing is less than one-quarter of the effective resolution of the system and/or the size of the objects of interest. It can also be used when the practical interest of high frequencies is low. In other cases, DW filtering will be preferable. The proposed filters have been applied to nuclear
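The block-DCT reconstruction step of a SHINE-type filter can be sketched as follows. This is a minimal illustration, not the published algorithm: it substitutes a simple magnitude threshold for the MLR-derived significance tests, and the function names and parameters are illustrative.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2-D type-II DCT (orthonormal) applied along both axes
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

def denoise_block(block, keep_frac=0.1):
    """Reconstruct a block from its largest-magnitude DCT coefficients.

    The SHINE-Ext filter selects significant components with tests
    derived from multiple linear regression; this sketch uses a plain
    magnitude threshold only to show the block-DCT idea.
    """
    c = dct2(block)
    thresh = np.quantile(np.abs(c), 1.0 - keep_frac)
    c[np.abs(c) < thresh] = 0.0
    return idct2(c)

# A constant block survives exactly: all its energy sits in the DC coefficient.
flat = np.full((8, 8), 5.0)
out = denoise_block(flat, keep_frac=0.05)
```

In a full filter this would be applied over sliding blocks of the image and the reconstructions averaged, then combined with the SWA output via Wiener weighting.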
Bilancetti, Luca; Poncelet, Denis; Loisel, Catherine; Mazzitelli, Stefania; Nastruzzi, Claudio
2010-09-01
This article describes the preparation of starch particles by spray drying, for possible application in a dry powder coating process. Dry powder coating consists of spraying a fine powder and a plasticizer onto particles. The efficiency of the coating is linked to the morphological and dimensional characteristics of the powder. Different experimental parameters of the spray-drying process were analyzed, including the type of solvent, starch concentration, rate of polymer feeding, pressure of the atomizing air, drying air flow, and temperature of the drying air. An optimization and screening of the experimental parameters by a design of experiments (DOE) approach was performed. Finally, the produced spray-dried starch particles were tested in a dry coating process, in comparison with the initial commercial starch. The results obtained, in terms of coating efficiency, demonstrated that the spray-dried particles led to a sharp increase in coating efficiency. PMID:20706878
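A two-level screening design of the kind used in such DOE parameter studies can be enumerated in a few lines; the factor names and levels below are hypothetical, not taken from the article.

```python
from itertools import product

# Hypothetical two-level full factorial over three of the spray-drying
# factors mentioned in the abstract (levels are illustrative only).
factors = {
    "starch_conc_pct": (5, 10),
    "feed_rate_ml_min": (3, 9),
    "inlet_temp_c": (120, 160),
}

# Each run is one combination of factor levels: 2^3 = 8 runs in total.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
```

Screening designs like this let main effects on a response (here, coating efficiency) be estimated with a small number of experiments before optimization.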
Statistical dynamics of classical systems: A self-consistent field approach
Grzetic, Douglas J.; Wickham, Robert A.; Shi, An-Chang
2014-06-28
We develop a self-consistent field theory for particle dynamics by extremizing the functional integral representation of a microscopic Langevin equation with respect to the collective fields. Although our approach is general, here we formulate it in the context of polymer dynamics to highlight satisfying formal analogies with equilibrium self-consistent field theory. An exact treatment of the dynamics of a single chain in a mean force field emerges naturally via a functional Smoluchowski equation, while the time-dependent monomer density and mean force field are determined self-consistently. As a simple initial demonstration of the theory, leaving an application to polymer dynamics for future work, we examine the dynamics of trapped interacting Brownian particles. For binary particle mixtures, we observe the kinetics of phase separation.
Arostegui, Inmaculada; Núñez-Antón, Vicente; Quintana, José M
2012-04-01
Patient-reported outcomes (PRO) are used as primary endpoints in medical research, and their statistical analysis is an important methodological issue. The theoretical assumptions of the selected methodology and the interpretation of its results are issues to take into account when selecting an appropriate statistical technique to analyse the data. We present eight methods of analysis of a popular PRO tool under different assumptions that lead to different interpretations of the results. All methods were applied to responses obtained from two of the health dimensions of the SF-36 Health Survey. The proposed methods are: multiple linear regression (MLR), with least squares and bootstrap estimation, tobit regression, ordinal logistic and probit regressions, beta-binomial regression (BBR), binomial-logit-normal regression (BLNR) and coarsening. Selection of an appropriate model depends not only on its distributional assumptions but also on the continuous or ordinal nature of the response and the fact that it is constrained to a bounded interval. The BBR approach renders satisfactory results in a broad range of situations. MLR is not recommended, especially with skewed outcomes. Ordinal methods are only appropriate for outcomes with a small number of categories. Tobit regression is an acceptable option under normality assumptions and in the presence of a moderate ceiling or floor effect. The BLNR and coarsening proposals are also acceptable, but only under certain distributional assumptions that are difficult to test a priori. Interpretation of the results is more convenient when using the BBR, BLNR and ordinal logistic regression approaches. PMID:20858689
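As a sketch of the beta-binomial idea (not the authors' implementation), the beta-binomial likelihood for a bounded 0-to-n score can be maximized directly; the parameter names and synthetic data below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, gammaln

def betabinom_negll(params, k, n):
    """Negative log-likelihood of a beta-binomial(n, a, b) sample."""
    a, b = np.exp(params)          # log-parametrization keeps a, b > 0
    log_comb = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    ll = log_comb + betaln(k + a, n - k + b) - betaln(a, b)
    return -np.sum(ll)

rng = np.random.default_rng(0)
n = 100                                   # e.g. a 0-100 bounded PRO scale
p = rng.beta(2.0, 5.0, size=500)          # true shape parameters a=2, b=5
k = rng.binomial(n, p)                    # observed bounded scores

res = minimize(betabinom_negll, x0=[0.0, 0.0], args=(k, n),
               method='Nelder-Mead')
a_hat, b_hat = np.exp(res.x)
```

The extra beta layer captures the overdispersion that a plain binomial (or MLR on a bounded scale) misses; regression covariates would enter through the mean a/(a+b).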
Matson, Kevin D.; Tieleman, B. Irene
2011-01-01
The immune system is a complex collection of interrelated and overlapping solutions to the problem of disease. To deal with this complexity, researchers have devised multiple ways to measure immune function and to analyze the resulting data. In this way both organisms and researchers employ many tactics to solve a complex problem. One challenge facing ecological immunologists is the question of how these many dimensions of immune function can be synthesized to facilitate meaningful interpretations and conclusions. We tackle this challenge by employing and comparing several statistical methods, which we used to test assumptions about how multiple aspects of immune function are related at different organizational levels. We analyzed three distinct datasets that characterized 1) species, 2) subspecies, and 3) among- and within-individual level differences in the relationships among multiple immune indices. Specifically, we used common principal components analysis (CPCA) and two simpler approaches, pair-wise correlations and correlation circles. We also provide a simple example of how these techniques could be used to analyze data from multiple studies. Our findings lead to several general conclusions. First, relationships among indices of immune function may be consistent among some organizational groups (e.g. months over the annual cycle) but not others (e.g. species); therefore any assumption of consistency requires testing before further analyses. Second, simple statistical techniques used in conjunction with more complex multivariate methods give a clearer and more robust picture of immune function than using complex statistics alone. Moreover, these simpler approaches have potential for analyzing comparable data from multiple studies, especially as the field of ecological immunology moves towards greater methodological standardization. PMID:21526186
ERIC Educational Resources Information Center
Averitt, Sallie D.
These three modules, which were developed for use by instructors in a manufacturing firm's advanced technical preparation program, contain the materials required to present the safety section of the plant's adult-oriented, job-specific competency-based training program. The 3 modules contain 12 lessons on the following topics: lockout/tagout…
Po River plume and Northern Adriatic Dense Waters: a modeling and statistical approach.
NASA Astrophysics Data System (ADS)
Marcello Falcieri, Francesco; Benetazzo, Alvise; Sclavo, Mauro; Carniel, Sandro; Bergamasco, Andrea; Bonaldo, Davide; Barbariol, Francesco; Russo, Aniello
2014-05-01
The semi-enclosed Adriatic Sea, located in the north-eastern part of the Mediterranean Sea, is a small regional sea strongly influenced by riverine inputs. In its shallow northern sub-basin, both physical and biogeochemical features are strongly influenced by the Po River (together with some other minor ones) through its freshwater plume, by buoyancy changes and by nutrient and sediment loads. The major outcomes of this interaction are on primary production, on the onset of hypoxic and anoxic bottom water conditions, on the formation of strong salinity gradients (which influence the water column structure and both coastal and basin-wide circulation) and on the formation processes of the Northern Adriatic Dense Water (NAdDW). The NAdDW is a dense water mass formed during winter in the shallow Northern Adriatic under buoyancy loss conditions; it then travels southward along the Italian coast, reaching the Southern Adriatic after a few months. The NAdDW formation process is mostly locally wind driven, but freshwater discharges have been shown to play an important preconditioning role, starting from the summer preceding the formation period. To investigate the relationship between the Po plume (as a preconditioning factor) and the subsequent dense water formation, the results obtained from a numerical simulation with the Regional Ocean Modelling System (ROMS) have been statistically analyzed. The model has been implemented over the whole basin on a 2 km regular grid, with surface fluxes computed through a bulk flux formulation using a high-resolution meteorological model (COSMO I7). The only open boundary (the Otranto Strait) is imposed from an operational Mediterranean model (MFS), and the main river discharges are introduced as freshwater mass fluxes measured by the river gauges closest to the rivers' mouths. The model was run for 8 years, from 2003 to 2010. The Po plume was analysed with a 2x3 Self-Organizing Map (SOM) and two major antithetic patterns
Identification of chilling and heat requirements of cherry trees--a statistical approach.
Luedeling, Eike; Kunz, Achim; Blanke, Michael M
2013-09-01
Most trees from temperate climates require the accumulation of winter chill and subsequent heat during their dormant phase to resume growth and initiate flowering in the following spring. Global warming could reduce chill and hence hamper the cultivation of high-chill species such as cherries. Yet determining chilling and heat requirements requires large-scale controlled-forcing experiments, and estimates are thus often unavailable. Where long-term phenology datasets exist, partial least squares (PLS) regression can be used as an alternative, to determine climatic requirements statistically. Bloom dates of cherry cv. 'Schneiders späte Knorpelkirsche' trees in Klein-Altendorf, Germany, from 24 growing seasons were correlated with 11-day running means of daily mean temperature. Based on the output of the PLS regression, five candidate chilling periods ranging in length from 17 to 102 days, and one forcing phase of 66 days were delineated. Among three common chill models used to quantify chill, the Dynamic Model showed the lowest variation in chill, indicating that it may be more accurate than the Utah and Chilling Hours Models. Based on the longest candidate chilling phase with the earliest starting date, cv. 'Schneiders späte Knorpelkirsche' cherries at Bonn exhibited a chilling requirement of 68.6 ± 5.7 chill portions (or 1,375 ± 178 chilling hours or 1,410 ± 238 Utah chill units) and a heat requirement of 3,473 ± 1,236 growing degree hours. Closer investigation of the distinct chilling phases detected by PLS regression could contribute to our understanding of dormancy processes and thus help fruit and nut growers identify suitable tree cultivars for a future in which static climatic conditions can no longer be assumed. All procedures used in this study were bundled in an R package ('chillR') and are provided as Supplementary materials. The procedure was also applied to leaf emergence dates of walnut (cv. 'Payne') at Davis, California. PMID
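Of the three chill metrics compared in the study, the Chilling Hours Model is simple enough to sketch directly (one chilling hour per hour with temperature in (0, 7.2] degrees C); the Utah and Dynamic models, and the chillR package itself, are more involved. The synthetic temperature series below is illustrative.

```python
import numpy as np

def chilling_hours(hourly_temps_c):
    """Chilling Hours Model: count hours with 0 C < T <= 7.2 C.

    This is the simplest of the three chill metrics mentioned in the
    abstract; the Utah model adds weights per temperature band and the
    Dynamic Model accumulates 'chill portions' via a two-step reaction.
    """
    t = np.asarray(hourly_temps_c, dtype=float)
    return int(np.sum((t > 0.0) & (t <= 7.2)))

# One synthetic winter day: cool nights and mornings accumulate chill,
# a mild afternoon and a sub-zero pre-dawn dip do not.
hours = np.arange(24)
temps = 5.0 + 6.0 * np.sin(2 * np.pi * (hours - 9) / 24)
total_chill = chilling_hours(temps)
```

Summing this quantity from the delineated start of the chilling phase to bloom gives the chilling-hour requirement estimates of the kind reported (e.g. 1,375 +/- 178 chilling hours).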
NASA Astrophysics Data System (ADS)
McCray, Wilmon Wil L., Jr.
The research was prompted by a need to conduct a study that assesses the process improvement, quality management and analytical techniques taught to undergraduate and graduate students in U.S. college and university degree programs in systems engineering and the computing sciences (e.g., software engineering, computer science, and information technology) that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, process improvement methods and how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High-maturity process areas in the CMMI model imply the use of analytical, statistical and quantitative management techniques, and of process performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost and predict future outcomes. The research study identifies and discusses in detail the gap analysis findings on process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis which identifies the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research also includes a Monte Carlo simulation optimization
A Novel Approach to Materials Development for Advanced Reactor Systems. Annual Report for Year 1
Was, G.S.; Atzmon, M.; Wang, L.
2000-09-28
Year one of this project had three major goals. First, to specify, order and install a new high current ion source for more rapid and stable proton irradiation. Second, to assess the use of chromium pre-enrichment and the combination of cold-work and irradiation hardening in an effort to assess the role of radiation damage in IASCC without the effects of RIS. Third, to initiate irradiation of reactor pressure vessel steel and Zircaloy. Program Achievements for Year One: Progress was made on all 4 tasks in year one.
A statistical approach to multifield inflation: many-field perturbations beyond slow roll
NASA Astrophysics Data System (ADS)
McAllister, Liam; Renaux-Petel, Sébastien; Xu, Gang
2012-10-01
We study multifield contributions to the scalar power spectrum in an ensemble of six-field inflationary models obtained in string theory. We identify examples in which inflation occurs by chance, near an approximate inflection point, and we compute the primordial perturbations numerically, both exactly and using an array of truncated models. The scalar mass spectrum and the number of fluctuating fields are accurately described by a simple random matrix model. During the approach to the inflection point, bending trajectories and violations of slow roll are commonplace, and 'many-field' effects, in which three or more fields influence the perturbations, are often important. However, in a large fraction of models consistent with constraints on the tilt the signatures of multifield evolution occur on unobservably large scales. Our scenario is a concrete microphysical realization of quasi-single-field inflation, with scalar masses of order H, but the cubic and quartic couplings are typically too small to produce detectable non-Gaussianity. We argue that our results are characteristic of a broader class of models arising from multifield potentials that are natural in the Wilsonian sense.
A statistical approach to the thermal analysis at fumarole fields using infrared images
NASA Astrophysics Data System (ADS)
Pisciotta, Antonino; Diliberto, Iole Serena
2016-04-01
exchange of energy drives each component towards thermal equilibrium. Infrared cameras allow thermal anomalies to be spotted in an instant, but great care must be taken in interpreting the thermal images, since retrieved apparent temperatures are affected by a number of factors, including the emissivity and surface roughness of the object, viewing angle, atmospheric effects, pathlength, effects of sun radiation (reflection and/or heating), the presence of volcanic gas, aerosols and air-borne ash along the pathlength, instrumental noise and aberrations, and, particularly for volcanic targets, thermal heterogeneity of the target at the sub-pixel scale. The sum of these influences substantially controls the radiation detected by the thermal camera, generally resulting in a significant underestimation of the actual thermodynamic temperature of the target. A statistical methodology was chosen to quantify the thermal anomalies in a steaming ground, and it could provide the basis for an indirect temperature monitoring tool in fumarole fields.
Chiu, Weihsueh A.; Euling, Susan Y.; Scott, Cheryl Siegel; Subramaniam, Ravi P.
2013-09-15
The contribution of genomics and associated technologies to human health risk assessment for environmental chemicals has focused largely on elucidating mechanisms of toxicity, as discussed in other articles in this issue. However, there is interest in moving beyond hazard characterization to making more direct impacts on quantitative risk assessment (QRA) — i.e., the determination of toxicity values for setting exposure standards and cleanup values. We propose that the evolution of QRA of environmental chemicals in the post-genomic era will involve three, somewhat overlapping phases in which different types of approaches begin to mature. The initial focus (in Phase I) has been and continues to be on “augmentation” of weight of evidence — using genomic and related technologies qualitatively to increase the confidence in and scientific basis of the results of QRA. Efforts aimed towards “integration” of these data with traditional animal-based approaches, in particular quantitative predictors, or surrogates, for the in vivo toxicity data to which they have been anchored are just beginning to be explored now (in Phase II). In parallel, there is a recognized need for “expansion” of the use of established biomarkers of susceptibility or risk of human diseases and disorders for QRA, particularly for addressing the issues of cumulative assessment and population risk. Ultimately (in Phase III), substantial further advances could be realized by the development of novel molecular and pathway-based biomarkers and statistical and in silico models that build on anticipated progress in understanding the pathways of human diseases and disorders. Such efforts would facilitate a gradual “reorientation” of QRA towards approaches that more directly link environmental exposures to human outcomes.
NASA Astrophysics Data System (ADS)
French, Ben; Clarke, K. Robert; Platell, Margaret E.; Potter, Ian C.
2013-07-01
Many food webs are so complex that it is difficult to distinguish the relationships between predators and their prey. We have therefore developed an approach that produces a food web which clearly demonstrates the strengths of the relationships between the predator guilds of demersal fish and their prey guilds in a coastal ecosystem. Subjecting volumetric dietary data for 35 abundant predators along the lower western Australia coast to cluster analysis and the SIMPROF routine separated the various species × length class combinations into 14 discrete predator guilds. Following nMDS ordination, the sequence of points for these predator guilds represented a 'trophic' hierarchy. This demonstrated that, with increasing body size, several species progressed upwards through this hierarchy, reflecting a marked change in diet, whereas others remained within the same guild. A novel use of cluster analysis and SIMPROF then identified each group of prey that was ingested in a common pattern across the full suite of predator guilds. This produced 12 discrete groups of taxa (prey guilds) that each typically comprised similar ecological/functional prey, which were then also aligned in a hierarchy. The hierarchical arrangements of the predator and prey guilds were plotted against each other to show the percentage contribution of each prey guild to the diet of each predator guild. The resultant shade plot demonstrates quantitatively how food resources are spread among the fish species and revealed that two prey guilds, one containing cephalopods and teleosts and the other small benthic/epibenthic crustaceans and polychaetes, were consumed by all predator guilds.
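The guild-forming step (grouping predators by dietary similarity) can be sketched with standard hierarchical clustering; note that the study used the SIMPROF test to validate cluster structure, which plain scipy does not provide, and the toy diet matrix below is illustrative rather than data from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy diet matrix: rows are predator species x length-class combinations,
# columns are prey categories (percent volumetric contribution).
diets = np.array([
    [80, 10, 10,  0],   # cephalopod/teleost feeders
    [75, 15, 10,  0],
    [ 5, 10, 80,  5],   # small benthic-crustacean feeders
    [10,  5, 75, 10],
])

# Bray-Curtis dissimilarity is a common choice for compositional diet data.
d = pdist(diets, metric='braycurtis')
tree = linkage(d, method='average')

# Cut the dendrogram into two putative predator guilds.
guilds = fcluster(tree, t=2, criterion='maxclust')
```

The same machinery applied to the transposed matrix (prey in rows, ingestion pattern across predator guilds in columns) yields the prey guilds, and the two hierarchies cross-tabulated give the shade-plot food web described above.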
Antonucci, Francesca; Costa, Corrado; Aguzzi, Jacopo; Cataudella, Stefano
2009-07-01
In many fish species, morphological similarity can be considered a proxy for similarity in habitat use. The Sparidae family includes species recognized for common morphological features such as the structure and positioning of the fins and a specialized dentition. The aim of this study was to quantitatively describe the relationship of body shape morphology with habitat use, trophic level and systematics in the majority of known Sparidae species (N = 92). This ecomorphological comparison was performed with a geometric morphometric approach, considering as variables the Trophic Index (TROPH), the habitat (classified as demersal, benthopelagic or reef-associated) and the phylogenetic relationships of species at the subfamily level. The analysis by the TROPH variable showed a positive relation with shape, because the morphological features of all the species are strongly correlated with their trophic behavior (e.g., herbivorous species have a smaller mouth gape that enables them to feed upon sessile resources). The morphological analysis according to the habitat variable was used to classify species by feeding-habitat niche, in terms of the portion of the water column and seabed space where species mostly perform their behavioral activities. We described three kinds of morphological design, associated with benthopelagic, demersal and reef-associated habits. The six subfamily groups were morphologically well distinguishable, and the cladogram based on Mahalanobis morphological distances was compared with those proposed by other authors. We also quantified the phylogenetic relationships among the different subfamilies based on the analysis of shape in relation to trophic ecology, confirming the observations of those authors. PMID:19180528
NASA Technical Reports Server (NTRS)
Riha, Andrew P.
2005-01-01
As humans and robotic technologies are deployed in future constellation systems, differing traffic services will arise, e.g., realtime and non-realtime. In order to provide a quality of service framework that would allow humans and robotic technologies to interoperate over a wide and dynamic range of interactions, a method of classifying data as realtime or non-realtime is needed. In our paper, we present an approach that leverages the Consultative Committee for Space Data Systems (CCSDS) Advanced Orbiting Systems (AOS) data link protocol. Specifically, we redefine the AOS Transfer Frame Replay Flag in order to provide an automated store-and-forward approach on a per-service basis for use in the next-generation Interplanetary Network. In addition to addressing the problem of intermittent connectivity and associated services, we propose a follow-on methodology for prioritizing data through further modification of the AOS Transfer Frame.
Advances in Domain Connectivity for Overset Grids Using the X-Rays Approach
NASA Technical Reports Server (NTRS)
Chan, William M.; Kim, Noah; Pandya, Shishir A.
2012-01-01
Advances in automation and robustness of the X-rays approach to domain connectivity for overset grids are presented. Given the surface definition for each component that makes up a complex configuration, the determination of hole points with appropriate hole boundaries is automatically and efficiently performed. Improvements made to the original X-rays approach for identifying the minimum hole include an automated closure scheme for hole-cutters with open boundaries, automatic determination of grid points to be considered for blanking by each hole-cutter, and an adaptive X-ray map to economically handle components in close proximity. Furthermore, an automated spatially varying offset of the hole boundary from the minimum hole is achieved using a dual wall-distance function and an orphan point removal iteration process. Results using the new scheme are presented for a number of static and relative motion test cases on a variety of aerospace applications.
An Ontological-Fuzzy Approach to Advance Reservation in Multi-Cluster Grids
NASA Astrophysics Data System (ADS)
Ferreira, D. J.; Dantas, M. A. R.; Bauer, Michael A.
2010-11-01
Advance reservation is an important mechanism for successful utilization of available resources in distributed multi-cluster environments. This mechanism allows, for example, a user to provide parameters aimed at satisfying requirements related to applications' execution time and temporal dependence. This predictability can lead the system to reach higher levels of QoS. However, support for advance reservation has been restricted due to the complexity of large-scale configurations and the dynamic changes observed in these systems. In this research work, an advance reservation method based on an ontology-fuzzy approach is proposed. It allows a user to reserve a wide variety of resources and enables large jobs to be reserved across different nodes. In addition, it dynamically verifies the possibility of reservation with the local RMS, avoiding future allocation conflicts. Experimental results, obtained through simulation, indicate that the proposed mechanism reached a successful level of flexibility for large jobs and a more appropriate distribution of resources in a distributed multi-cluster configuration.
Fixing the system, not the women: an innovative approach to faculty advancement.
Morrissey, Claudia S; Schmidt, Mary Lou
2008-10-01
Women in academic medicine are approaching parity without power. Although the number of women choosing careers in medicine has grown substantially over the last 35 years, there has not been a commensurate increase in the percentage of women in senior leadership positions. To redress this situation at the University of Illinois College of Medicine (UICM), the Faculty Academic Advancement Committee (FAAC) was established in January 2003. FAAC's long-term goals are to create an institution whose faculty, department leaders, and deans reflect the gender and ethnic profile of the college's student body and to enable excellence in research, teaching, and patient care while promoting work/life balance. Commissioned as a Dean's Committee, FAAC brings together a diverse group of faculty and academic professionals from inside and outside the college to learn, reflect, and act. FAAC has committed to increasing the percentage of tenured women faculty and advancing women into leadership positions by carrying out an ambitious evidence-based institutional transformation effort. FAAC's initiatives (data gathering, constituency building, department transformation, policy reform, and advocacy) have helped to create an enabling environment for change at UICM. This case study outlines the history, conceptual approach, structure, initiatives, and initial outcomes of FAAC's efforts. PMID:18771391
Zajdlik, Barry Alan
2016-04-01
The species sensitivity distribution (SSD) approach to estimating water quality guidelines (WQGs) is the preferred method in all jurisdictions reviewed (Australia, Canada, New Zealand, Organisation for Economic Co-operation and Development [OECD] members, South Africa, United States) and is one of the recommended methods for European Commission members for 33 priority and priority hazardous substances. In the event that jurisdiction-specific criteria for data quality, quantity, and taxonomic representation are not met, all of these jurisdictions endorse the use of additional safety factors (SFs) applied to either the SSD-based WQG or the lowest suitable toxicity test endpoint. In Canada, the British Columbia Ministry of Environment endorses the latter approach as the preferred one, in the belief that WQGs so derived are more protective than SSD-based WQGs. The level of protection afforded by the SF approach was evaluated by statistically sampling minima from random samples of the following distributions: normal, Gumbel, logistic, and Weibull, using a range of coefficients of variation (CVs) and applying the SFs of 2 or 10 used in British Columbia. The simulations indicate that the potentially affected fraction of species (PAF) can be as high as 20% or can approach 0%. The PAF varies with sample size and CV. Because CVs can vary systematically with mode of toxic action, the PAF under SF-based WQGs can also vary systematically with analyte class. The varying levels of protection afforded by SF-based WQGs are generally inconsistent with the common water quality management goal that allows for only a small degree of change under long-term exposure. The findings suggest that further efforts be made to develop high-quality WQGs that support informed decision making and are consistent with the environmental management goal, instead of using SFs in the hope of achieving an acceptable but unknown degree of environmental protection. PMID:26272692
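The minimum-plus-safety-factor scheme evaluated above can be sketched in a few lines of numpy. Everything below is an illustrative reconstruction, not the paper's code: a normal SSD with an assumed mean of 1.0 is used (the study also examined Gumbel, logistic, and Weibull distributions), and the PAF is estimated empirically from a large reference sample of the same SSD.

```python
import numpy as np

rng = np.random.default_rng(42)

def paf_for_sf(sample_size, cv, sf, n_sim=20_000):
    """Estimate the potentially affected fraction (PAF) of species when the
    water quality guideline is set as (sample minimum) / (safety factor).

    Endpoints are drawn from a normal SSD with an illustrative mean of 1.0
    and the given coefficient of variation; the PAF is the fraction of a
    large reference species sample falling below the resulting guideline."""
    mean, sd = 1.0, cv
    draws = rng.normal(mean, sd, size=(n_sim, sample_size))
    wqg = draws.min(axis=1) / sf          # SF-based guideline, per simulation
    reference = np.sort(rng.normal(mean, sd, 200_000))
    paf = np.searchsorted(reference, wqg) / reference.size
    return paf.mean()

for n_tests in (5, 10, 30):
    for sf in (2, 10):
        print(f"n={n_tests:2d}  SF={sf:2d}  PAF={paf_for_sf(n_tests, cv=0.5, sf=sf):.3f}")
```

Consistent with the abstract's finding, the printed PAF values shift with sample size, CV, and safety factor rather than holding at one fixed level of protection.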
NASA Astrophysics Data System (ADS)
Trofimchuk, O.; Kaliukh, Iu.
2012-04-01
More than 90% of the territory of Ukraine has complex ground conditions. Unpredictable changes in the natural geological and man-made factors governing ground conditions may lead to dangerous deformation processes resulting in accidents and disasters. Among these, landslides rank first in Ukraine by the amount of damage inflicted, and second only to earthquakes worldwide. In total, about 23,000 landslides have been identified in the territory of Ukraine. The standard deterministic procedure for assessing slope stability, especially when reference engineering-geological data are lacking, often yields estimated stability coefficients that differ from the real ones. Applying a probabilistic approach makes it possible to account for the variable properties of soils and to determine the danger and risk of landslide displacements. The choice of landslide protection measures is directly connected with risk: expensive but reliable, or cheaper but with a greater probability of accidents. The risk determines the consequences, whether economic, social, or other, of a potential landslide displacement on the slope, both during construction of a retaining structure and during its subsequent maintenance. Risk determination essentially consists in studying and extrapolating past events for each specific occurrence. Expected conclusions and probable damages resulting from a calculated and accepted risk can be determined only with a certain level of uncertainty. Accordingly, improving the accuracy of numerical and analytical estimates when calculating the risk magnitude makes it possible to reduce this uncertainty. Calculations for the Chernivtsi shear landslides (Ukraine) were made using Plaxis software, and the risk of displacement was assessed for the typical distribution diagram of the landslide-prone slope. The calculations showed that seismic
NASA Technical Reports Server (NTRS)
Zhu, Dongming; Miller, Robert A.
2003-01-01
The development of low conductivity, robust thermal and environmental barrier coatings requires advanced testing techniques that can accurately and effectively evaluate coating thermal conductivity and cyclic resistance at very high surface temperatures (up to 1700 C) under large thermal gradients. In this study, a laser high-heat-flux test approach is established for evaluating advanced low conductivity, high temperature capability thermal and environmental barrier coatings under the NASA Ultra Efficient Engine Technology (UEET) program. The test approach emphasizes the real-time monitoring and assessment of the coating thermal conductivity, which initially rises under the steady-state high temperature thermal gradient test due to coating sintering, and later drops under the cyclic thermal gradient test due to coating cracking/delamination. The coating system is then evaluated based on damage accumulation and failure after the combined steady-state and cyclic thermal gradient tests. The lattice and radiation thermal conductivity of advanced ceramic coatings can also be evaluated using laser heat-flux techniques. The external radiation resistance of the coating is assessed based on the measured specimen temperature response under a laser-heated intense radiation-flux source. The coating internal radiation contribution is investigated based on the measured apparent coating conductivity increases with the coating surface test temperature under large thermal gradient test conditions. Since an increased radiation contribution is observed at these very high surface test temperatures, by varying the laser heat-flux and coating average test temperature, the complex relation between the lattice and radiation conductivity as a function of surface and interface test temperature may be derived.
Advanced materials synthesis at the nano and macro scale: An electrochemical approach
NASA Astrophysics Data System (ADS)
Arvin, Charles Leon
There are many environmentally demanding and specialized applications that require the synthesis of advanced materials which are either difficult to make or extremely expensive to produce on a large scale using standard methods such as integrated circuit fabrication. These applications range from the need to modify the surface properties of an alloy in order to inhibit corrosion processes, to reducing the size of a particular metal or semiconductor in order to confine electrons, which occurs as the length scale is reduced to between 1 and 20 nm. A multitude of tools, techniques and processing steps can be utilized to synthesize these materials. Electrochemical techniques offer an inexpensive method that leverages the large installed manufacturing base to modify the surface of materials and to produce materials with the necessary sizes. A methodology to electrochemically produce advanced materials was followed that (1) identified applications where advanced materials were necessary, (2) identified electrochemical techniques that could produce those materials, with appropriate templates providing the necessary control over size and geometry, (3) developed a general framework and/or simple one-dimensional model to understand which factors must be controlled or manipulated in order to produce an advanced material with the proper material performance and (4) finally, synthesized and evaluated these materials using electrochemical and surface analysis techniques. The versatility of this approach was shown through four applications that included (1) elimination or minimization of the environmentally hazardous Cr(III)/Cr(VI) redox couple from conversion coating formulations, (2) electrophoretic synthesis of ordered nano-arrays from colloidal materials for use as sensors, (3) synthesis of two- and three-dimensional electrodes for fuel cell applications, and (4) development of a process to produce semiconductor wires for improvements in photovoltaic devices and infrared
NASA Astrophysics Data System (ADS)
Ovidiu Vlad, Marcel; Schönfisch, Birgitt
1996-08-01
A mean-field approach for epidemic processes with high migration is suggested by analogy with non-equilibrium statistical mechanics. For large systems a limit of the thermodynamic type is introduced, for which both the total size of the system and the total number of individuals tend to infinity while the population density remains constant. In the thermodynamic limit the infection rate is proportional to the product of the proportion of individuals susceptible to infection and the average probability of infection. The limit form of the average probability of infection is insensitive to the detailed behaviour of the fluctuations of the number of infectious individuals and may belong to two universality classes: (1) if the fluctuation of the number of infectives is non-intermittent, the probability increases with the partial density of infectives and approaches the asymptotic value of one exponentially for large densities; (2) for intermittent fluctuations obeying a power-law scaling, the average probability of infection also displays a saturation effect for large densities of infectives, but the asymptotic value of one is approached according to a power law rather than exponentially. For low densities of infectives both expressions for the average probability of infection are linear functions of the proportion of infectives and the infection rate is given by the mass-action law.
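The two universality classes can be illustrated with simple closed forms sharing the linear low-density behaviour described above. The specific expressions and parameters below are illustrative stand-ins consistent with the abstract's description, not the paper's exact formulas:

```python
import numpy as np

def p_nonintermittent(rho, a=1.0):
    """Average infection probability, non-intermittent fluctuations:
    approaches the asymptotic value 1 exponentially in the density rho."""
    return 1.0 - np.exp(-a * rho)

def p_intermittent(rho, a=1.0, h=0.5):
    """Average infection probability, intermittent (power-law scaling)
    fluctuations: approaches 1 according to a power law instead."""
    return 1.0 - (1.0 + a * rho / h) ** (-h)

def infection_rate(susceptible_fraction, rho, p_func):
    """Thermodynamic-limit infection rate: proportional to the product of
    the susceptible proportion and the average infection probability."""
    return susceptible_fraction * p_func(rho)

# Both classes are linear (mass-action law) at low density of infectives ...
print(p_nonintermittent(0.01), p_intermittent(0.01))
# ... but the power-law class saturates towards 1 much more slowly.
print(p_nonintermittent(50.0), p_intermittent(50.0))
```

Both functions have the same slope `a` at the origin, so the low-density mass-action limit is shared, while only the approach to saturation distinguishes the two classes.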
Bilgraer, Raphaël; Gillet, Sylvie; Gil, Sophie; Evain-Brion, Danièle; Laprévote, Olivier
2014-11-01
While acting upon chromatin compaction, histone post-translational modifications (PTMs) are involved in modulating gene expression through histone-DNA affinity and protein-protein interactions. These dynamic and environment-sensitive modifications are constitutive of the histone code that reflects the transient transcriptional state of the chromatin. Here we describe a global screening approach for revealing epigenetic disruption at the histone level. This original approach enables fast and reliable relative abundance comparison of histone PTMs and variants in human cells within a single LC-MS experiment. As a proof of concept, we exposed BeWo human choriocarcinoma cells to sodium butyrate (SB), a universal histone deacetylase (HDAC) inhibitor. Histone acid-extracts (n = 45) equally representing 3 distinct classes, Control, 1 mM and 2.5 mM SB, were analysed using ultra-performance liquid chromatography coupled with a hybrid quadrupole time-of-flight mass spectrometer (UPLC-QTOF-MS). Multivariate statistics allowed us to discriminate control from treated samples based on differences in their mass spectral profiles. Several acetylated and methylated forms of core histones emerged as markers of sodium butyrate treatment. Indeed, this untargeted histonomic approach could be a useful exploratory tool in many cases of xenobiotic exposure when histone code disruption is suspected. PMID:25167371
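The multivariate discrimination step can be sketched with a toy numpy-only example: synthetic stand-ins for the 45 spectral profiles in three classes, with a dose-dependent shift in a few "acetylation marker" features, separated by PCA via SVD. All data sizes, effect magnitudes, and feature counts here are invented for illustration and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for 45 histone LC-MS profiles in 3 classes (Control, 1 mM,
# 2.5 mM SB): treatment shifts the intensity of a few marker features.
n_per_class, n_features = 15, 50
doses = np.array([0.0, 1.0, 2.5])
profiles, labels = [], []
for cls, dose in enumerate(doses):
    x = rng.normal(0, 1, (n_per_class, n_features))
    x[:, :5] += 3 * dose            # dose-dependent marker features
    profiles.append(x)
    labels += [cls] * n_per_class
X = np.vstack(profiles)

# PCA via SVD of the mean-centred data matrix
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ vt[:2].T              # first two principal-component scores

# Class separation along PC1 mirrors the dose-dependent markers
for cls in range(3):
    print(cls, scores[np.array(labels) == cls, 0].mean().round(2))
```

As in the study, unsupervised projection alone separates control from treated samples because the treatment effect dominates the spectral variance.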
NASA Astrophysics Data System (ADS)
Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Wu, S.; Schwaerz, M.; Fritzer, J.; Zhang, S.; Carter, B. A.; Zhang, K.
2013-12-01
Global Navigation Satellite System (GNSS)-based radio occultation (RO) is a satellite remote sensing technique providing accurate profiles of the Earth's atmosphere for weather and climate applications. Above about 30 km altitude, however, statistical optimization is a critical process for initializing the RO bending angles in order to optimize the climate monitoring utility of the retrieved atmospheric profiles. Here we introduce an advanced dynamic statistical optimization algorithm, which uses bending angles from multiple days of European Centre for Medium-Range Weather Forecasts (ECMWF) short-range forecast and analysis fields, together with averaged observed bending angles, to obtain background profiles and associated error covariance matrices with geographically varying background uncertainty estimates on a daily updated basis. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.4 (OPSv5.4) algorithm, using several days of simulated MetOp and observed CHAMP and COSMIC data, for January and July conditions. We find the following for the new method's performance compared to OPSv5.4: (1) it significantly reduces random errors (standard deviations), down to about half their size, and leaves smaller or about equal residual systematic errors (biases) in the optimized bending angles; (2) the dynamic (daily) estimate of the background error correlation matrix alone already improves the optimized bending angles; (3) the subsequently retrieved refractivity profiles and atmospheric (temperature) profiles benefit from improved error characteristics, especially above about 30 km. Based on these encouraging results, we are working to employ similar dynamic error covariance estimation for the observed bending angles as well, and to apply the method to full months and subsequently to entire climate data records.
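The core of any statistical optimization step of this kind is an inverse-covariance-weighted blend of observed and background bending-angle profiles. The sketch below is a generic textbook formulation, not the OPSv5.4 or the new dynamic algorithm; the exponential correlation model, correlation length, and altitude grid are assumptions made for illustration.

```python
import numpy as np

def optimized_profile(obs, bg, sigma_obs, sigma_bg, alt, corr_length=6.0):
    """Combine observed and background bending-angle profiles using
    inverse-covariance weighting (generic statistical optimization).

    sigma_obs, sigma_bg: 1-sigma errors per level; alt: altitudes (km);
    corr_length: assumed background error correlation length (km)."""
    # Background covariance with exponential inter-level correlations
    corr = np.exp(-np.abs(alt[:, None] - alt[None, :]) / corr_length)
    cov_bg = np.outer(sigma_bg, sigma_bg) * corr
    cov_obs = np.diag(sigma_obs ** 2)      # uncorrelated observation errors
    gain = cov_bg @ np.linalg.inv(cov_bg + cov_obs)
    return bg + gain @ (obs - bg)

alt = np.linspace(30.0, 80.0, 6)
bg = np.full(6, 1.0)
obs = np.full(6, 2.0)
# With equal error estimates the result blends background and observation
print(optimized_profile(obs, bg, np.ones(6), np.ones(6), alt))
```

A better (daily, geographically varying) estimate of `cov_bg` changes the gain matrix and hence the weight given to the background, which is the mechanism behind finding (2) above.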
A statistical approach for validating eSOTER and digital soil maps in front of traditional soil maps
NASA Astrophysics Data System (ADS)
Bock, Michael; Baritz, Rainer; Köthe, Rüdiger; Melms, Stephan; Günther, Susann
2015-04-01
During the European research project eSOTER, three different Digital Soil Maps (DSM) were developed for the pilot area Chemnitz 1:250,000 (FP7 eSOTER project, grant agreement nr. 211578). The core task of the project was to revise the SOTER method for the interpretation of soil and terrain data. One of the working hypotheses was that eSOTER does not only provide terrain data with typical soil profiles, but that the new products actually perform like a conceptual soil map. The three eSOTER maps for the pilot area differed considerably in spatial representation and content of soil classes. In this study we compare the three eSOTER maps against existing reconnaissance soil maps, keeping in mind that traditional soil maps have many subjective issues and intended bias regarding the overestimation and emphasis of certain features. Hence, a true validation of the proper representation of modeled soil maps is hardly possible; rather, a statistical comparison between modeled and empirical approaches is. If eSOTER data represent conceptual soil maps, then different eSOTER, DSM and conventional maps from various sources and different regions could be harmonized towards consistent new data sets for large areas, including the whole European continent. One of the eSOTER maps was developed closely following the traditional SOTER method: terrain classification data (derived from the SRTM DEM) were combined with lithology data (a re-interpreted geological map); the corresponding terrain units were then extended with soil information, using a very dense regional soil profile data set to define soil mapping units based on a statistical grouping of terrain units. The second map is a pure DSM map using continuous terrain parameters instead of terrain classification; radiospectrometric data were used to supplement parent material information from geology maps. The classification method Random Forest was used. The third approach predicts soil diagnostic properties based on
NASA Astrophysics Data System (ADS)
Panagopoulos, George P.
2014-10-01
The multivariate statistical techniques applied to quarterly water consumption data in Mytilene provide valuable tools that could help the local authorities design strategies aimed at the sustainable development of urban water resources. The proposed methodology is an innovative approach, applied for the first time in the international literature, to handling urban water consumption data in order to analyze statistically the interrelationships among the determinants of urban water use. Factor analysis of demographic, socio-economic and hydrological variables shows that total water consumption in Mytilene is the combined result of increases in (a) income, (b) population, (c) connections and (d) climate parameters. On the other hand, the per-connection water demand is influenced by variations in water prices, but with different consequences in each consumption class. Large consumers respond to increases in water prices by reducing their consumption rates and moving to lower consumption blocks. These shifts are responsible for the increase in the average consumption values in the lower blocks despite the increase in the marginal prices.
NASA Astrophysics Data System (ADS)
Mfumu Kihumba, Antoine; Ndembo Longo, Jean; Vanclooster, Marnik
2016-03-01
A multivariate statistical modelling approach was applied to explain the anthropogenic pressure of nitrate pollution on the Kinshasa groundwater body (Democratic Republic of Congo). Multiple regression and regression tree models were compared and used to identify major environmental factors that control the groundwater nitrate concentration in this region. The analyses were made in terms of physical attributes related to the topography, land use, geology and hydrogeology in the capture zone of different groundwater sampling stations. For the nitrate data, groundwater datasets from two different surveys were used. The statistical models identified the topography, the residential area, the service land (cemetery), and the surface-water land-use classes as major factors explaining nitrate occurrence in the groundwater. Also, groundwater nitrate pollution depends not on one single factor but on the combined influence of factors representing nitrogen loading sources and aquifer susceptibility characteristics. The groundwater nitrate pressure was better predicted with the regression tree model than with the multiple regression model. Furthermore, the results elucidated the sensitivity of the model performance towards the method of delineation of the capture zones. For pollution modelling at the monitoring points, therefore, it is better to identify capture-zone shapes based on a conceptual hydrogeological model rather than to adopt arbitrary circular capture zones.
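The comparison of a multiple regression model with a regression tree can be sketched with a self-contained numpy example. The data here are a synthetic stand-in for the Kinshasa dataset (all variables, thresholds, and magnitudes are invented): nitrate is driven by a threshold effect of residential land use, which a single-split tree captures better than a linear fit, mirroring the abstract's finding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: nitrate concentration driven by a threshold effect of
# residential land use plus a mild topographic trend and noise.
n = 300
topo = rng.uniform(0, 1, n)          # topographic attribute
residential = rng.uniform(0, 1, n)   # residential land-use fraction
nitrate = (np.where(residential > 0.5, 40.0, 10.0)
           + 5 * topo + rng.normal(0, 2, n))

X = np.column_stack([np.ones(n), topo, residential])

# Multiple linear regression (ordinary least squares)
beta, *_ = np.linalg.lstsq(X, nitrate, rcond=None)
rmse_lin = np.sqrt(np.mean((X @ beta - nitrate) ** 2))

# Minimal regression tree: a single split on residential land use
def stump_predict(x, threshold, y):
    left, right = y[x <= threshold], y[x > threshold]
    return np.where(x <= threshold, left.mean(), right.mean())

pred_tree = stump_predict(residential, 0.5, nitrate)
rmse_tree = np.sqrt(np.mean((pred_tree - nitrate) ** 2))

print(f"linear RMSE: {rmse_lin:.2f}, tree RMSE: {rmse_tree:.2f}")
```

The tree wins here because the underlying relationship is a step, not a line; when drivers combine additively and smoothly, the ranking can reverse, which is why the study compared both model families.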
NASA Astrophysics Data System (ADS)
Chen, Y. Z.; Zhang, Yong-Li
1997-06-01
Equilibrium binding constants of the anticancer drug daunomycin, bound to several GC-containing polymeric DNAs (G represents guanine and C cytosine), are calculated by means of a microscopic statistical mechanics approach based on observed x-ray crystal structures. Our calculation shows base sequence specificity of daunomycin in agreement with the observations. We find the drug binding constant to be sensitive to the base composition of the host sequence. The binding stability decreases in the order of CGTACG, CGATCG, and CGGCCG, which is consistent with observations (T represents thymine and A adenine). This binding specificity arises from sequence-specific hydrogen bond and nonbonded interactions between the drug and a host DNA. These interactions are affected by sequence-specific structural features revealed by x-ray crystallography. The agreement between our calculations and experiments shows that our method has practical application in analyzing the sequence-specific binding stability of anticancer drugs.
Adams, Michael C; Barbano, David M
2015-06-01
Our objective was to develop a statistical approach that could be used to determine whether a handler's fat, protein, or other solids mid-infrared (MIR) spectrophotometer test values were different, on average, from a milk regulatory laboratory's MIR test values when split-sampling test values are not available. To accomplish this objective, the PROC GLM procedure of SAS (SAS Institute Inc., Cary, NC) was used to develop a multiple linear regression model to evaluate 4 mo of MIR producer payment testing data (112 to 167 producers per month) from 2 different MIR instruments. For each of the 4 mo and each of the 2 components (fat or protein), the GLM model was Response=Instrument+Producer+Date+2-Way Interactions+3-Way Interaction. Instrument was significant in determining fat and protein tests for 3 of the 4 mo, and Producer was significant in determining fat and protein tests for all 4 mo. This model was also used to establish fat and protein least significant differences (LSD) between instruments. Fat LSD between instruments ranged from 0.0108 to 0.0144% (α=0.05) for the 4 mo studied, whereas protein LSD between instruments ranged from 0.0046 to 0.0085% (α=0.05). In addition, regression analysis was used to determine the effects of component concentration and date of sampling on fat and protein differences between 2 MIR instruments. This statistical approach could be performed monthly to document a regulatory laboratory's verification that a given handler's instrument has obtained a different test result, on average, from that of the regulatory laboratory, and that an adjustment to producer payment may be required. PMID:25828652
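The least-significant-difference test between two instruments' means can be sketched with the standard two-sample LSD formula. This is a generic textbook form, not the SAS output: the paper's LSDs come from the fitted GLM's residual mean square, and the critical value 1.96 used below is only a large-degrees-of-freedom approximation of the t quantile at alpha = 0.05. All numeric inputs in the demo are hypothetical.

```python
import numpy as np

def lsd(mse, n_per_instrument, t_crit=1.96):
    """Least significant difference between two instruments' mean tests.

    mse: residual mean square from the fitted model (units: %^2)
    n_per_instrument: producer samples measured on each instrument
    t_crit: critical t value; 1.96 approximates alpha = 0.05 for the large
    residual degrees of freedom typical of 100+ producers."""
    return t_crit * np.sqrt(2.0 * mse / n_per_instrument)

def instruments_differ(mean_a, mean_b, mse, n):
    """Flag a statistically significant difference in mean component tests."""
    return abs(mean_a - mean_b) > lsd(mse, n)

# Hypothetical monthly fat data: 150 producers, residual MSE of 0.0001 %^2
print(round(lsd(0.0001, 150), 4))                      # LSD in % fat
print(instruments_differ(3.652, 3.661, 0.0001, 150))   # handler vs. lab mean
```

In a monthly workflow, the regulatory laboratory would recompute the MSE from that month's paired model fit and flag any handler instrument whose mean deviates by more than the LSD.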
Moya, Claudio E; Raiber, Matthias; Taulis, Mauricio; Cox, Malcolm E
2015-03-01
The Galilee and Eromanga basins are sub-basins of the Great Artesian Basin (GAB). In this study, a multivariate statistical approach (hierarchical cluster analysis, principal component analysis and factor analysis) is carried out to identify hydrochemical patterns and assess the processes that control hydrochemical evolution within key aquifers of the GAB in these basins. The results of the hydrochemical assessment are integrated into a previously developed 3D geological model to support the analysis of spatial patterns of hydrochemistry, and to identify the hydrochemical and hydrological processes that control hydrochemical variability. In this area of the GAB, the hydrochemical evolution of groundwater is dominated by evapotranspiration near the recharge area, resulting in a dominance of Na-Cl water types. This is shown conceptually using two selected cross-sections which represent discrete groundwater flow paths from the recharge areas to the deeper parts of the basins. With increasing distance from the recharge area, a shift towards a dominance of carbonate (e.g. the Na-HCO3 water type) has been observed. The assessment of hydrochemical changes along groundwater flow paths highlights how aquifers are separated in some areas, and how mixing between groundwater from different aquifers occurs elsewhere, controlled by geological structures, including between GAB aquifers and coal-bearing strata of the Galilee Basin. The results of this study suggest that distinct hydrochemical differences can be observed within the previously defined Early Cretaceous-Jurassic aquifer sequence of the GAB. A revision of the two previously recognised hydrochemical sequences is proposed, resulting in three hydrochemical sequences based on systematic differences in hydrochemistry, salinity and dominant hydrochemical processes. The integrated approach presented in this study which combines different complementary multivariate statistical techniques with a detailed assessment of the