Goedert, Kelly M.; Boston, Raymond C.; Barrett, A. M.
2013-01-01
Valid research on neglect rehabilitation demands a statistical approach commensurate with the characteristics of neglect rehabilitation data: neglect arises from impairment in distinct brain networks leading to large between-subject variability in baseline symptoms and recovery trajectories. Studies enrolling medically ill, disabled patients may suffer from missing or unbalanced data and small sample sizes. Finally, assessment of rehabilitation requires a description of continuous recovery trajectories. Unfortunately, the statistical method currently employed in most studies of neglect treatment [repeated measures analysis of variance (ANOVA), rANOVA] does not adequately address these issues. Here we review an alternative, mixed linear modeling (MLM), which is more appropriate for assessing change over time. MLM better accounts for between-subject heterogeneity in baseline neglect severity and in recovery trajectory. MLM does not require complete or balanced data, nor does it make strict assumptions regarding the data structure. Furthermore, because MLM better models between-subject heterogeneity, it often results in increased power to observe treatment effects with smaller samples. After reviewing current practices in the field, and the assumptions of rANOVA, we provide an introduction to MLM. We review its assumptions, uses, advantages, and disadvantages. Using real and simulated data, we illustrate how MLM may improve the ability to detect effects of treatment over rANOVA, particularly with the small samples typical of neglect research. Furthermore, our simulation analyses result in recommendations for the design of future rehabilitation studies. Because between-subject heterogeneity is one important reason why studies of neglect treatments often yield conflicting results, employing statistical procedures that model this heterogeneity more accurately will increase the efficiency of our efforts to find treatments to improve the lives of individuals with neglect.
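The random-intercept/random-slope model that MLM fits can be written as follows (a generic sketch in illustrative notation, not necessarily the authors' exact specification):

```latex
% Score of patient i at measurement time t_{ij}: a fixed group-level
% trajectory plus patient-specific random intercept u_{0i} and slope u_{1i}
y_{ij} = (\beta_0 + u_{0i}) + (\beta_1 + u_{1i})\, t_{ij} + \varepsilon_{ij},
\qquad
(u_{0i}, u_{1i}) \sim \mathcal{N}(\mathbf{0}, \Sigma),
\qquad
\varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2)
```

The random effects $u_{0i}$ and $u_{1i}$ are what absorb the between-subject heterogeneity in baseline severity and recovery rate that rANOVA cannot model, and because the likelihood is evaluated per observation, patients with missing visits still contribute their available data.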
2012-01-01
assume that the NSMS can be approximated by a series of expansion functions F_m(·), summed over m = 1, …, M (Eq. 31). UXO...a receiver coil is the electromotive force given by the negative of the time derivative of the secondary magnetic flux through the coil. Since the...statistical signal processing. MM-1572 Final Report, Sky Research, Inc., January 2012. A support vector machine learns from data: when fed a series
Wild, M.; Rouhani, S.
1995-02-01
A typical site investigation entails extensive sampling and monitoring. In the past, sampling plans have been designed on purely ad hoc bases, leading to significant expenditures and, in some cases, collection of redundant information. In many instances, sampling costs exceed the true worth of the collected data. The US Environmental Protection Agency (EPA) therefore has advocated the use of geostatistics to provide a logical framework for sampling and analysis of environmental data. Geostatistical methodology uses statistical techniques for the spatial analysis of a variety of earth-related data. The use of geostatistics was developed by the mining industry to estimate ore concentrations. The same procedure is effective in quantifying environmental contaminants in soils for risk assessments. Unlike classical statistical techniques, geostatistics offers procedures to incorporate the underlying spatial structure of the investigated field. Sample points spaced close together tend to be more similar than samples spaced farther apart. This can guide sampling strategies and determine complex contaminant distributions. Geostatistical techniques can be used to evaluate site conditions on the basis of regular, irregular, random, and even spatially biased samples. In most environmental investigations, it is desirable to concentrate sampling in areas of known or suspected contamination. The rigorous mathematical procedures of geostatistics allow for accurate estimates at unsampled locations, potentially reducing sampling requirements. The use of geostatistics serves as a decision-aiding and planning tool that can significantly reduce short-term site assessment costs and long-term sampling and monitoring needs, as well as lead to more accurate and realistic remedial design criteria.
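The spatial-similarity idea above ("close samples are more alike than distant ones") is usually quantified with an empirical semivariogram before any kriging is attempted. A minimal pure-Python sketch on made-up transect data (all names and numbers are illustrative, not from the abstract):

```python
import math

def semivariogram(points, values, lags, tol):
    """Empirical semivariance gamma(h): the mean of 0.5*(z_i - z_j)^2
    over all sample pairs whose separation is within tol of lag h."""
    gamma = []
    for h in lags:
        sq = [0.5 * (values[i] - values[j]) ** 2
              for i in range(len(points))
              for j in range(i + 1, len(points))
              if abs(math.dist(points[i], points[j]) - h) <= tol]
        gamma.append(sum(sq) / len(sq) if sq else float("nan"))
    return gamma

# Toy 1-D transect: concentration varies smoothly with position, so the
# semivariance should grow with lag (i.e., spatial correlation decays).
pts = [(float(x), 0.0) for x in range(10)]
vals = [0.1 * x * x for x in range(10)]
print(semivariogram(pts, vals, lags=[1.0, 3.0, 5.0], tol=0.5))
```

A fitted model of this curve is what supplies the weights for kriging estimates at unsampled locations.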
Intermediate/Advanced Research Design and Statistics
NASA Technical Reports Server (NTRS)
Ploutz-Snyder, Robert
2009-01-01
The purpose of this module is to provide Institutional Researchers (IRs) with an understanding of the principles of advanced research design and of the intermediate/advanced statistical procedures consistent with such designs.
Recent advances in statistical energy analysis
NASA Technical Reports Server (NTRS)
Heron, K. H.
1992-01-01
Statistical Energy Analysis (SEA) has traditionally been developed using a modal summation and averaging approach, which has led to the need for many restrictive SEA assumptions. The assumption of 'weak coupling' is particularly unacceptable when attempts are made to apply SEA to structural coupling. It is now believed that this assumption is more a function of the modal formulation than a necessary restriction of SEA. The present analysis ignores this restriction and describes a wave approach to the calculation of plate-plate coupling loss factors. Predictions based on this method are compared with results obtained from experiments using point excitation on one side of an irregular six-sided box structure. Conclusions show that the use and calculation of infinite transmission coefficients is the way forward for the development of a purely predictive SEA code.
Deterministic and Advanced Statistical Modeling of Wind-Driven Sea
2015-07-06
Period covered: 01/09/2010-06/07/2015. Technical Report: Deterministic and advanced statistical modeling of wind-driven sea. Vladimir Zakharov, Andrei Pushkarev, Waves and Solitons LLC, 1719 W...Development of accurate and fast advanced statistical and dynamical nonlinear models of ocean surface waves, based on first physical principles, which will
Writing to Learn Statistics in an Advanced Placement Statistics Course
ERIC Educational Resources Information Center
Northrup, Christian Glenn
2012-01-01
This study investigated the use of writing in a statistics classroom to learn if writing provided a rich description of problem-solving processes of students as they solved problems. Through analysis of 329 written samples provided by students, it was determined that writing provided a rich description of problem-solving processes and enabled…
Using Hypertext To Develop an Algorithmic Approach to Teaching Statistics.
ERIC Educational Resources Information Center
Halavin, James; Sommer, Charles
Hypertext, and its more advanced form Hypermedia, represent a powerful authoring tool with great potential for allowing statistics teachers to develop documents to assist students in an algorithmic fashion. An introduction to the use of Hypertext is presented, with an example of its use. Hypertext is an approach to information management in which…
Enhanced bio-manufacturing through advanced multivariate statistical technologies.
Martin, E B; Morris, A J
2002-11-13
The paper describes the interrogation of data, from a reaction vessel producing an active pharmaceutical ingredient (API), using advanced multivariate statistical techniques. Due to the limited number of batches available, data augmentation was used to increase the number of batches, thereby enabling the extraction of more subtle process behaviour from the data. A second methodology investigated was that of multi-group modelling. This allowed between-cluster variability to be removed, thus allowing attention to focus on within-process variability. The paper describes how the different approaches enabled a better understanding of the factors causing the onset of impurity formation, as well as demonstrating the power of multivariate statistical data analysis techniques to provide an enhanced understanding of the process.
Conceptualizing a Framework for Advanced Placement Statistics Teaching Knowledge
ERIC Educational Resources Information Center
Haines, Brenna
2015-01-01
The purpose of this article is to sketch a conceptualization of a framework for Advanced Placement (AP) Statistics Teaching Knowledge. Recent research continues to problematize the lack of knowledge and preparation among secondary level statistics teachers. The College Board's AP Statistics course continues to grow and gain popularity, but is a…
Advanced Algorithms and Statistics for MOS Surveys
NASA Astrophysics Data System (ADS)
Bolton, A. S.
2016-10-01
This paper presents an individual view on the current state of computational data processing and statistics for inference and discovery in multi-object spectroscopic surveys, supplemented by a historical perspective and a few present-day applications. It is more op-ed than review, and hopefully more readable as a result.
Advance Report of Final Mortality Statistics, 1985.
ERIC Educational Resources Information Center
Monthly Vital Statistics Report, 1987
1987-01-01
This document presents mortality statistics for 1985 for the entire United States. Data analysis and discussion of the following factors are included: death and death rates; death rates by age, sex, and race; expectation of life at birth and at specified ages; causes of death; infant mortality; and maternal mortality. Highlights reported include: (1) the…
Simulating Fibre Suspensions: Lagrangian versus Statistical Approach
NASA Astrophysics Data System (ADS)
Zhao, L. H.; Andersson, H. I.; Gillissen, J. J. J.; Boersma, B. J.
Fibre suspensions exhibit complex dynamical flow phenomena and are at the same time of immense practical importance, notably in the pulp and paper industries. NTNU and TU Delft have in a collaborative research project adopted two alternative strategies in the simulation of dilute fibre suspensions, namely a statistical approach [2] and a Lagrangian particle treatment [4]. The two approaches have their own advantages and disadvantages. In this paper we aim for the first time to compare the performance of the two.
Reconciling statistical and systems science approaches to public health.
Ip, Edward H; Rahmandad, Hazhir; Shoham, David A; Hammond, Ross; Huang, Terry T-K; Wang, Youfa; Mabry, Patricia L
2013-10-01
Although systems science has emerged as a set of innovative approaches to study complex phenomena, many topically focused researchers, including clinicians and scientists working in public health, are somewhat befuddled by this methodology, which at times appears radically different from the analytic methods, such as statistical modeling, to which they are accustomed. There also appear to be conflicts between complex systems approaches and traditional statistical methodologies, both in terms of their underlying strategies and the languages they use. We argue that the conflicts are resolvable, and the sooner the better for the field. In this article, we show how statistical and systems science approaches can be reconciled, and how together they can advance solutions to complex problems. We do this by comparing the methods within a theoretical framework based on the work of population biologist Richard Levins. We present different types of models as representing different tradeoffs among the four desiderata of generality, realism, fit, and precision.
A Review of Graphical Approaches to Common Statistical Analyses
Coman, Emil N.; Suggs, L. Suzanne; Coman, Maria A.; Iordache, Eugen; Fifield, Judith
2015-01-01
We provide a comprehensive review of simple and advanced statistical analyses using an intuitive visual approach that explicitly models Latent Variables (LV). This method can better illuminate what is assumed in each analytical method and what is actually estimated, by translating the causal relationships embedded in the graphical models into equation form. We recommend the graphical display rooted in the century-old path analysis, which details all parameters of each statistical model, and suggest labeling that clarifies what is given vs. what is estimated. In the process, we link classical and modern analyses under the encompassing broader umbrella of Generalized Latent Variable Modeling, and demonstrate that LVs are omnipresent in all statistical approaches, yet until they are directly ‘seen’ in visual graphical displays, they are unnecessarily overlooked. The advantages of directly modeling LVs are shown with examples of analyses from the ActiveS intervention designed to increase physical activity. PMID:26688834
Introducing linear functions: an alternative statistical approach
NASA Astrophysics Data System (ADS)
Nolan, Caroline; Herbert, Sandra
2015-12-01
The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be `threshold concepts'. There is recognition that linear functions can be taught in context through the exploration of linear modelling examples, but this has its limitations. Currently, statistical data is easily attainable, and graphics or computer algebra system (CAS) calculators are common in many classrooms. The use of this technology provides ease of access to different representations of linear functions as well as the ability to fit a least-squares line to real-life data. This means these calculators could support a possible alternative approach to the introduction of linear functions. This study compares the results of an end-of-topic test for two classes of Australian middle secondary students at a regional school to determine if such an alternative approach is feasible. In this study, test questions were grouped by concept and subjected to a concept-by-concept analysis of the means of the test results of the two classes. This analysis revealed that the students following the alternative approach demonstrated greater competence with non-standard questions.
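The least-squares line such calculators fit has a simple closed form, which can be computed directly; a pure-Python sketch (the data are illustrative):

```python
def least_squares_line(xs, ys):
    """Closed-form least-squares fit of y ~ a + b*x:
    slope b = cov(x, y) / var(x), intercept a = mean(y) - b*mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Toy data lying exactly on y = 2x + 1, so the fit recovers a=1, b=2.
a, b = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)
```

Students can compare these hand-computed coefficients with the line their CAS calculator reports for the same data.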
On statistical approaches to climate change analysis
NASA Astrophysics Data System (ADS)
Lee, Terry Chun Kit
are compared based on theoretical grounds and through simulation studies. The two new estimation approaches generally perform better than the existing approach. A number of studies have attempted to reconstruct hemispheric mean temperature for the past millennium from proxy climate indicators. Different statistical methods are used in these studies, and it therefore seems natural to ask which method is more reliable. An empirical comparison between the different reconstruction methods is considered using both climate model data and real-world paleoclimate proxy data. The proposed state-space model approach and the RegEM method generally perform better than their competitors when reconstructing interannual variations in Northern Hemispheric mean surface air temperature. On the other hand, a variety of methods are seen to perform well when reconstructing decadal temperature variability. The similarity in performance provides evidence that the difference between many real-world reconstructions is more likely to be due to the choice of the proxy series, or the use of different target seasons or latitudes, than to the choice of statistical method.
Robot Trajectories Comparison: A Statistical Approach
Ansuategui, A.; Arruti, A.; Susperregi, L.; Yurramendi, Y.; Jauregi, E.; Lazkano, E.; Sierra, B.
2014-01-01
The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to robot trajectory comparison that can be applied to any kind of trajectory, in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization, named polygraph, is provided to help better understand the obtained results. The proposed method has been applied, as an example, to compare two different motion planners, FM2 and WaveFront, using different environments, robots, and local planners. PMID:25525618
Multivariate analysis: A statistical approach for computations
NASA Astrophysics Data System (ADS)
Michu, Sachin; Kaushik, Vandana
2014-10-01
Multivariate analysis is a statistical approach commonly used in automotive diagnosis, education, evaluating clusters in finance, etc., and more recently in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion of factor analysis (FA) in image retrieval and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database, based on their content. The problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks on the network, such as DDoS attacks and network scanning.
Phase statistics approach to human ventricular fibrillation
NASA Astrophysics Data System (ADS)
Wu, Ming-Chya; Watanabe, Eiichi; Struzik, Zbigniew R.; Hu, Chin-Kun; Yamamoto, Yoshiharu
2009-11-01
Ventricular fibrillation (VF) is known to be the most dangerous cardiac arrhythmia, frequently leading to sudden cardiac death (SCD). During VF, cardiac output drops to nil and, unless the fibrillation is promptly halted, death usually ensues within minutes. While delivering life-saving electrical shocks is a method of preventing SCD, it has been recognized that some, though not many, VF episodes are self-terminating, and understanding the mechanism of spontaneous defibrillation might provide newer therapeutic options for treatment of this otherwise fatal arrhythmia. Using the phase statistics approach, recently developed to study financial and physiological time series, here we reveal the timing characteristics of transient features of ventricular tachyarrhythmia (mostly VF) electrocardiograms (ECG) and find that there are three distinct types of probability density function (PDF) of phase distributions: uniform (UF), concave (CC), and convex (CV). Our data show that VF patients with UF or CC types of PDF have approximately the same probability of survival and nonsurvival, while VF patients with CV type PDF have zero probability of survival, implying that their VF episodes are never self-terminating. Our results suggest that detailed phase statistics of human ECG data may be a key to understanding the mechanism of spontaneous defibrillation of fatal VF.
Intelligence and embodiment: a statistical mechanics approach.
Chinea, Alejandro; Korutcheva, Elka
2013-04-01
Evolutionary neuroscience has been mainly dominated by the principle of phylogenetic conservation, specifically, by the search for similarities in brain organization. This principle states that closely related species tend to be similar because they have a common ancestor. However, explaining, for instance, behavioral differences between humans and chimpanzees has been revealed to be notoriously difficult. In this paper, the hypothesis of a common information-processing principle exploited by the brains produced by natural evolution is explored. A model combining recent advances in cognitive psychology and evolutionary neuroscience is presented. The macroscopic effects associated with the intelligence-like structures postulated by the model are analyzed from a statistical mechanics point of view. As a result of this analysis, some plausible explanations are put forward concerning the disparities and similarities in cognitive capacities which are observed in nature across species. Furthermore, an interpretation of the efficiency of the brain's computations is also provided. These theoretical results and their implications for modern theories of intelligence are shown to be consistent with the formulated hypothesis.
Uncertainty quantification approaches for advanced reactor analyses.
Briggs, L. L.; Nuclear Engineering Division
2009-03-24
The original approach to nuclear reactor design or safety analyses was to make very conservative modeling assumptions so as to ensure meeting the required safety margins. Traditional regulation, as established by the U.S. Nuclear Regulatory Commission, required conservatisms which have subsequently been shown to be excessive. The commission has therefore moved away from excessively conservative evaluations and has determined best-estimate calculations to be an acceptable alternative to conservative models, provided the best-estimate results are accompanied by an uncertainty evaluation which can demonstrate that, when a set of analysis cases which statistically account for uncertainties of all types are generated, there is a 95% probability that at least 95% of the cases meet the safety margins. To date, nearly all published work addressing uncertainty evaluations of nuclear power plant calculations has focused on light water reactors and on large-break loss-of-coolant accident (LBLOCA) analyses. However, there is nothing in the uncertainty evaluation methodologies that is limited to a specific type of reactor or to specific types of plant scenarios. These same methodologies can be equally well applied to analyses for high-temperature gas-cooled reactors and to liquid metal reactors, and they can be applied to steady-state calculations, operational transients, or severe accident scenarios. This report reviews and compares both statistical and deterministic uncertainty evaluation approaches. Recommendations are given for selection of an uncertainty methodology and for considerations to be factored into the process of evaluating uncertainties for advanced reactor best-estimate analyses.
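The 95%/95% criterion mentioned above is commonly satisfied in the nonparametric setting via Wilks-type run counts: the smallest number of sampled code runs n such that the largest observed value bounds the 95th percentile with 95% confidence, i.e. 1 - 0.95^n >= 0.95. This connection is standard, but the snippet below is an illustrative sketch, not taken from the report:

```python
def wilks_n(coverage=0.95, confidence=0.95):
    """Smallest number of runs n such that the sample maximum is a
    one-sided, first-order tolerance bound: 1 - coverage**n >= confidence."""
    n = 1
    while 1 - coverage ** n < confidence:
        n += 1
    return n

# The classic result: 59 runs suffice for a first-order 95%/95% statement.
print(wilks_n())
```

Higher-order bounds (using the second-largest value, etc.) trade more runs for a less conservative estimate, which is one of the methodology choices such a report weighs.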
Project T.E.A.M. (Technical Education Advancement Modules). Advanced Statistical Process Control.
ERIC Educational Resources Information Center
Dunlap, Dale
This instructional guide, one of a series developed by the Technical Education Advancement Modules (TEAM) project, is a 20-hour advanced statistical process control (SPC) and quality improvement course designed to develop the following competencies: (1) understanding quality systems; (2) knowing the process; (3) solving quality problems; and (4)…
ERIC Educational Resources Information Center
McGrath, April L.; Ferns, Alyssa; Greiner, Leigh; Wanamaker, Kayla; Brown, Shelley
2015-01-01
In this study we assessed the usefulness of a multifaceted teaching framework in an advanced statistics course. We sought to expand on past findings by using this framework to assess changes in anxiety and self-efficacy, and we collected focus group data to ascertain whether students attribute such changes to a multifaceted teaching approach.…
[Bayesian statistics: an approach suited to the clinic].
Meyer, N; Vinzio, S; Goichot, B
2009-03-01
Bayesian statistics has enjoyed growing, though still quite limited, success. This is surprising, since Bayes' theorem, on which this paradigm relies, is frequently used by clinicians. There is a direct link between routine diagnostic testing and Bayesian statistics: Bayes' theorem is what allows one to compute the positive and negative predictive values of a test. The principle of this theorem is extended to simple statistical situations as an introduction to Bayesian statistics. The conceptual simplicity of Bayesian statistics should make for greater acceptance in the biomedical world.
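The diagnostic-test link the abstract describes can be made concrete: Bayes' theorem turns a test's sensitivity and specificity plus the pre-test probability (prevalence) into predictive values. A short sketch (the numbers are illustrative, not from the article):

```python
def predictive_values(sens, spec, prev):
    """Bayes' theorem applied to a diagnostic test: the probability of
    disease given a positive result (PPV) and of health given a
    negative result (NPV)."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Illustrative numbers: a 90%-sensitive, 95%-specific test for a
# condition with a 10% pre-test probability.
ppv, npv = predictive_values(0.90, 0.95, 0.10)
print(round(ppv, 3), round(npv, 3))  # → 0.667 0.988
```

Note how a quite accurate test still yields a PPV of only about two thirds at 10% prevalence, exactly the kind of Bayesian reasoning clinicians already perform informally.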
A Statistical Approach to Automatic Speech Summarization
NASA Astrophysics Data System (ADS)
Hori, Chiori; Furui, Sadaoki; Malkin, Rob; Yu, Hua; Waibel, Alex
2003-12-01
This paper proposes a statistical approach to automatic speech summarization. In our method, a set of words maximizing a summarization score indicating the appropriateness of summarization is extracted from automatically transcribed speech and then concatenated to create a summary. The extraction process is performed using a dynamic programming (DP) technique based on a target compression ratio. In this paper, we demonstrate how an English news broadcast transcribed by a speech recognizer is automatically summarized. We adapted our method, which was originally proposed for Japanese, to English by modifying the model for estimating word concatenation probabilities based on a dependency structure in the original speech given by a stochastic dependency context free grammar (SDCFG). We also propose a method of summarizing multiple utterances using a two-level DP technique. The automatically summarized sentences are evaluated by summarization accuracy based on a comparison with a manual summary of speech that has been correctly transcribed by human subjects. Our experimental results indicate that the method we propose can effectively extract relatively important information and remove redundant and irrelevant information from English news broadcasts.
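The word-extraction step described above can be sketched in drastically simplified form: a dynamic program that keeps k of n words in order, maximizing per-word significance plus a score for each pair of words made adjacent in the summary. The `sig` and `link` arrays below stand in for the paper's linguistic, confidence, and SDCFG-based word-concatenation scores; all names and numbers are illustrative:

```python
def summarize(words, sig, link, k):
    """Pick k words (order preserved) maximizing the sum of significance
    scores plus link scores between words adjacent in the summary."""
    n = len(words)
    NEG = float("-inf")
    # best[j][i]: best score of a j-word summary ending at word i
    best = [[NEG] * n for _ in range(k + 1)]
    back = [[None] * n for _ in range(k + 1)]
    for i in range(n):
        best[1][i] = sig[i]
    for j in range(2, k + 1):
        for i in range(n):
            for p in range(i):
                cand = best[j - 1][p] + sig[i] + link[p][i]
                if cand > best[j][i]:
                    best[j][i] = cand
                    back[j][i] = p
    # Trace back from the best k-word ending position.
    i = max(range(n), key=lambda t: best[k][t])
    out = []
    for j in range(k, 0, -1):
        out.append(words[i])
        i = back[j][i]
    return list(reversed(out))

words = ["the", "president", "visited", "the", "plant", "today"]
sig = [0, 3, 2, 0, 3, 1]            # illustrative significance scores
link = [[0] * 6 for _ in range(6)]  # illustrative concatenation scores
print(summarize(words, sig, link, k=3))  # → ['president', 'visited', 'plant']
```

Fixing k as a fraction of n is how a target compression ratio enters; the paper's two-level DP extends this idea from single utterances to utterance sets.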
Advances in Statistical Methods for Substance Abuse Prevention Research
MacKinnon, David P.; Lockwood, Chondra M.
2010-01-01
The paper describes advances in statistical methods for prevention research with a particular focus on substance abuse prevention. Standard analysis methods are extended to the typical research designs and characteristics of the data collected in prevention research. Prevention research often includes longitudinal measurement, clustering of data in units such as schools or clinics, missing data, and categorical as well as continuous outcome variables. Statistical methods to handle these features of prevention data are outlined. Developments in mediation, moderation, and implementation analysis allow for the extraction of more detailed information from a prevention study. Advancements in the interpretation of prevention research results include more widespread calculation of effect size and statistical power, the use of confidence intervals as well as hypothesis testing, detailed causal analysis of research findings, and meta-analysis. The increased availability of statistical software has contributed greatly to the use of new methods in prevention research. It is likely that the Internet will continue to stimulate the development and application of new methods. PMID:12940467
Hidden Statistics Approach to Quantum Simulations
NASA Technical Reports Server (NTRS)
Zak, Michail
2010-01-01
Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that the quantum properties can be used to represent structure data, and that quantum mechanisms can be devised and built to perform operations with this data. Three basic non-classical properties of quantum mechanics superposition, entanglement, and direct-product decomposability were main reasons for optimism about capabilities of quantum computers that promised simultaneous processing of large massifs of highly correlated data. Unfortunately, these advantages of quantum mechanics came with a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world would cause the system to decohere. That is why the hardware implementation of a quantum computer is still unsolved. The basic idea of this work is to create a new kind of dynamical system that would preserve the main three properties of quantum physics superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods. In other words, such a system would reinforce the advantages and minimize limitations of both quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulation of Schroedinger equation is proposed. The system represents a modified Madelung version of Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of quantum potential (that has been overlooked in previous treatment of the Madelung equation). The role of the
Statistical physics approaches to Alzheimer's disease
NASA Astrophysics Data System (ADS)
Peng, Shouyong
Alzheimer's disease (AD) is the most common cause of late life dementia. In the brain of an AD patient, neurons are lost and spatial neuronal organizations (microcolumns) are disrupted. An adequate quantitative analysis of microcolumns requires that we automate the neuron recognition stage in the analysis of microscopic images of human brain tissue. We propose a recognition method based on statistical physics. Specifically, Monte Carlo simulations of an inhomogeneous Potts model are applied for image segmentation. Unlike most traditional methods, this method improves the recognition of overlapped neurons, and thus improves the overall recognition percentage. Although the exact causes of AD are unknown, as experimental advances have revealed the molecular origin of AD, they have continued to support the amyloid cascade hypothesis, which states that early stages of aggregation of amyloid beta (Abeta) peptides lead to neurodegeneration and death. X-ray diffraction studies reveal the common cross-beta structural features of the final stable aggregates, amyloid fibrils. Solid-state NMR studies also reveal structural features for some well-ordered fibrils. But currently there is no feasible experimental technique that can reveal the exact structure or the precise dynamics of assembly and thus help us understand the aggregation mechanism. Computer simulation offers a way to understand the aggregation mechanism on the molecular level. Because traditional all-atom continuous molecular dynamics simulations are not fast enough to investigate the whole aggregation process, we apply coarse-grained models and discrete molecular dynamics methods to increase the simulation speed. First we use a coarse-grained two-bead (two beads per amino acid) model. Simulations show that peptides can aggregate into multilayer beta-sheet structures, which agree with X-ray diffraction experiments. To better represent the secondary structure transition happening during aggregation, we refine the
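The Potts-model idea behind the segmentation step can be illustrated with a toy zero-temperature quench: sites take one of q labels, equal neighbors lower the energy, and Monte Carlo moves grow uniform domains. The abstract's method uses finite-temperature simulations of an inhomogeneous Potts model driven by image data; this minimal homogeneous sketch only shows the domain-forming mechanism:

```python
import random

def neighbors(i, j, L):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < L and 0 <= nj < L:
            yield ni, nj

def energy(s):
    """Potts energy: -1 for every pair of equal nearest neighbors."""
    L = len(s)
    right = sum(s[i][j] == s[i][j + 1] for i in range(L) for j in range(L - 1))
    down = sum(s[i][j] == s[i + 1][j] for i in range(L - 1) for j in range(L))
    return -(right + down)

def quench(s, q, sweeps, rng):
    """Zero-temperature Metropolis: relabel a random site whenever the
    move does not raise the Potts energy."""
    L = len(s)
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        old, new = s[i][j], rng.randrange(q)
        d_old = sum(s[ni][nj] == old for ni, nj in neighbors(i, j, L))
        d_new = sum(s[ni][nj] == new for ni, nj in neighbors(i, j, L))
        if d_new >= d_old:  # energy change d_old - d_new <= 0
            s[i][j] = new
    return s

rng = random.Random(0)
L, q = 8, 3
s = [[rng.randrange(q) for _ in range(L)] for _ in range(L)]
e0 = energy(s)
e1 = energy(quench(s, q, sweeps=50, rng=rng))
print(e0, e1)  # energy drops as like-labeled domains coarsen
```

In the segmentation setting, a data term coupling each site's label to local image intensity would be added to this energy, so the emerging domains track neuron boundaries rather than arbitrary regions.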
BEADS: A Realistic Approach to Elementary Statistics.
ERIC Educational Resources Information Center
Gamble, Andy
1983-01-01
Having students gather their own statistics is promoted. The BEADS program provides an alternative; it simulates sampling from a binomial distribution. Illustrations from the program are included. (MNS)
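The kind of binomial sampling BEADS simulates can be sketched in a few lines of Python (an illustrative stand-in for classroom use, not the BEADS program itself):

```python
import random

def draw_beads(n, p, rng):
    """One sample: the number of 'red beads' among n draws, each red
    with probability p -- a Binomial(n, p) draw."""
    return sum(rng.random() < p for _ in range(n))

rng = random.Random(42)
# 1000 repeated samples of 20 beads with a 30% red proportion.
samples = [draw_beads(20, 0.3, rng) for _ in range(1000)]
mean = sum(samples) / len(samples)
print(mean)  # should be near n*p = 6
```

Plotting a histogram of `samples` gives students the sampling-distribution picture the bead activity is meant to convey, without the physical beads.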
A statistical mechanics approach to Granovetter theory
NASA Astrophysics Data System (ADS)
Barra, Adriano; Agliari, Elena
2012-05-01
In this paper we try to bridge breakthroughs in quantitative sociology/econometrics pioneered during the last decades by McFadden, Brock-Durlauf, Granovetter, and Watts-Strogatz, by introducing a minimal model able to reproduce essentially all the features of social behavior highlighted by these authors. Our model relies on a pairwise Hamiltonian for decision-maker interactions which naturally extends the multi-population approaches by shifting and biasing the pattern definitions of a Hopfield model of neural networks. Once introduced, the model is investigated through graph theory (to recover the Granovetter and Watts-Strogatz results) and statistical mechanics (to recover the McFadden and Brock-Durlauf results). Due to the internal symmetries of our model, the latter is obtained as the relaxation of a proper Markov process, allowing us even to study its out-of-equilibrium properties. The method used to solve its equilibrium is an adaptation of the Hamilton-Jacobi technique recently introduced by Guerra in the spin-glass scenario, and the picture obtained is the following: shifting the patterns from [-1,+1] to [0,+1] implies that the larger the number of similarities among decision makers, the stronger their relative influence, and this is enough to explain both the different roles of strong and weak ties in the social network and its small-world properties. As a result, imitative interaction strengths seem an essentially robust requirement (enough to break the gauge symmetry in the couplings); furthermore, this naturally leads to a discrete-choice modeling when dealing with external influences and to imitative behavior à la Curie-Weiss as introduced by Brock and Durlauf.
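The imitative behavior à la Curie-Weiss that the abstract mentions reduces, in the simplest mean-field setting, to the self-consistency equation m = tanh(beta (J m + h)) for the average opinion m. A pure-Python sketch of solving it by fixed-point iteration (parameters are illustrative, not the paper's):

```python
import math

def curie_weiss_m(beta, J=1.0, h=0.0, iters=1000):
    """Fixed-point iteration for the mean-field magnetization (average
    binary choice) m = tanh(beta * (J * m + h))."""
    m = 0.5  # positive start breaks the m -> -m symmetry
    for _ in range(iters):
        m = math.tanh(beta * (J * m + h))
    return m

# Below the critical coupling beta*J = 1 the only solution is m = 0
# (no consensus); above it a nonzero imitative/consensus solution appears.
print(curie_weiss_m(0.5), curie_weiss_m(2.0))
```

This phase transition between a disordered and a consensus regime is the mechanism by which imitative couplings in such models reproduce collective discrete-choice behavior.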
A statistical approach to root system classification
Bodner, Gernot; Leitner, Daniel; Nakhforoosh, Alireza; Sobotik, Monika; Moder, Karl; Kaul, Hans-Peter
2013-01-01
Plant root systems have a key role in ecology and agronomy. In spite of a fast increase in root studies, there is still no classification that allows distinguishing among distinctive characteristics within the diversity of rooting strategies. Our hypothesis is that a multivariate approach for “plant functional type” identification in ecology can be applied to the classification of root systems. The classification method presented is based on a data-defined statistical procedure without a priori decisions on the classifiers. The study demonstrates that principal-component-based rooting types provide efficient and meaningful multi-trait classifiers. The classification method is exemplified with simulated root architectures and morphological field data. Simulated root architectures showed that morphological attributes with spatial distribution parameters capture the most distinctive features within root system diversity. While developmental type (tap vs. shoot-borne systems) is a strong but coarse classifier, topological traits provide the most detailed differentiation among distinctive groups. The adequacy of commonly available morphological traits for classification is supported by field data. Rooting types emerging from the measured data were mainly distinguished by diameter/weight- and density-dominated types. Similarity of root systems within distinctive groups was the joint result of phylogenetic relation and environmental as well as human selection pressure. We conclude that the data-defined classification is appropriate for the integration of knowledge obtained with different root measurement methods and at various scales. Currently, root morphology is the most promising basis for classification due to widely used common measurement protocols. To capture the details of root diversity, efforts in architectural measurement techniques are essential. PMID:23914200
Statistical physics approaches to financial fluctuations
NASA Astrophysics Data System (ADS)
Wang, Fengzhong
2009-12-01
Complex systems attract many researchers from various scientific fields. Financial markets are one of these widely studied complex systems. Statistical physics, which was originally developed to study large systems, provides novel ideas and powerful methods to analyze financial markets. The study of financial fluctuations characterizes market behavior, and helps to better understand the underlying market mechanism. Our study focuses on volatility, a fundamental quantity to characterize financial fluctuations. We examine equity data of the entire U.S. stock market during 2001 and 2002. To analyze the volatility time series, we develop a new approach, called return interval analysis, which examines the time intervals between two successive volatilities exceeding a given value threshold. We find that the return interval distribution displays scaling over a wide range of thresholds. This scaling is valid for a range of time windows, from one minute up to one day. Moreover, our results are similar for commodities, interest rates, currencies, and for stocks of different countries. Further analysis shows some systematic deviations from a scaling law, which we can attribute to nonlinear correlations in the volatility time series. We also find a memory effect in return intervals for different time scales, which is related to the long-term correlations in the volatility. To further characterize the mechanism of price movement, we simulate the volatility time series using two different models, fractionally integrated generalized autoregressive conditional heteroscedasticity (FIGARCH) and fractional Brownian motion (fBm), and test these models with the return interval analysis. We find that both models can mimic time memory but only fBm shows scaling in the return interval distribution. In addition, we examine the volatility of daily opening to closing and of closing to opening. We find that each volatility distribution has a power law tail. Using the detrended fluctuation
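The return interval analysis described in the abstract, measuring the time between successive volatility values exceeding a threshold, reduces to a few lines of code. The sketch below is illustrative only: it uses a synthetic i.i.d. "volatility" series rather than the stock data of the study, and `return_intervals` is a hypothetical name.

```python
import random
import statistics

def return_intervals(series, threshold):
    """Time gaps between successive values exceeding the threshold."""
    hits = [t for t, v in enumerate(series) if v > threshold]
    return [b - a for a, b in zip(hits, hits[1:])]

# Synthetic "volatility" series: absolute values of Gaussian noise
# (assumption; the study uses intraday U.S. equity volatilities).
random.seed(0)
vol = [abs(random.gauss(0, 1)) for _ in range(50_000)]

# Scaling check: rescale intervals by their mean for several thresholds.
# For i.i.d. data the rescaled distributions collapse onto an exponential;
# systematic deviations from collapse signal volatility correlations.
for q in (1.0, 1.5, 2.0):
    tau = return_intervals(vol, q)
    mean = statistics.fmean(tau)
    print(f"threshold={q}: n={len(tau)}, mean interval={mean:.1f}")
```

Plotting the rescaled interval distributions for each threshold on one set of axes is the usual way to inspect the scaling visually.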
A Hierarchical Statistic Methodology for Advanced Memory System Evaluation
Sun, X.-J.; He, D.; Cameron, K.W.; Luo, Y.
1999-04-12
Advances in technology have resulted in a widening of the gap between computing speed and memory access time. Data access time has become increasingly important for computer system design. Various hierarchical memory architectures have been developed. The performance of these advanced memory systems, however, varies with applications and problem sizes. How to reach an optimal cost/performance design still eludes researchers. In this study, the authors introduce an evaluation methodology for advanced memory systems. This methodology is based on statistical factorial analysis and performance scalability analysis. It is twofold: it first determines the impact of memory systems and application programs on overall performance; it then identifies the bottleneck in a memory hierarchy and provides cost/performance comparisons via scalability analysis. Different memory systems can be compared in terms of mean performance or scalability over a range of codes and problem sizes. Experimental testing has been performed extensively on the Department of Energy's Accelerated Strategic Computing Initiative (ASCI) machines and benchmarks available at the Los Alamos National Laboratory to validate this newly proposed methodology. Experimental and analytical results show this methodology is simple and effective. It is a practical tool for memory system evaluation and design. Its extension to general architectural evaluation and parallel computer systems is possible and should be further explored.
Statistical approach for supervised codeword selection
NASA Astrophysics Data System (ADS)
Park, Kihong; Ryu, Seungchul; Kim, Seungryong; Sohn, Kwanghoon
2015-01-01
Bag-of-words (BoW) is one of the most successful methods for object categorization. This paper proposes a statistical codeword selection algorithm in which the best subset is selected from the initial codewords based on the statistical characteristics of the codewords. For this purpose, we defined two types of codeword confidences: cross- and within-category confidences. The cross- and within-category confidences eliminate indistinctive codewords across categories and inconsistent codewords within each category, respectively. An informative subset of codewords is then selected based on these two codeword confidences. The experimental evaluation on a scene categorization dataset and the Caltech-101 dataset shows that the proposed method improves categorization performance by up to 10% in terms of error rate reduction when combined with BoW, sparse coding (SC), and locality-constrained linear coding (LLC). Furthermore, the codeword size is reduced by 50%, leading to low computational complexity.
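The abstract does not define the two confidences precisely; the sketch below is one plausible reading, scoring each codeword by the between-category variance of its mean frequency (distinctiveness across categories) penalized by its average within-category variance (consistency within a category). The function name and scoring formula are assumptions for illustration, not the authors' definitions.

```python
import statistics

def select_codewords(histograms, labels, k):
    """Rank codewords by a simple cross/within-category confidence score.

    histograms: per-image codeword histograms (equal-length lists)
    labels:     category label per image
    k:          number of codewords to keep
    """
    cats = sorted(set(labels))
    n_words = len(histograms[0])
    scores = []
    for w in range(n_words):
        per_cat = {c: [h[w] for h, l in zip(histograms, labels) if l == c]
                   for c in cats}
        cat_means = [statistics.fmean(v) for v in per_cat.values()]
        cross = statistics.pvariance(cat_means)      # distinctive across categories
        within = statistics.fmean(statistics.pvariance(v)
                                  for v in per_cat.values())
        scores.append(cross / (1.0 + within))        # favour distinctive, consistent words
    ranked = sorted(range(n_words), key=lambda w: scores[w], reverse=True)
    return ranked[:k]

# Toy data: word 0 separates the categories; words 1 and 2 do not.
hists = [[5, 1, 3], [6, 1, 2], [0, 1, 3], [1, 1, 2]]
labels = ["a", "a", "b", "b"]
print(select_codewords(hists, labels, k=1))  # → [0]
```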
An Alternative Approach to Quantum Statistics,
1983-06-21
The Fermi-Dirac, Bose-Einstein and, for completeness, the Maxwell-Boltzmann distributions are obtained. [The remainder of this scanned report form is garbled; recoverable details: keywords "Quantum Statistics, Fermi-Dirac, Bose-Einstein"; Naval Research Laboratory, Washington, D.C. 20375; co-author A. K. Rajagopal, Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803-4001.]
Supersymmetric Liouville theory: A statistical mechanical approach
Barrozo, M.C.; Belvedere, L.V.
1996-02-01
The statistical mechanical system associated with the two-dimensional supersymmetric Liouville theory is obtained through an infrared-finite perturbation expansion. Considering the system confined in a finite volume and in the presence of a uniform neutralizing background, we show that the grand-partition function of this system describes a one-component gas, in which the Boltzmann factor is weighted by an integration over the Grassmann variables. This weight function introduces the dimensional reduction phenomenon. After performing the thermodynamic limit, the resulting supersymmetric quantum theory is translationally invariant. {copyright} {ital 1996 The American Physical Society.}
Statistical mechanical approach to human language
NASA Astrophysics Data System (ADS)
Kosmidis, Kosmas; Kalampokis, Alkiviadis; Argyrakis, Panos
2006-07-01
We use the formulation of equilibrium statistical mechanics in order to study some important characteristics of language. Using a simple expression for the Hamiltonian of a language system, which is directly implied by the Zipf law, we are able to explain several characteristic features of human language that seem completely unrelated, such as the universality of the Zipf exponent, the vocabulary size of children, the reduced communication abilities of people suffering from schizophrenia, etc. While several explanations are necessarily only qualitative at this stage, we have, nevertheless, been able to derive a formula for the vocabulary size of children as a function of age, which agrees rather well with experimental data.
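As background for the Hamiltonian "directly implied by the Zipf law", the sketch below generates an ideal Zipf rank-frequency distribution and recovers its exponent with a log-log least-squares fit. It is a generic illustration of the empirical law the model builds on, not the authors' Hamiltonian; all names are hypothetical.

```python
import math

def zipf_frequencies(vocab_size, exponent=1.0):
    """Rank-frequency distribution f(r) ∝ r**(-exponent), normalised."""
    raw = [r ** -exponent for r in range(1, vocab_size + 1)]
    z = sum(raw)
    return [f / z for f in raw]

def fitted_exponent(freqs):
    """Least-squares slope of log f versus log rank (illustrative estimator)."""
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return -num / den

freqs = zipf_frequencies(2000, exponent=1.0)
print(f"fitted Zipf exponent: {fitted_exponent(freqs):.3f}")
```

On real corpora the fitted exponent is close to 1, the universality the abstract sets out to explain.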
Statistical Physics Approach to Political Districting Problem
NASA Astrophysics Data System (ADS)
Chou, Chung-I.; Li, Sai-Ping
The Political Districting Problem is to partition a zone into several electoral districts subject to constraints such as contiguity, population equality, etc. In this paper, we apply statistical physics methods to the Political Districting Problem. The political problem is mapped onto a q-state Potts model system, and the political constraints are written in the form of an energy function with interactions between sites or external fields acting on the system. Districting into q voter districts is then equivalent to finding the ground state of this q-state Potts model. We illustrate the approach by districting Taipei city and compare the result to a computer-generated artificial system.
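A minimal version of such an energy function can be written down directly: population equality enters as a field-like quadratic penalty, and contiguity/compactness as a Potts-style interaction that counts edges cut by district boundaries. The weighting and exact form below are assumptions for illustration, not the energy used in the paper.

```python
def districting_energy(assignment, populations, adjacency, q, lam=1.0):
    """Potts-style districting cost: population imbalance plus boundary length.

    assignment:  district label (0..q-1) per site
    populations: population per site
    adjacency:   list of (i, j) neighbouring site pairs
    lam:         weight trading off the two constraint terms (assumed)
    """
    total = sum(populations)
    target = total / q
    # Field-like term: squared deviation of each district from equal population.
    pop = [0.0] * q
    for site, d in enumerate(assignment):
        pop[d] += populations[site]
    imbalance = sum((p - target) ** 2 for p in pop)
    # Interaction term: neighbouring sites in different districts (cut edges).
    boundary = sum(1 for i, j in adjacency if assignment[i] != assignment[j])
    return imbalance + lam * boundary

# Four sites on a line: the contiguous split costs less than the alternating one.
line = [(0, 1), (1, 2), (2, 3)]
pops = [1.0, 1.0, 1.0, 1.0]
print(districting_energy([0, 0, 1, 1], pops, line, q=2))  # → 1.0
print(districting_energy([0, 1, 0, 1], pops, line, q=2))  # → 3.0
```

Minimising this energy (e.g. by simulated annealing over the Potts labels) corresponds to the ground-state search described in the abstract.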
A Statistical Approach to Passive Target Tracking.
1981-04-01
[Scanned report form; the abstract is not recoverable. Legible fragments are bibliographic references, e.g. M. J. Hinich and P. Shaman, "Parameter Estimation for an R-Dimensional Plane Wave Observed with Additive Independent Gaussian Errors," The Annals of Mathematical Statistics, Vol. 43, pp. 153-169 (1972).]
Statistical inference to advance network models in epidemiology.
Welch, David; Bansal, Shweta; Hunter, David R
2011-03-01
Contact networks are playing an increasingly important role in the study of epidemiology. Most of the existing work in this area has focused on considering the effect of underlying network structure on epidemic dynamics by using tools from probability theory and computer simulation. This work has provided much insight on the role that heterogeneity in host contact patterns plays on infectious disease dynamics. Despite the important understanding afforded by the probability and simulation paradigm, this approach does not directly address important questions about the structure of contact networks such as what is the best network model for a particular mode of disease transmission, how parameter values of a given model should be estimated, or how precisely the data allow us to estimate these parameter values. We argue that these questions are best answered within a statistical framework and discuss the role of statistical inference in estimating contact networks from epidemiological data.
A Statistical Approach for Ambiguous Sequence Mappings
Technology Transfer Automated Retrieval System (TEKTRAN)
When attempting to map RNA sequences to a reference genome, high percentages of short sequence reads are often assigned to multiple genomic locations. One approach to handling these “ambiguous mappings” has been to discard them. This results in a loss of data, which can sometimes be as much as 45% o...
Aftershock Energy Distribution by Statistical Mechanics Approach
NASA Astrophysics Data System (ADS)
Daminelli, R.; Marcellini, A.
2015-12-01
The aim of our work is to find the most probable distribution of the energy of aftershocks. We start by applying one of the fundamental principles of statistical mechanics, which, for aftershock sequences, can be expressed as: the greater the number of different ways in which the energy of aftershocks can be arranged among the energy cells in phase space, the more probable the distribution. We assume that each cell in phase space has the same possibility of being occupied, and that more than one cell in phase space can have the same energy. Since seismic energy is proportional to products of different parameters, a number of different combinations of parameters can produce the same energy (e.g., different combinations of stress drop and fault area can release the same seismic energy). Let us assume that there are g_i cells in the aftershock phase space characterised by the same released energy ε_i. We can therefore assume that Maxwell-Boltzmann statistics apply to aftershock sequences, with the proviso that the judgment on the validity of this hypothesis is the agreement with the data. The aftershock energy distribution can then be written as follows: n(ε) = A g(ε) exp(−βε), where n(ε) is the number of aftershocks with energy ε, and A and β are constants. Under the above hypothesis, we can assume g(ε) is proportional to ε. We selected and analysed different aftershock sequences (data extracted from the earthquake catalogs of SCEC, INGV-CNT and other institutions) with a minimum retained magnitude ML = 2 (in some cases ML = 2.6) and a time window of 35 days. The results of our model are in agreement with the data, except in the very low energy band, where our model moderately overestimates the counts.
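With g(ε) ∝ ε, the distribution n(ε) ∝ ε·exp(−βε) is a Gamma density with shape 2 and rate β, whose mean is 2/β, so β has a simple moment estimator. The sketch below checks this on synthetic energies; the data and names are hypothetical, not the catalog sequences analysed in the abstract.

```python
import random
import statistics

def beta_estimate(energies):
    """Moment estimator for n(ε) ∝ ε·exp(−βε): a Gamma(shape=2, rate=β)
    density has mean 2/β, so β ≈ 2 / sample mean."""
    return 2.0 / statistics.fmean(energies)

# Synthetic aftershock energies drawn from ε·exp(−βε) with β = 0.5:
# a Gamma(2, rate β) variate is the sum of two exponentials of rate β.
random.seed(1)
beta_true = 0.5
energies = [random.expovariate(beta_true) + random.expovariate(beta_true)
            for _ in range(20_000)]
print(f"estimated beta = {beta_estimate(energies):.3f}")  # close to 0.5
```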
Advanced Safeguards Approaches for New Reprocessing Facilities
Durst, Philip C.; Therios, Ike; Bean, Robert; Dougan, A.; Boyer, Brian; Wallace, Richard; Ehinger, Michael H.; Kovacic, Don N.; Tolk, K.
2007-06-24
U.S. efforts to promote the international expansion of nuclear energy through the Global Nuclear Energy Partnership (GNEP) will result in a dramatic expansion of nuclear fuel cycle facilities in the United States. New demonstration facilities, such as the Advanced Fuel Cycle Facility (AFCF), the Advanced Burner Reactor (ABR), and the Consolidated Fuel Treatment Center (CFTC) will use advanced nuclear and chemical process technologies that must incorporate increased proliferation resistance to enhance nuclear safeguards. The ASA-100 Project, “Advanced Safeguards Approaches for New Nuclear Fuel Cycle Facilities,” commissioned by the NA-243 Office of NNSA, has been tasked with reviewing and developing advanced safeguards approaches for these demonstration facilities. Because one goal of GNEP is developing and sharing proliferation-resistant nuclear technology and services with partner nations, the safeguards approaches considered are consistent with international safeguards as currently implemented by the International Atomic Energy Agency (IAEA). This first report reviews possible safeguards approaches for the new fuel reprocessing processes to be deployed at the AFCF and CFTC facilities. Similar analyses addressing the ABR and transuranic (TRU) fuel fabrication lines at AFCF and CFTC will be presented in subsequent reports.
Quantum approach to classical statistical mechanics.
Somma, R D; Batista, C D; Ortiz, G
2007-07-20
We present a new approach to study the thermodynamic properties of d-dimensional classical systems by reducing the problem to the computation of ground-state properties of a d-dimensional quantum model. This classical-to-quantum mapping allows us to extend the scope of standard optimization methods by unifying them under a general framework. The quantum annealing method is naturally extended to simulate classical systems at finite temperatures. We derive the rates that assure convergence to the optimal thermodynamic state using the adiabatic theorem of quantum mechanics. For simulated and quantum annealing, we obtain the asymptotic rates T(t) ≈ pN/(k_B log t) and γ(t) ≈ (Nt)^(−c/N) for the temperature and magnetic field, respectively. Other annealing strategies are also discussed.
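The temperature schedule T(t) ≈ pN/(k_B log t) can be dropped into a standard Metropolis annealer. The toy sketch below, with pN/k_B set to 1 and a one-dimensional integer objective, only illustrates the logarithmic schedule; it is not the quantum-to-classical construction of the paper, and all names are hypothetical.

```python
import math
import random

def log_schedule(t, pn_over_kb=1.0):
    """Logarithmic cooling T(t) ≈ pN/(k_B · log t), defined for t ≥ 2."""
    return pn_over_kb / math.log(t)

def anneal(energy, neighbour, x0, steps=5000):
    """Minimal Metropolis simulated annealing using the log schedule."""
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for t in range(2, steps + 2):
        cand = neighbour(x)
        de = energy(cand) - e
        # Accept downhill moves always, uphill with Boltzmann probability.
        if de <= 0 or random.random() < math.exp(-de / log_schedule(t)):
            x, e = cand, e + de
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy problem: minimise (x - 3)^2 over the integers, starting far away.
random.seed(7)
best_x, best_e = anneal(lambda x: (x - 3) ** 2,
                        lambda x: x + random.choice((-1, 1)),
                        x0=20)
print(best_x, best_e)
```

The slow logarithmic decay is exactly what the convergence guarantee requires: faster (e.g. geometric) schedules are common in practice but lose the asymptotic optimality property.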
An Integrated, Statistical Molecular Approach to the Physical Chemistry Curriculum
ERIC Educational Resources Information Center
Cartier, Stephen F.
2009-01-01
As an alternative to the "thermodynamics first" or "quantum first" approaches to the physical chemistry curriculum, the statistical definition of entropy and the Boltzmann distribution are introduced in the first days of the course and the entire two-semester curriculum is then developed from these concepts. Once the tools of statistical mechanics…
Measuring University Students' Approaches to Learning Statistics: An Invariance Study
ERIC Educational Resources Information Center
Chiesi, Francesca; Primi, Caterina; Bilgin, Ayse Aysin; Lopez, Maria Virginia; del Carmen Fabrizio, Maria; Gozlu, Sitki; Tuan, Nguyen Minh
2016-01-01
The aim of the current study was to provide evidence that an abbreviated version of the Approaches and Study Skills Inventory for Students (ASSIST) was invariant across different languages and educational contexts in measuring university students' learning approaches to statistics. Data were collected on samples of university students attending…
Propensity Score Analysis: An Alternative Statistical Approach for HRD Researchers
ERIC Educational Resources Information Center
Keiffer, Greggory L.; Lane, Forrest C.
2016-01-01
Purpose: This paper aims to introduce matching in propensity score analysis (PSA) as an alternative statistical approach for researchers looking to make causal inferences using intact groups. Design/methodology/approach: An illustrative example demonstrated the varying results of analysis of variance, analysis of covariance and PSA on a heuristic…
A Statistical Approach to Relaxation in Glassy Materials.
1984-11-01
[Scanned report form; the abstract is not recoverable. Legible details: authors Karina Weron (Institute of Physics, Technical University of Wroclaw, 50-370) and Aleksander Weron; Technical Report No. 82.]
NASA Astrophysics Data System (ADS)
Boning, Duane S.; Chung, James E.
1998-11-01
Advanced process technology will require more detailed understanding and tighter control of variation in devices and interconnects. The purpose of statistical metrology is to provide methods to measure and characterize variation, to model systematic and random components of that variation, and to understand the impact of variation on both yield and performance of advanced circuits. Of particular concern are spatial or pattern-dependencies within individual chips; such systematic variation within the chip can have a much larger impact on performance than wafer-level random variation. Statistical metrology methods will play an important role in the creation of design rules for advanced technologies. For example, a key issue in multilayer interconnect is the uniformity of interlevel dielectric (ILD) thickness within the chip. For the case of ILD thickness, we describe phases of statistical metrology development and application to understanding and modeling thickness variation arising from chemical-mechanical polishing (CMP). These phases include screening experiments including design of test structures and test masks to gather electrical or optical data, techniques for statistical decomposition and analysis of the data, and approaches to calibrating empirical and physical variation models. These models can be integrated with circuit CAD tools to evaluate different process integration or design rule strategies. One focus for the generation of interconnect design rules is guidelines for the use of "dummy fill" or "metal fill" to improve the uniformity of underlying metal density and thus improve the uniformity of oxide thickness within the die. Trade-offs that can be evaluated via statistical metrology include the improvements to uniformity possible versus the effect of increased capacitance due to additional metal.
A statistical mechanics approach to mixing in stratified fluids
NASA Astrophysics Data System (ADS)
Venaille, A.; Gostiaux, L.; Sommeria, J.
2017-01-01
Predicting how much mixing occurs when a given amount of energy is injected into a Boussinesq fluid is a longstanding problem in stratified turbulence. The huge number of degrees of freedom involved in those processes renders extremely difficult a deterministic approach to the problem. Here we present a statistical mechanics approach yielding prediction for a cumulative, global mixing efficiency as a function of a global Richardson number and the background buoyancy profile.
Reconciling Statistical and Systems Science Approaches to Public Health
ERIC Educational Resources Information Center
Ip, Edward H.; Rahmandad, Hazhir; Shoham, David A.; Hammond, Ross; Huang, Terry T. -K.; Wang, Youfa; Mabry, Patricia L.
2013-01-01
Although systems science has emerged as a set of innovative approaches to study complex phenomena, many topically focused researchers including clinicians and scientists working in public health are somewhat befuddled by this methodology that at times appears to be radically different from analytic methods, such as statistical modeling, to which…
Recent progress in the statistical approach of parton distributions
Soffer, Jacques
2011-07-15
We recall the physical features of the parton distributions in the quantum statistical approach of the nucleon. Some predictions from a next-to-leading order QCD analysis are compared to recent experimental results. We also consider their extension to include their transverse momentum dependence.
A statistics-guided approach to precise characterization of nanowire morphology.
Wang, Fei; Hwang, Youngdeok; Qian, Peter Z G; Wang, Xudong
2010-02-23
Precise control of nanomaterial morphology is critical to the development of advanced nanodevices with various functionalities. In this paper, we developed an efficient and effective statistics-guided approach to accurately characterizing the lengths, diameters, orientations, and densities of nanowires. Our approach has been successfully tested on a zinc oxide nanowire sample grown by hydrothermal methods. This approach has three key components. First, we introduced a novel geometric model to recover the true lengths and orientations of nanowires from their projective scanning electron microscope images, where a statistical resampling method is used to mitigate the practical difficulty of relocating the same sets of nanowires at multiple projecting angles. Second, we developed a sequential uniform sampling method for efficiently acquiring representative samples when characterizing diameters and growing density. Third, we proposed a statistical imputation method to incorporate the uncertainty in the determination of nanowire diameters arising from nonspherical cross-section spinning. This approach enables precise characterization of several fundamental aspects of nanowire morphology and serves as an excellent example of overcoming nanoscale characterization challenges by novel statistical means. It might open new opportunities in advancing nanotechnology and might also lead to the standardization of nanocharacterization in many aspects.
Map of isotachs - statistical approach and meteorological information transfer
Menezes, A.A.; da Silva, J.I.; Coutinho, C.E.O.
1985-09-01
This report gives a statistical treatment of available wind data from airports in Brazil and provides a map of isotachs for extreme yearly wind velocities. A comparison between the statistical models of Frechet and Gumbel is carried out, leading to the adoption of the latter. The low density of meteorological stations used in this approach restricts the knowledge of wind activity. This fact was accounted for in the analytical method for spatial transfer of climatic data. Recommendations are given on how to enlarge the amount of available data.
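A Gumbel fit of annual extreme wind velocities of the kind adopted here can be sketched with method-of-moments estimates (scale = s·√6/π, location = mean − γ·scale, with γ the Euler-Mascheroni constant), and return-period velocities follow by inverting the Gumbel CDF. The station data below are hypothetical, not the Brazilian airport records of the report.

```python
import math
import statistics

GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def gumbel_fit(annual_maxima):
    """Method-of-moments Gumbel fit: scale = s·√6/π, loc = mean − γ·scale."""
    mean = statistics.fmean(annual_maxima)
    s = statistics.stdev(annual_maxima)
    scale = s * math.sqrt(6) / math.pi
    loc = mean - GAMMA * scale
    return loc, scale

def wind_for_return_period(loc, scale, years):
    """Velocity exceeded on average once every `years` years
    (inverse Gumbel CDF at non-exceedance probability 1 − 1/years)."""
    p = 1.0 - 1.0 / years
    return loc - scale * math.log(-math.log(p))

# Hypothetical yearly extreme wind velocities (m/s) at one station.
vmax = [28.1, 31.4, 25.9, 33.2, 29.7, 27.5, 35.0, 30.2, 26.8, 32.1]
loc, scale = gumbel_fit(vmax)
print(f"50-year wind: {wind_for_return_period(loc, scale, 50):.1f} m/s")
```

Maximum-likelihood fitting (as in `scipy.stats.gumbel_r.fit`) is the usual refinement over the moment estimates sketched here.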
Shukla, R.; Yu Daohai; Fulk, F.
1995-12-31
Short-term toxicity tests with aquatic organisms are a valuable measurement tool in the assessment of the toxicity of effluents, environmental samples and single chemicals. Currently toxicity tests are utilized in a wide range of US EPA regulatory activities including effluent discharge compliance. In the current approach for determining the No Observed Effect Concentration, an effluent concentration is presumed safe if there is no statistically significant difference in toxicant response versus control response. The conclusion of a safe concentration may be due to the fact that it truly is safe, or alternatively, that the ability of the statistical test to detect an effect, given its existence, is inadequate. Results of research on a new statistical approach, the basis of which is to move away from a demonstration of no difference to a demonstration of equivalence, will be discussed. The concept of observed confidence distributions, first suggested by Cox, is proposed as a measure of the strength of evidence for practically equivalent responses between a given effluent concentration and the control. The research included determination of intervals of practically equivalent responses as a function of the variability of control response. The approach is illustrated using reproductive data from tests with Ceriodaphnia dubia and survival and growth data from tests with fathead minnow. The data are from the US EPA's National Reference Toxicant Database.
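The shift from "no significant difference" to "demonstrated equivalence" can be sketched as a large-sample interval-inclusion test: the effluent concentration is practically equivalent to control only if the confidence interval for the mean difference lies inside a pre-set equivalence margin. This TOST-like check and the reproduction counts below are illustrative assumptions, not the observed-confidence-distribution method of the paper.

```python
import math
import statistics

def equivalent(control, treated, margin, z=1.645):
    """Large-sample equivalence check (TOST-like, illustrative): equivalent
    only if the 90% CI for the mean difference lies inside ±margin."""
    diff = statistics.fmean(treated) - statistics.fmean(control)
    se = math.sqrt(statistics.variance(control) / len(control)
                   + statistics.variance(treated) / len(treated))
    lo, hi = diff - z * se, diff + z * se
    return -margin < lo and hi < margin

# Hypothetical Ceriodaphnia dubia reproduction counts (neonates per female).
control = [24, 27, 25, 26, 28, 25, 26, 27, 24, 26]
low_conc = [25, 26, 24, 27, 26, 25, 27, 26, 25, 26]
print(equivalent(control, low_conc, margin=3.0))
```

Note the asymmetry with the NOEC logic the abstract criticises: here a noisy, underpowered test makes it *harder*, not easier, to declare a concentration safe.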
Integration of Advanced Statistical Analysis Tools and Geophysical Modeling
2010-12-01
[Scanned report; the text is garbled by extraction. Recoverable content: extracted features were provided for MTADS EM61, MTADS magnetics, EM61 cart, and TEMTADS data sets from San Luis Obispo, with subsequent training of statistical classifiers on these features; discrimination studies at Camp Sibert and San Luis Obispo compared classification performance, with receiver operating characteristics shown in Figures 10 through 13.]
Advanced Approach of Multiagent Based Buoy Communication
Gricius, Gediminas; Drungilas, Darius; Andziulis, Arunas; Dzemydiene, Dale; Voznak, Miroslav; Kurmis, Mindaugas; Jakovlev, Sergej
2015-01-01
Usually, a hydrometeorological information system is faced with great data flows, but the data levels are often excessive, depending on the observed region of the water. The paper presents advanced buoy communication technologies based on multiagent interaction and data exchange between several monitoring-system nodes. The proposed management of buoy communication is based on a clustering algorithm, which enables the performance of the hydrometeorological information system to be enhanced. The experiment is based on the design and analysis of an inexpensive but reliable Baltic Sea autonomous monitoring network (buoys), which would be able to continuously monitor and collect temperature, waviness, and other required data. The proposed approach of multiagent-based buoy communication enables all the data from the coastal-based station to be monitored with limited transmission speed by setting different tasks for the agent-based buoy system according to the clustering information. PMID:26345197
A new statistical approach to climate change detection and attribution
NASA Astrophysics Data System (ADS)
Ribes, Aurélien; Zwiers, Francis W.; Azaïs, Jean-Marc; Naveau, Philippe
2017-01-01
We propose here a new statistical approach to climate change detection and attribution that is based on additive decomposition and simple hypothesis testing. Most current statistical methods for detection and attribution rely on linear regression models where the observations are regressed onto expected response patterns to different external forcings. These methods do not use physical information provided by climate models regarding the expected response magnitudes to constrain the estimated responses to the forcings. Climate modelling uncertainty is difficult to take into account with regression-based methods and is almost never treated explicitly. As an alternative to this approach, our statistical model is only based on the additivity assumption; the proposed method does not regress observations onto expected response patterns. We introduce estimation and testing procedures based on likelihood maximization, and show that climate modelling uncertainty can easily be accounted for. Some discussion is provided on how to practically estimate the climate modelling uncertainty based on an ensemble of opportunity. Our approach is based on the "models are statistically indistinguishable from the truth" paradigm, where the difference between any given model and the truth has the same distribution as the difference between any pair of models, but other choices might also be considered. The properties of this approach are illustrated and discussed based on synthetic data. Lastly, the method is applied to the linear trend in global mean temperature over the period 1951-2010. Consistent with the last IPCC assessment report, we find that most of the observed warming over this period (+0.65 K) is attributable to anthropogenic forcings (+0.67 ± 0.12 K, 90% confidence range), with a very limited contribution from natural forcings (−0.01 ± 0.02 K).
Primordial statistical anisotropies: the effective field theory approach
Abolhasani, Ali Akbar; Akhshik, Mohammad; Emami, Razieh; Firouzjahi, Hassan E-mail: m.akhshik@ipm.ir E-mail: firouz@ipm.ir
2016-03-01
In this work we present the effective field theory of primordial statistical anisotropies generated during anisotropic inflation involving a background U(1) gauge field. Besides the usual Goldstone boson associated with the breaking of time diffeomorphism, we have two additional Goldstone bosons associated with the breaking of spatial diffeomorphisms. We further identify these two new Goldstone bosons with the expected two transverse degrees of the U(1) gauge field fluctuations. Upon defining the appropriate unitary gauge, we present the most general quadratic action which respects the remnant symmetry in the unitary gauge. The interactions between the various Goldstone bosons lead to statistical anisotropy in the curvature perturbation power spectrum. Calculating the general results for power spectrum anisotropy, we recover the previously known results in specific models of anisotropic inflation. In addition, we present novel results for statistical anisotropy in models with a non-trivial sound speed for inflaton fluctuations. We also identify the interaction which leads to birefringence-like effects in the anisotropic power spectrum, in which the speed of gauge field fluctuations depends on the direction of mode propagation and the two polarizations of gauge field fluctuations contribute differently to the statistical anisotropy. As another interesting application, our EFT approach naturally captures interactions generating parity-violating statistical anisotropies.
A statistical combustion phase control approach of SI engines
NASA Astrophysics Data System (ADS)
Gao, Jinwu; Wu, Yuhu; Shen, Tielong
2017-02-01
In order to maximize the performance of an internal combustion engine, the combustion phase is usually controlled to track a desired reference. However, owing to the cyclic variability of combustion, it is difficult but meaningful to control the mean of the combustion phase while constraining its variance. As a combustion phase indicator, the location of peak pressure (LPP) is utilized for real-time combustion phase control in this research. The purpose of the proposed method is to ensure that the mean of LPP statistically tracks its reference while the standard deviation of the LPP distribution is constrained. To achieve this, LPP is first calculated from the cylinder pressure sensor and its characteristics are analyzed at a steady-state operating condition; the distribution of LPP is then examined online using a hypothesis-test criterion. On the basis of the presented statistical algorithm, the current mean of LPP is applied in the feedback channel to design the spark advance adjustment law, and the stability of the closed-loop system is theoretically ensured according to a steady statistical model. Finally, the proposed strategy is verified on a spark-ignition gasoline engine.
Advanced Safeguards Approaches for New Fast Reactors
Durst, Philip C.; Therios, Ike; Bean, Robert; Dougan, A.; Boyer, Brian; Wallace, Rick L.; Ehinger, Michael H.; Kovacic, Don N.; Tolk, K.
2007-12-15
This third report in the series reviews possible safeguards approaches for new fast reactors in general, and the ABR in particular. Fast-neutron spectrum reactors have been used since the early 1960s on an experimental and developmental level, generally with fertile blanket fuels to “breed” nuclear fuel such as plutonium. Whether the reactor is designed to breed plutonium, or to transmute and “burn” actinides, depends mainly on the design of the reactor neutron reflector and whether the blanket fuel is “fertile” or suitable for transmutation. However, the safeguards issues are very similar, since they pertain mainly to the receipt, shipment, and storage of fresh and spent plutonium- and actinide-bearing “TRU” fuel. For these reasons, the design of existing fast reactors and details concerning how they have been safeguarded were studied in developing advanced safeguards approaches for the new fast reactors. In this regard, the design of the Experimental Breeder Reactor-II (EBR-II) at the Idaho National Laboratory (INL) was of interest, because it was designed as a collocated fast reactor with a pyrometallurgical reprocessing and fuel fabrication line, a design option being considered for the ABR. Similarly, the design of the Fast Flux Test Facility (FFTF) on the Hanford Site was studied, because it was a successful prototype fast reactor that ran for two decades to evaluate fuels and the design for commercial-scale fast reactors.
Advances on interdisciplinary approaches to urban carbon
NASA Astrophysics Data System (ADS)
Romero-Lankao, P.
2015-12-01
North American urban areas are emerging as climate policy and technology innovators, urbanization process laboratories, sources of carbon-relevant experiments, hubs for grass-roots mobilization, and centers for civil-society experiments to curb carbon emissions and avoid widespread and irreversible climate impacts. Since SOCCR, diverse lines of inquiry on urbanization, urban areas, and the carbon cycle have advanced our understanding of some of the societal processes through which energy and land uses affect carbon. This presentation provides an overview of these diverse perspectives. It suggests the need for approaches that complement and combine the plethora of existing insights into interdisciplinary explorations of how different urbanization processes, and the socio-ecological and technological components of urban areas, affect the spatial and temporal patterns of carbon emissions, differentially over time and within and across cities. It also calls for a more holistic approach to examining the carbon implications of urbanization and urban areas as places, based not only on demographics or income, but also on such other interconnected features of urban development pathways as urban form, economic function, economic growth policies, and climate policies.
Turbo recognition: a statistical approach to layout analysis
NASA Astrophysics Data System (ADS)
Tokuyasu, Taku A.; Chou, Philip A.
2000-12-01
Turbo recognition (TR) is a communication theory approach to the analysis of rectangular layouts, in the spirit of Document Image Decoding. The TR algorithm, inspired by turbo decoding, is based on a generative model of image production, in which two grammars are used simultaneously to describe structure in orthogonal (horizontal and vertical) directions. This enables TR to strictly embody non-local constraints that cannot be taken into account by local statistical methods. Its basis in finite-state grammars also allows TR to be quickly retargeted to new domains. We illustrate some of the capabilities of TR with two examples involving realistic images. While TR, like turbo decoding, is not guaranteed to recover the statistically optimal solution, we present an experiment that demonstrates its ability to produce optimal or near-optimal results on a simple yet nontrivial example, the recovery of a filled rectangle in the midst of noise. Unlike methods such as stochastic context-free grammars and exhaustive search, which are often intractable beyond small images, turbo recognition scales linearly with image size, suggesting TR as an efficient yet near-optimal approach to statistical layout analysis.
A Statistical Approach to Optimizing Concrete Mixture Design
Alghamdi, Saeid A.
2014-01-01
A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing concrete mixture design is illustrated for a typical case in which trial mixtures were prepared according to a full factorial experimental design involving three factors at three levels each (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting the compressive strength of concrete, namely, the water/cementitious materials ratio (0.38, 0.43, and 0.48), the cementitious materials content (350, 375, and 400 kg/m³), and the fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were used to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405
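The workflow of fitting a second-order polynomial regression to a 3³ factorial design and reading off an optimum mix can be sketched as follows. Only the factor levels come from the abstract; the strength response surface and noise level are synthetic stand-ins for the paper's 81 measured specimens.

```python
import itertools
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Factor levels from the abstract; all 27 = 3^3 design points.
wc = [0.38, 0.43, 0.48]          # water/cementitious materials ratio
cm = [350, 375, 400]             # cementitious materials content, kg/m^3
fa = [0.35, 0.40, 0.45]          # fine/total aggregate ratio
X = np.array(list(itertools.product(wc, cm, fa)))

# Hypothetical "true" strength surface standing in for lab data:
# strength drops with w/c, rises with cement content, peaks at fa = 0.40.
rng = np.random.default_rng(1)
strength = 90 - 80 * X[:, 0] + 0.05 * X[:, 1] - 400 * (X[:, 2] - 0.40) ** 2
X3 = np.repeat(X, 3, axis=0)                       # three replicates per mix
y = np.repeat(strength, 3) + rng.normal(0, 0.5, 81)

# Fit a full quadratic (all squares and two-way interactions).
poly = PolynomialFeatures(2)
model = LinearRegression().fit(poly.fit_transform(X3), y)

# Evaluate the fitted surface on the 27 design points and pick the best mix.
pred = model.predict(poly.transform(X))
best = X[np.argmax(pred)]
print("optimum mix (w/c, cement, fine/total):", best)
```

In practice the fitted surface would also be searched between the design points, and ANOVA on the replicates would confirm which factors are significant before optimizing.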
Defining statistical perceptions with an empirical Bayesian approach
NASA Astrophysics Data System (ADS)
Tajima, Satohiro
2013-04-01
Extracting statistical structures (including textures or contrasts) from a natural stimulus is a central challenge in both biological and engineering contexts. This study interprets the process of statistical recognition in terms of hyperparameter estimations and free-energy minimization procedures with an empirical Bayesian approach. This mathematical interpretation resulted in a framework for relating physiological insights in animal sensory systems to the functional properties of recognizing stimulus statistics. We applied the present theoretical framework to two typical models of natural images that are encoded by a population of simulated retinal neurons, and demonstrated that the resulting cognitive performances could be quantified with the Fisher information measure. The current enterprise yielded predictions about the properties of human texture perception, suggesting that the perceptual resolution of image statistics depends on visual field angles, internal noise, and neuronal information processing pathways, such as the magnocellular, parvocellular, and koniocellular systems. Furthermore, the two conceptually similar natural-image models were found to yield qualitatively different predictions, striking a note of warning against confusing the two models when describing a natural image.
Assessing risk factors for dental caries: a statistical modeling approach.
Trottini, Mario; Bossù, Maurizio; Corridore, Denise; Ierardo, Gaetano; Luzzi, Valeria; Saccucci, Matteo; Polimeni, Antonella
2015-01-01
The problem of identifying potential determinants and predictors of dental caries is of key importance in caries research and has received considerable attention in the scientific literature. From the methodological side, a broad range of statistical models is currently available to analyze dental caries indices (DMFT, dmfs, etc.). These models have been applied in several studies to investigate the impact of different risk factors on the cumulative severity of dental caries experience. However, in most cases (i) these studies focus on a very specific subset of risk factors; and (ii) in the statistical modeling only a few candidate models are considered, and model selection is at best only marginally addressed. As a result, our understanding of the robustness of the statistical inferences with respect to the choice of the model is very limited; the richness of the set of statistical models available for analysis is only marginally exploited; and inferences could be biased due to the omission of potentially important confounding variables in the model's specification. In this paper we argue that these limitations can be overcome by considering a general class of candidate models and carefully exploring the model space using standard model selection criteria and measures of global fit and predictive performance of the candidate models. Strengths and limitations of the proposed approach are illustrated with a real data set. In our illustration the model space contains more than 2.6 million models, which requires inferences to be adjusted for 'optimism'.
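The idea of exhaustively scoring a model space with a standard selection criterion can be sketched on a small scale. This is a minimal illustration only: it uses a Gaussian AIC on simulated data, whereas real caries indices would call for count or zero-inflated likelihoods, and the predictor names here are hypothetical.

```python
import itertools
import numpy as np

def best_subset_aic(X, y, names):
    """Exhaustively score every candidate predictor subset by AIC
    (Gaussian likelihood). A sketch of exploring a model space; with
    p predictors the space has 2**p models."""
    n = len(y)
    results = []
    for k in range(len(names) + 1):
        for subset in itertools.combinations(range(len(names)), k):
            Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = ((y - Xs @ beta) ** 2).sum()
            aic = n * np.log(rss / n) + 2 * (len(subset) + 2)
            results.append((aic, [names[j] for j in subset]))
    return min(results)   # lowest AIC wins

rng = np.random.default_rng(4)
n = 200
X = rng.normal(size=(n, 5))
y = 1.5 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(0, 1, n)   # two real effects
aic, chosen = best_subset_aic(X, y, ["sugar", "age", "fluoride", "ses", "plaque"])
print("best model by AIC:", chosen)
```

With 2.6 million candidate models, as in the paper's illustration, the same loop structure applies but selected-model inferences must be corrected for the optimism induced by the search.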
The statistical multifragmentation model: Origins and recent advances
NASA Astrophysics Data System (ADS)
Donangelo, R.; Souza, S. R.
2016-07-01
We review the Statistical Multifragmentation Model (SMM), which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine the probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions, and heat capacity, among others. Some experimental measurements of the caloric curve confirmed SMM predictions made over 10 years earlier, leading to a surge of interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic liquid-drop picture employed in the SMM. This led to a series of improvements in the SMM, in particular to a more careful choice of nuclear masses and energy densities, especially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of the SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of the system size and excitation energy to attain maximum yields.
Callahan, Charles D; Griffen, David L
2003-08-01
Emergency medicine faces unique challenges in the effort to improve efficiency and effectiveness. Increased patient volumes, decreased emergency department (ED) supply, and an increased emphasis on the ED as a diagnostic center have contributed to poor customer satisfaction and process failures such as diversion/bypass. Statistical process control (SPC) techniques developed in industry offer an empirically based means to understand our work processes and manage by fact. Emphasizing that meaningful quality improvement can occur only when it is exercised by "front-line" providers, this primer presents robust yet accessible SPC concepts and techniques for use in today's ED.
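As one concrete example of an SPC technique in this spirit, an individuals (XmR) control chart flags special-cause variation in a daily ED metric. The data below are hypothetical; the 2.66 multiplier is the standard XmR constant (3/d2 with d2 = 1.128 for moving ranges of size two).

```python
import numpy as np

def xmr_limits(values):
    """Individuals (X) chart limits from the average moving range.

    Classic SPC rule: limits are mean +/- 2.66 * average moving range,
    where 2.66 = 3 / d2 with d2 = 1.128 for n = 2 moving ranges.
    """
    x = np.asarray(values, dtype=float)
    mr = np.abs(np.diff(x)).mean()
    center = x.mean()
    return center - 2.66 * mr, center, center + 2.66 * mr

# hypothetical daily ED door-to-doctor times (minutes)
times = [42, 38, 45, 41, 39, 44, 40, 43, 37, 46, 41, 40, 72, 39, 42]
lcl, cl, ucl = xmr_limits(times)
signals = [i for i, v in enumerate(times) if not lcl <= v <= ucl]
print(f"CL={cl:.1f}, limits=({lcl:.1f}, {ucl:.1f}), out-of-control days={signals}")
```

Points inside the limits reflect common-cause variation to be managed by changing the process, not by reacting to individual days; the flagged day warrants a special-cause investigation.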
Statistically Based Approach to Broadband Liner Design and Assessment
NASA Technical Reports Server (NTRS)
Nark, Douglas M. (Inventor); Jones, Michael G. (Inventor)
2016-01-01
A broadband liner design optimization includes utilizing in-duct attenuation predictions with a statistical fan source model to obtain optimum impedance spectra over a number of flow conditions for one or more liner locations in a bypass duct. The predicted optimum impedance information is then used with acoustic liner modeling tools to design liners having impedance spectra that most closely match the predicted optimum values. Design selection is based on an acceptance criterion that provides the ability to apply increasing weighting to specific frequencies and/or operating conditions. One or more broadband design approaches are utilized to produce a broadband liner that targets a full range of frequencies and operating conditions.
Statistical approach of weakly nonlinear ablative Rayleigh-Taylor instability
Garnier, J.; Masse, L.
2005-06-15
A weakly nonlinear model is proposed for the Rayleigh-Taylor instability in the presence of ablation and thermal transport. The nonlinear effects for a single-mode disturbance are computed, including the nonlinear correction to the exponential growth of the fundamental modulation. Mode coupling in the spectrum of a multimode disturbance is thoroughly analyzed by a statistical approach. The exponential growth of the linear regime is shown to be reduced by the nonlinear mode coupling. The saturation amplitude is around 0.1λ for long wavelengths, but higher for short unstable wavelengths in the ablative regime.
Statistical Approaches for the Study of Cognitive and Brain Aging
Chen, Huaihou; Zhao, Bingxin; Cao, Guanqun; Proges, Eric C.; O'Shea, Andrew; Woods, Adam J.; Cohen, Ronald A.
2016-01-01
Neuroimaging studies of cognitive and brain aging often yield massive datasets that create many analytic and statistical challenges. In this paper, we discuss and address several limitations in the existing work. (1) Linear models are often used to model the age effects on neuroimaging markers, which may be inadequate in capturing the potential nonlinear age effects. (2) Marginal correlations are often used in brain network analysis, which are not efficient in characterizing a complex brain network. (3) Due to the challenge of high-dimensionality, only a small subset of the regional neuroimaging markers is considered in a prediction model, which could miss important regional markers. To overcome those obstacles, we introduce several advanced statistical methods for analyzing data from cognitive and brain aging studies. Specifically, we introduce semiparametric models for modeling age effects, graphical models for brain network analysis, and penalized regression methods for selecting the most important markers in predicting cognitive outcomes. We illustrate these methods using the healthy aging data from the Active Brain Study. PMID:27486400
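The third ingredient above, penalized regression for selecting regional markers, can be sketched with a cross-validated lasso. The data are simulated: more regional markers than subjects, with only five truly predictive regions; nothing here reproduces the Active Brain Study analysis.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(42)
n, p = 150, 200                       # more regional markers than subjects
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [2.0, -1.5, 1.2, 1.0, -0.8]   # five truly predictive regions
y = X @ beta + rng.normal(0, 1.0, n)     # simulated cognitive outcome

# L1 penalty shrinks most coefficients exactly to zero; the penalty
# strength is chosen by 5-fold cross-validation.
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
print("markers selected:", len(selected), "including", selected[:5])
```

Cross-validated lasso tends to over-select somewhat, so a second stage (e.g., refitting and testing the selected markers on held-out data) is commonly added before interpretation.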
Learning the Language of Statistics: Challenges and Teaching Approaches
ERIC Educational Resources Information Center
Dunn, Peter K.; Carey, Michael D.; Richardson, Alice M.; McDonald, Christine
2016-01-01
Learning statistics requires learning the language of statistics. Statistics draws upon words from general English, mathematical English, discipline-specific English and words used primarily in statistics. This leads to many linguistic challenges in teaching statistics and the way in which the language is used in statistics creates an extra layer…
STATISTICS OF DARK MATTER HALOS FROM THE EXCURSION SET APPROACH
Lapi, A.; Salucci, P.; Danese, L.
2013-08-01
We exploit the excursion set approach in integral formulation to derive novel, accurate analytic approximations of the unconditional and conditional first crossing distributions for random walks with uncorrelated steps and general shapes of the moving barrier; we find the corresponding approximations of the unconditional and conditional halo mass functions for cold dark matter (DM) power spectra to represent very well the outcomes of state-of-the-art cosmological N-body simulations. In addition, we apply these results to derive, and confront with simulations, other quantities of interest in halo statistics, including the rates of halo formation and creation, the average halo growth history, and the halo bias. Finally, we discuss how our approach and main results change when considering random walks with correlated instead of uncorrelated steps, and warm instead of cold DM power spectra.
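The excursion-set setting with uncorrelated steps can be illustrated by a direct Monte Carlo of random walks absorbed by a constant barrier: for that case the fraction of walks that first-cross by variance S has the closed form erfc(δc/√(2S)), which the simulation should reproduce. The barrier value 1.686 is the usual spherical-collapse threshold; the step and walk counts are arbitrary choices for the sketch.

```python
import numpy as np
from math import erfc, sqrt

def first_crossing_mc(barrier=1.686, s_max=4.0, n_steps=400,
                      n_walks=20000, seed=3):
    """Monte Carlo first-crossing fraction for random walks with
    uncorrelated Gaussian steps in the variance S (sharp-k filter):
    each step has variance dS, and a walk counts as crossed once it
    exceeds the constant barrier at any S <= s_max."""
    rng = np.random.default_rng(seed)
    ds = s_max / n_steps
    steps = rng.normal(0.0, sqrt(ds), size=(n_walks, n_steps))
    walks = np.cumsum(steps, axis=1)
    return (walks >= barrier).any(axis=1).mean()

frac = first_crossing_mc()
analytic = erfc(1.686 / sqrt(2 * 4.0))   # reflection result, constant barrier
print(f"MC fraction crossed: {frac:.3f}  analytic: {analytic:.3f}")
```

Moving barriers and correlated steps, as treated in the paper, change the analytic side but leave this Monte Carlo structure intact, which is why such simulations serve as the reference for the approximations.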
A New Approach to Monte Carlo Simulations in Statistical Physics
NASA Astrophysics Data System (ADS)
Landau, David P.
2002-08-01
Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near 2nd-order transitions and to metastability near 1st-order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E 64, 056101-1 (2001).
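The random walk in energy space described in [2] can be illustrated on a system whose density of states is known exactly: N non-interacting spins, where E is the number of up spins and g(E) is the binomial coefficient C(N, E). This is a minimal sketch of the Wang-Landau procedure (accept with min(1, g(E)/g(E')), flat-histogram check, modification factor reduced as f → √f), not the published implementation, and the system size and schedule are arbitrary choices.

```python
import numpy as np
from math import comb, log

def wang_landau_density_of_states(n_spins=10, log_f_final=1e-5, flat=0.8,
                                  batch=10000, seed=7):
    """Wang-Landau random walk in energy space for N non-interacting spins.

    E = number of up spins, so the exact density of states is g(E) = C(N, E).
    Moves are single spin flips; ln g is updated at every visited level, and
    the modification factor is reduced (f -> sqrt(f)) whenever the energy
    histogram is flat.
    """
    rng = np.random.default_rng(seed)
    spins = rng.integers(0, 2, n_spins)
    E = int(spins.sum())
    log_g = np.zeros(n_spins + 1)
    hist = np.zeros(n_spins + 1)
    log_f = 1.0
    while log_f > log_f_final:
        idx = rng.integers(n_spins, size=batch)
        lu = np.log(rng.random(batch))
        for i, u in zip(idx, lu):
            E_new = E + (1 - 2 * spins[i])       # flip spin i: E changes by +/- 1
            if u < log_g[E] - log_g[E_new]:      # accept with min(1, g(E)/g(E_new))
                spins[i] ^= 1
                E = E_new
            log_g[E] += log_f
            hist[E] += 1
        if hist.min() > flat * hist.mean():      # flat-histogram criterion
            log_f /= 2.0                         # f -> sqrt(f) in log space
            hist[:] = 0
    return log_g - log_g[0]                      # normalize so g(0) = 1

log_g = wang_landau_density_of_states()
exact = np.array([log(comb(10, e)) for e in range(11)])
print("max |ln g error|:", np.abs(log_g - exact).max())
```

Once ln g(E) is in hand, canonical averages at any temperature follow from reweighting with exp(ln g(E) - E/kT), which is the sense in which "all thermodynamic properties can be calculated" from one run.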
The Precautionary Principle and statistical approaches to uncertainty.
Keiding, Niels; Budtz-Jørgensen, Esben
2004-01-01
The central challenge from the Precautionary Principle to statistical methodology is to help delineate (preferably quantitatively) the possibility that some exposure is hazardous, even in cases where this is not established beyond reasonable doubt. The classical approach to hypothesis testing is unhelpful, because lack of significance can be due either to uninformative data or to genuine lack of effect (the Type II error problem). Its inversion, bioequivalence testing, might sometimes be a model for the Precautionary Principle in its ability to "prove the null hypothesis". Current procedures for setting safe exposure levels are essentially derived from these classical statistical ideas, and we outline how uncertainties in the exposure and response measurements affect the no observed adverse effect level, the Benchmark approach and the "Hockey Stick" model. A particular problem concerns model uncertainty: usually these procedures assume that the class of models describing dose/response is known with certainty; this assumption is, however, often violated, perhaps particularly often when epidemiological data form the source of the risk assessment, and regulatory authorities have occasionally resorted to some average based on competing models. The recent methodology of the Bayesian model averaging might be a systematic version of this, but is this an arena for the Precautionary Principle to come into play?
A feature refinement approach for statistical interior CT reconstruction.
Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong
2016-07-21
Interior tomography is clinically desired to reduce the radiation dose rendered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in the conventional total-variation (TV)-minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of projection data, the objective function is built under the criteria of penalized weighed least-square (PWLS-TV). In the implementation of the proposed method, the interior projection extrapolation based FBP reconstruction is first used as the initial guess to mitigate truncation artifacts and also provide an extended field-of-view. Moreover, an interior feature refinement step, as an important processing operation is performed after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo Micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than other conventional methods in suppressing noise, reducing truncated and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation of interior tomography under truncated projection measurements.
Multilayer Approach for Advanced Hybrid Lithium Battery.
Ming, Jun; Li, Mengliu; Kumar, Pushpendra; Li, Lain-Jong
2016-06-28
Conventional intercalated rechargeable batteries have shown their capacity limit, and the development of an alternative battery system with higher capacity is strongly needed for sustainable electrical vehicles and hand-held devices. Herein, we introduce a feasible and scalable multilayer approach to fabricate a promising hybrid lithium battery with superior capacity and multivoltage plateaus. A sulfur-rich electrode (90 wt % S) is covered by a dual layer of graphite/Li4Ti5O12, where the active materials S and Li4Ti5O12 can both take part in redox reactions and thus deliver a high capacity of 572 mAh g⁻¹ (vs the total mass of the electrode) or 1866 mAh g⁻¹ (vs the mass of sulfur) at 0.1C (with the definition 1C = 1675 mA per gram of sulfur). The battery shows unique voltage platforms at 2.35 and 2.1 V, contributed from S, and 1.55 V from Li4Ti5O12. A high rate capability of 566 mAh g⁻¹ at 0.25C and 376 mAh g⁻¹ at 1C (vs the total mass of the electrode), with durable cycling over 100 cycles, can be achieved. Operando Raman and electron microscope analysis confirm that the graphite/Li4Ti5O12 layer slows the dissolution/migration of polysulfides, thereby giving rise to higher sulfur utilization and slower capacity decay. This advanced hybrid battery, with a multilayer concept for marrying different voltage plateaus from various electrode materials, opens a way of providing tunable capacity and multiple voltage platforms for energy device applications.
Statistical approaches and software for clustering islet cell functional heterogeneity
Wills, Quin F.; Boothe, Tobias; Asadi, Ali; Ao, Ziliang; Warnock, Garth L.; Kieffer, Timothy J.
2016-01-01
Worldwide efforts are underway to replace or repair lost or dysfunctional pancreatic β-cells to cure diabetes. However, it is unclear what the final product of these efforts should be, as β-cells are thought to be heterogeneous. To enable the analysis of β-cell heterogeneity in an unbiased and quantitative way, we developed model-free and model-based statistical clustering approaches, and created new software called TraceCluster. Using an example data set, we illustrate the utility of these approaches by clustering dynamic intracellular Ca2+ responses to high glucose in ∼300 simultaneously imaged single islet cells. Using feature extraction from the Ca2+ traces on this reference data set, we identified 2 distinct populations of cells with β-like responses to glucose. To the best of our knowledge, this report represents the first unbiased cluster-based analysis of human β-cell functional heterogeneity from simultaneous recordings. We hope that the approaches and tools described here will be helpful for those studying heterogeneity in primary islet cells, as well as excitable cells derived from embryonic stem cells or induced pluripotent cells. PMID:26909740
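A model-free, feature-based clustering of Ca2+ traces in the spirit described above can be sketched as follows. The traces, the two extracted features, and the two-cluster choice are hypothetical stand-ins for illustration; this is not the TraceCluster software.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 200)                        # minutes after glucose step

# Hypothetical traces: 40 glucose responders (sigmoidal rise) and
# 40 non-responders (flat), each with measurement noise.
responders = 1.0 / (1 + np.exp(-(t - 4))) + rng.normal(0, 0.05, (40, 200))
flat = rng.normal(0, 0.05, (40, 200))
traces = np.vstack([responders, flat])

# Simple per-cell features: post-stimulus mean and peak amplitude.
features = np.column_stack([traces[:, 100:].mean(axis=1), traces.max(axis=1)])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("cluster sizes:", np.bincount(labels))
```

Richer feature sets (oscillation frequency, time to peak, area under the curve) feed the same pipeline, and model-based alternatives replace KMeans with a mixture model that also estimates the number of populations.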
Urban pavement surface temperature. Comparison of numerical and statistical approach
NASA Astrophysics Data System (ADS)
Marchetti, Mario; Khalifa, Abderrahmen; Bues, Michel; Bouilloud, Ludovic; Martin, Eric; Chancibaut, Katia
2015-04-01
The forecast of pavement surface temperature is very specific in the context of urban winter maintenance, where it is used to manage snow plowing and the salting of roads. Such forecasts rely mainly on numerical models describing the energy balance between the atmosphere, the buildings, and the pavement in a canyon configuration. Nevertheless, there is a specific need for a physical description and numerical implementation of traffic in the energy flux balance, since traffic was originally treated as a constant. Several changes were made to a numerical model to describe as accurately as possible the traffic effects on this urban energy balance, such as tire friction, the pavement-air exchange coefficient, and the net infrared flux balance. Experiments based on infrared thermography and radiometry were then conducted to quantify the effect of traffic on urban pavement surfaces. Based on meteorological data, the corresponding pavement temperature forecasts were calculated and compared with field measurements. Results indicated good agreement between the forecasts from the numerical model based on this energy balance approach and the measurements. A complementary forecast approach based on principal component analysis (PCA) and partial least-squares regression (PLS) was also developed, using data from thermal mapping with infrared radiometry. The forecast of pavement surface temperature from air temperature was obtained in the specific case of the urban configuration, with traffic included in the measurements used for the statistical analysis. A comparison between the results from the numerical model based on the energy balance and from PCA/PLS was then conducted, indicating the advantages and limits of each approach.
Modulational Instability of Cylindrical and Spherical NLS Equations. Statistical Approach
Grecu, A. T.; Grecu, D.; Visinescu, Anca; De Nicola, S.; Fedele, R.
2010-01-21
The modulational (Benjamin-Feir) instability for cylindrical and spherical NLS equations (c/s NLS equations) is studied using a statistical approach (SAMI). A kinetic equation for a two-point correlation function is written and analyzed using the Wigner-Moyal transform. The linear stability of the Fourier transform of the two-point correlation function is studied and an implicit integral form for the dispersion relation is found. This is solved for different expressions of the initial spectrum (delta-spectrum, Lorentzian, Gaussian), and in the case of a Lorentzian spectrum the total growth of the instability is calculated. The similarities and differences with the usual one-dimensional NLS equation are emphasized.
Rate-equation approach to atomic-laser light statistics
Chusseau, Laurent; Arnaud, Jacques; Philippe, Fabrice
2002-11-01
We consider three- and four-level atomic lasers that are either incoherently (unidirectionally) or coherently (bidirectionally) pumped, the single-mode cavity being resonant with the laser transition. The intracavity Fano factor and the photocurrent spectral density are evaluated on the basis of rate equations. According to that approach, fluctuations are caused by jumps in active and detecting atoms. The algebra is simple. Whenever a comparison is made, the expressions obtained coincide with the previous results. The conditions under which the output light exhibits sub-Poissonian statistics are considered in detail. Analytical results, based on linearization, are verified by comparison with Monte Carlo simulations. An essentially exhaustive investigation of sub-Poissonian light generation by three- and four-level lasers has been performed. Only special forms were reported earlier.
On multiparticle statistical approach to the solar wind modeling
NASA Astrophysics Data System (ADS)
Minkova, N. R.
The suggested model of the stationary solar plasma flow is based on the Liouville equation and the assumption that particles have indistinguishable coordinates within the volume of the instrumental resolution scale [1]. For the case of a collisionless, fully ionized hydrogen two-component plasma flow ejected by the Sun, this multiparticle model reduces to a two-particle model [2]. The resulting radial dependences of solar wind density and speed are derived and compared to observational data. References: [1] Minkova, N. R., "Multiparticle statistical approach to the collisionless solar plasma modeling," Russian Physics Journal (Izvestija vuzov, Physics), 2004, V. 47, No. 10, special issue on applied problems of mechanics of continua, pp. 73-80. [2] Vsenin, Y. M.; Minkova, N. R., "Two-particle quasineutral kinetic model of collisionless solar wind," Journal of Physics A: Mathematical and General, 2003, V. 36, Issue 22, pp. 6215-6220.
MASKED AREAS IN SHEAR PEAK STATISTICS: A FORWARD MODELING APPROACH
Bard, D.; Kratochvil, J. M.; Dawson, W.
2016-03-10
The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint of models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.
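The forward-modeling idea of applying the survey mask to the simulations, rather than correcting the observed map, can be illustrated with a toy peak-count statistic on Gaussian noise maps. The smoothing scale, threshold, and mask geometry below are arbitrary choices for the sketch, not those of the Deep Lens Survey masks.

```python
import numpy as np
from scipy import ndimage

def peak_count(field, mask, threshold):
    """Count local maxima above threshold in a smoothed, masked map."""
    smooth = ndimage.gaussian_filter(field, sigma=2)   # crude aperture-mass analogue
    peaks = (smooth == ndimage.maximum_filter(smooth, size=5)) & (smooth > threshold)
    return int((peaks & mask).sum())

rng = np.random.default_rng(11)
shape = (256, 256)
mask = np.ones(shape, bool)
mask[60:120, 60:120] = False          # e.g. a region lost to a bright star

# Forward model: the identical mask is applied to every simulated map,
# so the theory prediction carries the masking effect instead of the
# data being "corrected" for it.
sims = [peak_count(rng.normal(size=shape), mask, 0.3) for _ in range(50)]
obs = peak_count(rng.normal(size=shape), mask, 0.3)
lo, hi = np.percentile(sims, [2.5, 97.5])
print(f"observed peaks: {obs}, simulated 95% range: ({lo:.0f}, {hi:.0f})")
```

In the cosmological application the simulated maps come from N-body ray tracing at different parameter values, and the comparison of masked observed counts to masked simulated counts yields the parameter constraints.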
A statistical approach for distinguishing hybridization and incomplete lineage sorting.
Joly, Simon; McLenachan, Patricia A; Lockhart, Peter J
2009-08-01
The extent and evolutionary significance of hybridization is difficult to evaluate because of the difficulty in distinguishing hybridization from incomplete lineage sorting. Here we present a novel parametric approach for statistically distinguishing hybridization from incomplete lineage sorting based on minimum genetic distances of a nonrecombining locus. It is based on the idea that the expected minimum genetic distance between sequences from two species is smaller for some hybridization events than for incomplete lineage sorting scenarios. When applied to empirical data sets, distributions can be generated for the minimum interspecies distances expected under incomplete lineage sorting using coalescent simulations. If the observed distance between sequences from two species is smaller than its predicted distribution, incomplete lineage sorting can be rejected and hybridization inferred. We demonstrate the power of the method using simulations and illustrate its application on New Zealand alpine buttercups (Ranunculus). The method is robust and complements existing approaches. Thus it should allow biologists to assess with greater accuracy the importance of hybridization in evolution.
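The test described above reduces to a generic Monte Carlo procedure: simulate the minimum interspecies distance under an ILS-only null model many times, then ask how often the null produces a value at least as small as the observed one. The sketch below is hypothetical: `toy_null`, its Gaussian parameters, and the sample sizes stand in for proper coalescent simulations on real sequence data.

```python
import random

def min_distance_test(observed_min, simulate_min, n_sims=1000, seed=42):
    """Monte Carlo test in the spirit of Joly et al.: incomplete lineage
    sorting is rejected (and hybridization inferred) when the observed
    minimum interspecies distance is smaller than expected under an
    ILS-only null model. `simulate_min` returns one simulated minimum
    distance under that null."""
    random.seed(seed)
    null = [simulate_min() for _ in range(n_sims)]
    # one-sided p-value: fraction of null minima at least as small as observed
    return sum(1 for d in null if d <= observed_min) / n_sims

def toy_null():
    # hypothetical null: minimum of 10 pairwise distances drawn around a
    # made-up coalescent expectation (illustration only)
    return min(random.gauss(0.05, 0.01) for _ in range(10))

# an observed minimum distance far below the null expectation
p_value = min_distance_test(0.001, toy_null)
```

A small p-value here would reject incomplete lineage sorting for that locus pair, mirroring the paper's decision rule.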
Statistical physics approach to quantifying differences in myelinated nerve fibers
NASA Astrophysics Data System (ADS)
Comin, César H.; Santos, João R.; Corradini, Dario; Morrison, Will; Curme, Chester; Rosene, Douglas L.; Gabrielli, Andrea; da F. Costa, Luciano; Stanley, H. Eugene
2014-03-01
We present a new method to quantify differences in myelinated nerve fibers. These differences range from morphologic characteristics of individual fibers to differences in macroscopic properties of collections of fibers. Our method uses statistical physics tools to improve on traditional measures, such as fiber size and packing density. As a case study, we analyze cross-sectional electron micrographs from the fornix of young and old rhesus monkeys using a semi-automatic detection algorithm to identify and characterize myelinated axons. We then apply a feature selection approach to identify the features that best distinguish between the young and old age groups, achieving a maximum accuracy of 94% when assigning samples to their age groups. This analysis shows that the best discrimination is obtained using the combination of two features: the fraction of occupied axon area and the effective local density. The latter is a modified calculation of axon density, which reflects how closely axons are packed. Our feature analysis approach can be applied to characterize differences that result from biological processes such as aging, damage from trauma or disease or developmental differences, as well as differences between anatomical regions such as the fornix and the cingulum bundle or corpus callosum.
Statistical approach to meteoroid shape estimation based on recovered meteorites
NASA Astrophysics Data System (ADS)
Vinnikov, V.; Gritsevich, M.; Turchak, L.
2014-07-01
Each meteorite sample can provide data on the chemical and physical properties of interplanetary matter. The set of recovered fragments within one meteorite fall can give additional information on the history of its parent asteroid. A reliably estimated meteoroid shape is a valuable input parameter for the atmospheric entry scenario, since the pre-entry mass, terminal meteorite mass, and fireball luminosity are proportional to the pre-entry shape factor of the meteoroid to the power of 3 [1]. We present a statistical approach to the estimation of meteoroid pre-entry shape [2], applied to the detailed data on recovered meteorite fragments. This is a development of our recent study on the fragment mass distribution functions for the Košice meteorite fall [3]. The idea of the shape estimation technique is based on experiments that show that brittle fracturing produces multiple fragments of sizes smaller than or equal to the smallest dimension of the body [2]. Such shattering has fractal properties similar to many other natural phenomena [4]. Thus, this self-similarity for scaling mass sequences can be described by power-law statistical expressions [5]. The finite mass and the number of fragments N are represented via an exponential cutoff at the maximum fragment mass m_U. The undersampling of tiny unrecoverable fragments is handled via an additional constraint on the minimum fragment mass m_L. The complementary cumulative distribution function has the form F(m) = \frac{N-j}{m_j}\left(\frac{m}{m_j}\right)^{-\beta_0}\exp\left(-\frac{m-m_j}{m_U}\right). The sought parameters (scaling exponent \beta_0 and mass limits) are computed to fit the empirical fragment mass distribution: S(\beta_0, j, m_U) = \sum_{i=j}^{N}\left[F(m_i) - \frac{N-j}{m_j}\right]^2, with m_j = m_L. The scaling exponent correlates with the dimensionless shape parameter d [2]: 0.13d^2 - 0.21d + 1.1 - \beta_0 = 0, which, in turn, is expressed via the ratio of the linear dimensions a, b, c of the shattering body [2]: d = 1 + 2(ab+ac+bc)(a^2+b^2+c^2)^{-1}. We apply the
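The fitting step described in this abstract (least-squares matching of a power law with exponential cutoff to the empirical fragment mass distribution) can be sketched as a simple grid search. This is an illustrative stand-in, not the authors' code: the synthetic fragment masses, the candidate grids for the scaling exponent and cutoff mass, and the decaying sign of the exponential cutoff are all assumptions.

```python
import math

def ccdf_model(m, beta, m_u, m_min):
    # power law with exponential cutoff; the negative exponent is assumed
    # so the tail actually decays toward the maximum fragment mass m_U
    return (m / m_min) ** (-beta) * math.exp(-(m - m_min) / m_u)

def fit_fragments(masses, betas, cutoffs):
    """Grid-search least-squares fit of (beta, m_U) to the empirical
    complementary CDF of fragment masses -- an illustrative stand-in for
    the paper's minimisation of S(beta_0, j, m_U)."""
    masses = sorted(masses)
    n = len(masses)
    m_min = masses[0]
    # empirical CCDF: fraction of fragments with mass >= m_i
    emp = [(n - i) / n for i in range(n)]
    best = None
    for beta in betas:
        for m_u in cutoffs:
            s = sum((ccdf_model(m, beta, m_u, m_min) - e) ** 2
                    for m, e in zip(masses, emp))
            if best is None or s < best[0]:
                best = (s, beta, m_u)
    return best[1], best[2]

# hypothetical fragment masses roughly following a truncated power law
masses = [1.0, 1.2, 1.5, 2.0, 2.6, 3.5, 5.0, 8.0, 14.0, 30.0]
beta, m_u = fit_fragments(masses,
                          [0.2 * k for k in range(1, 15)],
                          [5.0, 10.0, 20.0, 50.0, 100.0])
```

In practice one would refine the grid around the best cell or hand the objective to a numerical optimiser; the grid search keeps the sketch dependency-free.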
Statistical Approach to the Transformation of Fly Ash into Zeolites
NASA Astrophysics Data System (ADS)
Derkowski, Arkadiusz; Michalik, Marek
2007-01-01
The experimental conversion of F-class fly ash into zeolites is described. The ash, composed mainly of aluminosilicate glass, mullite and quartz, was collected in the Cracow power plant (southern Poland). The experiments involved the heating of fly ash samples in PTFE vessels. Time, temperature and solution composition were the reaction parameters considered in the experiments and in the subsequent modeling. A series of reactions with 0.5, 3 and 5M NaOH solutions (and some with additional 3M NaCl) were carried out at 70°, 100° and 150°C for 12-48 hours under autogenic pressure (not measured) and at a constant ash-to-solution ratio of 33.3 g/l. The following zeolite phases were synthesized: sodalite (SOD structure), hydroxysodalite (SOD), CAN type phases, Na-X (FAU), and NaP1 (GIS). Statistically calculated relationships based on the mineral and chemical compositions of the reaction products support the conclusion that the type of zeolite phase that crystallizes depends on the concentration of OH- and Cl- in solution and on the temperature of the reaction. The duration of reaction, if on the order of tens of hours, is of less significance. The nature of the zeolite phase that crystallizes is controlled by the intensity and selectivity of the substrate dissolution. That dissolution can favour, in sequence, one or other of the components in the substrate, resulting in Si/Al variation in the reaction solutions. Mullite dissolution (decreasing solution Si/Al) characterizes the most advanced reaction stages. The sequence of crystallization of the zeolite phases mirrors the sequential dissolution of substrate components, and the composition of the crystallizing zeolite crystals reflects the changes in the solution Si/Al.
New Statistical Approach to the Analysis of Hierarchical Data
NASA Astrophysics Data System (ADS)
Neuman, S. P.; Guadagnini, A.; Riva, M.
2014-12-01
Many variables possess a hierarchical structure reflected in how their increments vary in space and/or time. Quite commonly the increments (a) fluctuate in a highly irregular manner; (b) possess symmetric, non-Gaussian frequency distributions characterized by heavy tails that often decay with separation distance or lag; (c) exhibit nonlinear power-law scaling of sample structure functions in a midrange of lags, with breakdown in such scaling at small and large lags; (d) show extended power-law scaling (ESS) at all lags; and (e) display nonlinear scaling of power-law exponent with order of sample structure function. Some interpret this to imply that the variables are multifractal, which explains neither breakdowns in power-law scaling nor ESS. We offer an alternative interpretation consistent with all above phenomena. It views data as samples from stationary, anisotropic sub-Gaussian random fields subordinated to truncated fractional Brownian motion (tfBm) or truncated fractional Gaussian noise (tfGn). The fields are scaled Gaussian mixtures with random variances. Truncation of fBm and fGn entails filtering out components below data measurement or resolution scale and above domain scale. Our novel interpretation of the data allows us to obtain maximum likelihood estimates of all parameters characterizing the underlying truncated sub-Gaussian fields. These parameters in turn make it possible to downscale or upscale all statistical moments to situations entailing smaller or larger measurement or resolution and sampling scales, respectively. They also allow one to perform conditional or unconditional Monte Carlo simulations of random field realizations corresponding to these scales. Aspects of our approach are illustrated on field and laboratory measured porous and fractured rock permeabilities, as well as soil texture characteristics and neural network estimates of unsaturated hydraulic parameters in a deep vadose zone near Phoenix, Arizona. We also use our approach
ERIC Educational Resources Information Center
Hassan, Mahamood M.; Schwartz, Bill N.
2014-01-01
This paper discusses a student research project that is part of an advanced cost accounting class. The project emphasizes active learning, integrates cost accounting with macroeconomics and statistics by "learning by doing" using real world data. Students analyze sales data for a publicly listed company by focusing on the company's…
Heads Up! a Calculation- & Jargon-Free Approach to Statistics
ERIC Educational Resources Information Center
Giese, Alan R.
2012-01-01
Evaluating the strength of evidence in noisy data is a critical step in scientific thinking that typically relies on statistics. Students without statistical training will benefit from heuristic models that highlight the logic of statistical analysis. The likelihood associated with various coin-tossing outcomes gives students such a model. There…
A statistical state dynamics approach to wall turbulence
NASA Astrophysics Data System (ADS)
Farrell, B. F.; Gayme, D. F.; Ioannou, P. J.
2017-03-01
This paper reviews results obtained using statistical state dynamics (SSD) that demonstrate the benefits of adopting this perspective for understanding turbulence in wall-bounded shear flows. The SSD approach used in this work employs a second-order closure that retains only the interaction between the streamwise mean flow and the streamwise mean perturbation covariance. This closure restricts nonlinearity in the SSD to that explicitly retained in the streamwise constant mean flow together with nonlinear interactions between the mean flow and the perturbation covariance. This dynamical restriction, in which explicit perturbation-perturbation nonlinearity is removed from the perturbation equation, results in a simplified dynamics referred to as the restricted nonlinear (RNL) dynamics. RNL systems, in which a finite ensemble of realizations of the perturbation equation share the same mean flow, provide tractable approximations to the SSD, which is equivalent to an infinite ensemble RNL system. This infinite ensemble system, referred to as the stochastic structural stability theory system, introduces new analysis tools for studying turbulence. RNL systems provide computationally efficient means to approximate the SSD and produce self-sustaining turbulence exhibiting qualitative features similar to those observed in direct numerical simulations despite greatly simplified dynamics. The results presented show that RNL turbulence can be supported by as few as a single streamwise varying component interacting with the streamwise constant mean flow and that judicious selection of this truncated support or `band-limiting' can be used to improve quantitative accuracy of RNL turbulence. These results suggest that the SSD approach provides new analytical and computational tools that allow new insights into wall turbulence.
A descriptive statistical approach to the Korean pharmacopuncture therapy.
Kim, Jungdae; Kang, Dae-In
2010-09-01
This paper reviews trends in research related to Korean pharmacopuncture therapy. Specifically, basic and clinical research in pharmacopuncture within the last decade is summarized by introducing categorical variables for classification. These variables are also analyzed for association. This literature review is based on articles published from February 1997 to December 2008 in a Korean journal, the Journal of the Korean Institute of Herbal Acupuncture, which was renamed the Journal of the Korean Pharmacopuncture Institute in 2007. Among the total of 379 papers published in the journal during this period, 164 papers were selected for their direct relevance to pharmacopuncture research and were categorized according to three variables: medicinal materials, acupuncture points and disease. The most frequently studied medicinal materials were bee-venom pharmacopuncture (42%), followed by meridian-field pharmacopuncture (24%), single-compound pharmacopuncture (24%), and eight-principle pharmacopuncture (10%). The frequency distributions of the acupuncture points and meridians for the injection of medicinal materials are presented. The most frequently used meridian and acupuncture point were, respectively, the Bladder meridian and ST36. Contingency tables are also displayed to analyze the relationship between the categorized variables. Chi-squared analysis showed a significant association between the type of pharmacopuncture and disease. The trend in research reports on Korean pharmacopuncture therapy was reviewed and analyzed using a descriptive statistical approach to evaluate the therapeutic value of this technique for future research.
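The chi-squared association test this abstract relies on is easy to compute directly from a two-way contingency table. The sketch below uses invented counts (the paper's actual tables are not reproduced here); it shows only the mechanics of the Pearson statistic.

```python
def chi_square(table):
    """Pearson chi-squared statistic and degrees of freedom for a two-way
    contingency table, as used to test association between categorical
    variables (e.g. pharmacopuncture type vs. disease)."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # expected count under independence of rows and columns
            expected = rows[i] * cols[j] / total
            chi2 += (observed - expected) ** 2 / expected
    df = (len(rows) - 1) * (len(cols) - 1)
    return chi2, df

# hypothetical counts: rows = pharmacopuncture types, cols = disease groups
table = [[30, 10],
         [10, 30]]
stat, df = chi_square(table)  # stat = 20.0, df = 1
```

With df = 1, a statistic of 20.0 is far beyond the usual 3.84 critical value at the 5% level, so these toy counts would indicate a strong association.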
Statistical physics and physiology: monofractal and multifractal approaches
NASA Technical Reports Server (NTRS)
Stanley, H. E.; Amaral, L. A.; Goldberger, A. L.; Havlin, S.; Peng, C. K.
1999-01-01
Even under healthy, basal conditions, physiologic systems show erratic fluctuations resembling those found in dynamical systems driven away from a single equilibrium state. Do such "nonequilibrium" fluctuations simply reflect the fact that physiologic systems are being constantly perturbed by external and intrinsic noise? Or do these fluctuations actually contain useful, "hidden" information about the underlying nonequilibrium control mechanisms? We report some recent attempts to understand the dynamics of complex physiologic fluctuations by adapting and extending concepts and methods developed very recently in statistical physics. Specifically, we focus on interbeat interval variability as an important quantity to help elucidate possibly non-homeostatic physiologic variability because (i) the heart rate is under direct neuroautonomic control, (ii) interbeat interval variability is readily measured by noninvasive means, and (iii) analysis of these heart rate dynamics may provide important practical diagnostic and prognostic information not obtainable with current approaches. The analytic tools we discuss may be used on a wider range of physiologic signals. We first review recent progress using two analysis methods--detrended fluctuation analysis and wavelets--sufficient for quantifying monofractal structures. We then describe recent work that quantifies multifractal features of interbeat interval series, and the discovery that the multifractal structure of healthy subjects differs from that of diseased subjects.
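Detrended fluctuation analysis, one of the two monofractal tools mentioned above, has a compact core: integrate the mean-subtracted series, detrend it within windows of each scale, and record the RMS residual per scale. The sketch below is a minimal order-1 DFA on a made-up test signal, not the authors' implementation; the scaling exponent would be the log-log slope of F(s) across scales.

```python
import math

def dfa_fluctuations(x, scales):
    """Order-1 detrended fluctuation analysis: returns the RMS
    fluctuation F(s) for each window size s in `scales`."""
    mean = sum(x) / len(x)
    profile, total = [], 0.0
    for v in x:                      # integrate the mean-subtracted series
        total += v - mean
        profile.append(total)
    fs = []
    for s in scales:
        n_win = len(profile) // s
        resid_sq = 0.0
        t = list(range(s))
        tm = sum(t) / s
        denom = sum((ti - tm) ** 2 for ti in t)
        for w in range(n_win):
            seg = profile[w * s:(w + 1) * s]
            ym = sum(seg) / s
            # least-squares linear detrend within the window
            slope = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, seg)) / denom
            resid_sq += sum((yi - (ym + slope * (ti - tm))) ** 2
                            for ti, yi in zip(t, seg))
        fs.append(math.sqrt(resid_sq / (n_win * s)))
    return fs

# hypothetical correlated test signal: a slow sine plus a fast oscillation
x = [math.sin(2 * math.pi * i / 64) + 0.5 * math.sin(2 * math.pi * i / 4)
     for i in range(256)]
f4, f16 = dfa_fluctuations(x, [4, 16])
```

For a correlated signal like this, F(s) grows with the window size, which is exactly the behavior the scaling exponent summarizes.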
Probabilistic Forecasting of Surface Ozone with a Novel Statistical Approach
NASA Technical Reports Server (NTRS)
Balashov, Nikolay V.; Thompson, Anne M.; Young, George S.
2017-01-01
The recent change in the Environmental Protection Agency's surface ozone regulation, lowering the surface ozone daily maximum 8-h average (MDA8) exceedance threshold from 75 to 70 ppbv, poses significant challenges to U.S. air quality (AQ) forecasters responsible for ozone MDA8 forecasts. The forecasters, supplied by only a few AQ model products, end up relying heavily on self-developed tools. To help U.S. AQ forecasters, this study explores a surface ozone MDA8 forecasting tool that is based solely on statistical methods and standard meteorological variables from the numerical weather prediction (NWP) models. The model combines the self-organizing map (SOM), which is a clustering technique, with a stepwise weighted quadratic regression using meteorological variables as predictors for ozone MDA8. The SOM method identifies different weather regimes, to distinguish between various modes of ozone variability, and groups them according to similarity. In this way, when a regression is developed for a specific regime, data from the other regimes are also used, with weights that are based on their similarity to this specific regime. This approach, regression in SOM (REGiS), yields a distinct model for each regime taking into account both the training cases for that regime and other similar training cases. To produce probabilistic MDA8 ozone forecasts, REGiS weighs and combines all of the developed regression models on the basis of the weather patterns predicted by an NWP model. REGiS is evaluated over the San Joaquin Valley in California and the northeastern plains of Colorado. The results suggest that the model performs best when trained and adjusted separately for an individual AQ station and its corresponding meteorological site.
Resistive switching phenomena: A review of statistical physics approaches
Lee, Jae Sung; Lee, Shinbuhm; Noh, Tae Won
2015-08-31
Here we report that resistive switching (RS) phenomena are reversible changes in the metastable resistance state induced by external electric fields. After discovery ~50 years ago, RS phenomena have attracted great attention due to their potential application in next-generation electrical devices. Considerable research has been performed to understand the physical mechanisms of RS and explore the feasibility and limits of such devices. There have also been several reviews on RS that attempt to explain the microscopic origins of how regions that were originally insulators can change into conductors. However, little attention has been paid to the most important factor in determining resistance: how conducting local regions are interconnected. Here, we provide an overview of the underlying physics behind connectivity changes in highly conductive regions under an electric field. We first classify RS phenomena according to their characteristic current–voltage curves: unipolar, bipolar, and threshold switchings. Second, we outline the microscopic origins of RS in oxides, focusing on the roles of oxygen vacancies: the effect of concentration, the mechanisms of channel formation and rupture, and the driving forces of oxygen vacancies. Third, we review RS studies from the perspective of statistical physics to understand connectivity change in RS phenomena. We discuss percolation model approaches and the theory for the scaling behaviors of numerous transport properties observed in RS. Fourth, we review various switching-type conversion phenomena in RS: bipolar-unipolar, memory-threshold, figure-of-eight, and counter-figure-of-eight conversions. Finally, we review several related technological issues, such as improvement in high resistance fluctuations, sneak-path problems, and multilevel switching problems.
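The percolation-model perspective this review highlights, that resistance is set by whether conducting local regions interconnect, can be illustrated with a minimal site-percolation check: does a cluster of occupied (conducting) sites span the lattice from top to bottom? The grid size, occupation probability, and 4-neighbour connectivity below are arbitrary choices for the sketch, not parameters from the review.

```python
import random

def percolates(grid):
    """Depth-first search for top-to-bottom connectivity of occupied
    sites (4-neighbour rule) on a square lattice -- a minimal picture of
    conducting-filament formation in resistive switching."""
    n = len(grid)
    stack = [(0, j) for j in range(n) if grid[0][j]]
    seen = set(stack)
    while stack:
        i, j = stack.pop()
        if i == n - 1:          # reached the bottom row: spanning cluster
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < n and 0 <= nj < n
                    and grid[ni][nj] and (ni, nj) not in seen):
                seen.add((ni, nj))
                stack.append((ni, nj))
    return False

random.seed(1)
p = 0.8  # occupation probability, above the square-lattice threshold (~0.593)
grid = [[random.random() < p for _ in range(12)] for _ in range(12)]
connected = percolates(grid)
```

Sweeping `p` and recording the spanning probability reproduces the sharp insulator-to-conductor transition that motivates the percolation description of RS.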
Ice Shelf Modeling: A Cross-Polar Bayesian Statistical Approach
NASA Astrophysics Data System (ADS)
Kirchner, N.; Furrer, R.; Jakobsson, M.; Zwally, H. J.
2010-12-01
Ice streams interlink glacial terrestrial and marine environments: embedded in a grounded inland ice mass such as the Antarctic Ice Sheet or the paleo ice sheets covering extensive parts of the Eurasian and Amerasian Arctic respectively, ice streams are major drainage agents facilitating the discharge of substantial portions of continental ice into the ocean. At their seaward side, ice streams can either extend onto the ocean as floating ice tongues (such as the Drygalski Ice Tongue/East Antarctica), or feed large ice shelves (as is the case for e.g. the Siple Coast and the Ross Ice Shelf/West Antarctica). The flow behavior of ice streams has been recognized to be intimately linked with configurational changes in their attached ice shelves; in particular, ice shelf disintegration is associated with rapid ice stream retreat and increased mass discharge from the continental ice mass, eventually contributing to sea level rise. Investigations of ice stream retreat mechanisms are, however, incomplete if based on terrestrial records only: rather, the dynamics of ice shelves (and, eventually, the impact of the ocean on the latter) must be accounted for. However, since floating ice shelves leave hardly any traces behind when melting, uncertainty regarding the spatio-temporal distribution and evolution of ice shelves in times prior to instrumented and recorded observation is high, thus calling for a statistical modeling approach. Complementing ongoing large-scale numerical modeling efforts (Pollard & DeConto, 2009), we model the configuration of ice shelves by using a Bayesian Hierarchical Modeling (BHM) approach. We adopt a cross-polar perspective accounting for the fact that currently, ice shelves exist mainly along the coastline of Antarctica (and are virtually non-existent in the Arctic), while Arctic Ocean ice shelves repeatedly impacted the Arctic Ocean basin during former glacial periods. Modeled Arctic Ocean ice shelf configurations are compared with geological spatial
Advancing Instructional Communication: Integrating a Biosocial Approach
ERIC Educational Resources Information Center
Horan, Sean M.; Afifi, Tamara D.
2014-01-01
Celebrating 100 years of the National Communication Association necessitates that, as we commemorate our past, we also look toward our future. As part of a larger conversation about the future of instructional communication, this essay reinvestigates the importance of integrating biosocial approaches into instructional communication research. In…
Statistical Learning of Phonetic Categories: Insights from a Computational Approach
ERIC Educational Resources Information Center
McMurray, Bob; Aslin, Richard N.; Toscano, Joseph C.
2009-01-01
Recent evidence (Maye, Werker & Gerken, 2002) suggests that statistical learning may be an important mechanism for the acquisition of phonetic categories in the infant's native language. We examined the sufficiency of this hypothesis and its implications for development by implementing a statistical learning mechanism in a computational model…
Artificial Intelligence Approach to Support Statistical Quality Control Teaching
ERIC Educational Resources Information Center
Reis, Marcelo Menezes; Paladini, Edson Pacheco; Khator, Suresh; Sommer, Willy Arno
2006-01-01
Statistical quality control--SQC (consisting of Statistical Process Control, Process Capability Studies, Acceptance Sampling and Design of Experiments) is a very important tool to obtain, maintain and improve the Quality level of goods and services produced by an organization. Despite its importance, and the fact that it is taught in technical and…
Approaches for advancing scientific understanding of macrosystems
Levy, Ofir; Ball, Becky A.; Bond-Lamberty, Ben; Cheruvelil, Kendra S.; Finley, Andrew O.; Lottig, Noah R.; Punyasena, Surangi W.; Xiao, Jingfeng; Zhou, Jizhong; Buckley, Lauren B.; Filstrup, Christopher T.; Keitt, Tim H.; Kellner, James R.; Knapp, Alan K.; Richardson, Andrew D.; Tcheng, David; Toomey, Michael; Vargas, Rodrigo; Voordeckers, James W.; Wagner, Tyler; Williams, John W.
2014-01-01
The emergence of macrosystems ecology (MSE), which focuses on regional- to continental-scale ecological patterns and processes, builds upon a history of long-term and broad-scale studies in ecology. Scientists face the difficulty of integrating the many elements that make up macrosystems, which consist of hierarchical processes at interacting spatial and temporal scales. Researchers must also identify the most relevant scales and variables to be considered, the required data resources, and the appropriate study design to provide the proper inferences. The large volumes of multi-thematic data often associated with macrosystem studies typically require validation, standardization, and assimilation. Finally, analytical approaches need to describe how cross-scale and hierarchical dynamics and interactions relate to macroscale phenomena. Here, we elaborate on some key methodological challenges of MSE research and discuss existing and novel approaches to meet them.
Advanced drug delivery approaches against periodontitis.
Joshi, Deeksha; Garg, Tarun; Goyal, Amit K; Rath, Goutam
2016-01-01
Periodontitis is an inflammatory disease of gums involving the degeneration of periodontal ligaments, creation of periodontal pocket and resorption of alveolar bone, resulting in the disruption of the support structure of teeth. According to WHO, 10-15% of the global population suffers from severe periodontitis. The disease results from the growth of a diverse microflora (especially anaerobes) in the pockets and release of toxins, enzymes and stimulation of body's immune response. Various local or systemic approaches were used for an effective treatment of periodontitis. Currently, controlled local drug delivery approach is more favorable as compared to systemic approach because it mainly focuses on improving the therapeutic outcomes by achieving factors like site-specific delivery, low dose requirement, bypass of first-pass metabolism, reduction in gastrointestinal side effects and decrease in dosing frequency. Overall it provides a safe and effective mode of treatment, which enhances patient compliance. Complete eradication of the organisms from the sites was not achieved by using various surgical and mechanical treatments. So a number of polymer-based delivery systems like fibers, films, chips, strips, microparticles, nanoparticles and nanofibers made from a variety of natural and synthetic materials have been successfully tested to deliver a variety of drugs. These systems are biocompatible and biodegradable, completely fill the pockets, and have strong retention on the target site due to excellent mucoadhesion properties. The review summarizes various available and recently developing targeted delivery devices for the treatment of periodontitis.
ERIC Educational Resources Information Center
Petocz, Agnes; Newbery, Glenn
2010-01-01
Statistics education in psychology often falls disappointingly short of its goals. The increasing use of qualitative approaches in statistics education research has extended and enriched our understanding of statistical cognition processes, and thus facilitated improvements in statistical education and practices. Yet conceptual analysis, a…
An Artificial Intelligence Approach to Analyzing Student Errors in Statistics.
ERIC Educational Resources Information Center
Sebrechts, Marc M.; Schooler, Lael J.
1987-01-01
Describes the development of an artificial intelligence system called GIDE that analyzes student errors in statistics problems by inferring the students' intentions. Learning strategies involved in problem solving are discussed and the inclusion of goal structures is explained. (LRW)
An Alternative Approach to Teaching Statistics to Dental Students.
ERIC Educational Resources Information Center
Hutton, Jack G., Jr.; And Others
1982-01-01
Literature on statistics instruction in dental education indicates course guidelines are available, and computer-assisted instruction is recommended. Self-instruction with programed materials is recommended as an effective and less costly alternative. (Author/MSE)
Advancing Profiling Sensors with a Wireless Approach
Galvis, Alex; Russomanno, David J.
2012-01-01
The notion of a profiling sensor was first realized by a Near-Infrared (N-IR) retro-reflective prototype consisting of a vertical column of wired sparse detectors. This paper extends that prior work and presents a wireless version of a profiling sensor as a collection of sensor nodes. The sensor incorporates wireless sensing elements, a distributed data collection and aggregation scheme, and an enhanced classification technique. In this novel approach, a base station pre-processes the data collected from the sensor nodes and performs data re-alignment. A back-propagation neural network was also developed for the wireless version of the N-IR profiling sensor that classifies objects into the broad categories of human, animal or vehicle with an accuracy of approximately 94%. These enhancements improve deployment options as compared with the first generation of wired profiling sensors, possibly increasing the application scenarios for such sensors, including intelligent fence applications. PMID:23443371
Accuracy Evaluation of a Mobile Mapping System with Advanced Statistical Methods
NASA Astrophysics Data System (ADS)
Toschi, I.; Rodríguez-Gonzálvez, P.; Remondino, F.; Minto, S.; Orlandini, S.; Fuller, A.
2015-02-01
This paper discusses a methodology to evaluate the precision and the accuracy of a commercial Mobile Mapping System (MMS) with advanced statistical methods. So far, the metric potential of this emerging mapping technology has been studied in a few papers, where generally the assumption that errors follow a normal distribution is made. In fact, this hypothesis should be carefully verified in advance, in order to test how well classic Gaussian statistics can adapt to datasets that are usually affected by asymmetrical gross errors. The workflow adopted in this study relies on a Gaussian assessment, followed by an outlier filtering process. Finally, non-parametric statistical models are applied, in order to achieve a robust estimation of the error dispersion. Among the different MMSs available on the market, the latest solution provided by RIEGL is here tested, i.e. the VMX-450 Mobile Laser Scanning System. The test area is the historic city centre of Trento (Italy), selected in order to assess the system performance in dealing with a challenging and historic urban scenario. Reference measures are derived from photogrammetric and Terrestrial Laser Scanning (TLS) surveys. All datasets show a large lack of symmetry that leads to the conclusion that the standard normal parameters are not adequate to assess this type of data. The use of non-normal statistics thus gives a more appropriate description of the data and yields results that meet the quoted a priori errors.
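The robust, non-parametric assessment described above can be sketched in Python. This is an illustrative comparison of the classic Gaussian estimators (mean, standard deviation) with the median and the normalized median absolute deviation (NMAD); the function name and the simulated error values are hypothetical, not taken from the paper's VMX-450 datasets.

```python
import numpy as np

def error_statistics(errors):
    """Compare Gaussian estimators with robust, non-parametric ones.

    For error samples contaminated by asymmetric gross errors, the
    median and the normalized median absolute deviation (NMAD) are
    far less sensitive to outliers than the mean and standard deviation.
    """
    errors = np.asarray(errors, dtype=float)
    gaussian = {"mean": errors.mean(), "std": errors.std(ddof=1)}
    median = np.median(errors)
    # 1.4826 makes NMAD consistent with sigma for normally distributed data
    nmad = 1.4826 * np.median(np.abs(errors - median))
    robust = {"median": median, "nmad": nmad}
    return gaussian, robust

# Simulated check-point errors: mostly normal, plus a few gross outliers
rng = np.random.default_rng(0)
e = np.concatenate([rng.normal(0.0, 0.02, 500), [0.5, 0.8, 1.2]])
gauss, robust = error_statistics(e)
```

With asymmetric outliers present, the standard deviation is inflated well beyond the NMAD, which is exactly why the paper argues for non-normal statistics.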
Model approaches for advancing interprofessional prevention education.
Evans, Clyde H; Cashman, Suzanne B; Page, Donna A; Garr, David R
2011-02-01
Healthy People 2010 included an objective to "increase the proportion of … health professional training schools whose basic curriculum for healthcare providers includes the core competencies in health promotion and disease prevention." Interprofessional prevention education has been seen by the Healthy People Curriculum Task Force as a key strategy for achieving this objective and strengthening prevention content in health professions education programs. To fulfill these aims, the Association for Prevention Teaching and Research sponsored the Institute for Interprofessional Prevention Education in 2007 and 2008. The institutes were based on the premise that if clinicians from different professions are to function effectively in teams, health professions students need to learn with, from, and about students from other professions. The institutes assembled interprofessional teams of educators from academic health centers across the country and provided instruction in approaches for improving interprofessional prevention education. Interprofessional education also plays a key role in the implementation of the Healthy People 2020 Education for Health framework. The delivery of preventive services provides a nearly level playing field in which multiple professions each make important contributions. Prevention education should take place during that phase of the educational continuum in which the attitudes, skills, and knowledge necessary for both effective teamwork and prevention are incorporated into the "DNA" of future health professionals. Evaluation of the teams' educational initiatives holds important lessons. These include allowing ample time for planning, obtaining student input during planning, paying explicit attention to teamwork, and taking account of cultural differences across professions.
An Order Statistics Approach to the Halo Model for Galaxies
NASA Astrophysics Data System (ADS)
Paul, Niladri; Paranjape, Aseem; Sheth, Ravi K.
2017-01-01
We use the Halo Model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models - one in which this luminosity function p(L) is universal - naturally produces a number of features associated with previous analyses based on the `central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the Lognormal distribution around this mean, and the tight relation between the central and satellite mass scales. In stark contrast to observations of galaxy clustering, however, this model predicts no luminosity dependence of large scale clustering. We then show that an extended version of this model, based on the order statistics of a halo mass dependent luminosity function p(L|m), is in much better agreement with the clustering data as well as satellite luminosities, but systematically under-predicts central luminosities. This brings into focus the idea that central galaxies constitute a distinct population that is affected by different physical processes than are the satellites. We model this physical difference as a statistical brightening of the central luminosities, over and above the order statistics prediction. The magnitude gap between the brightest and second brightest group galaxy is predicted as a by-product, and is also in good agreement with observations. We propose that this order statistics framework provides a useful language in which to compare the Halo Model for galaxies with more physically motivated galaxy formation models.
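The simplest order-statistics model described above is easy to simulate. The sketch below assumes a toy universal luminosity function (an exponential standing in for a Schechter-like p(L)); the function names and parameters are illustrative, not the paper's. It reproduces two qualitative features from the abstract: the monotonic rise of mean central luminosity with group richness, and the magnitude gap as a by-product.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_group(n_gal, rng):
    """Draw n_gal luminosities from a toy universal p(L) (here an
    exponential, standing in for a Schechter-like function) and label
    the brightest as the 'central' in the order-statistics sense."""
    L = np.sort(rng.exponential(scale=1.0, size=n_gal))[::-1]
    central, satellites = L[0], L[1:]
    # Magnitude gap between brightest and second brightest group galaxy
    gap = 2.5 * np.log10(L[0] / L[1]) if n_gal > 1 else np.inf
    return central, satellites, gap

# Mean central luminosity rises monotonically with richness n_gal,
# as the simplest order-statistics model predicts.
mean_central = {n: np.mean([sample_group(n, rng)[0] for _ in range(2000)])
                for n in (2, 5, 10)}
```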
Chu, Tsong-Lun; Varuttamaseni, Athi; Baek, Joo-Seok
2016-11-01
The U.S. Nuclear Regulatory Commission (NRC) encourages the use of probabilistic risk assessment (PRA) technology in all regulatory matters, to the extent supported by the state-of-the-art in PRA methods and data. Although much has been accomplished in the area of risk-informed regulation, risk assessment for digital systems has not been fully developed. The NRC established a plan for research on digital systems to identify and develop methods, analytical tools, and regulatory guidance for (1) including models of digital systems in the PRAs of nuclear power plants (NPPs), and (2) incorporating digital systems in the NRC's risk-informed licensing and oversight activities. Under NRC's sponsorship, Brookhaven National Laboratory (BNL) explored approaches for addressing the failures of digital instrumentation and control (I&C) systems in the current NPP PRA framework. Specific areas investigated included PRA modeling of digital hardware, development of a philosophical basis for defining software failure, and identification of desirable attributes of quantitative software reliability methods. Based on the earlier research, statistical testing is considered a promising method for quantifying software reliability. This paper describes a statistical software testing approach for quantifying software reliability and applies it to the loop-operating control system (LOCS) of an experimental loop of the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL).
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
Europe's Neogene and Quaternary lake gastropod diversity - a statistical approach
NASA Astrophysics Data System (ADS)
Neubauer, Thomas A.; Georgopoulou, Elisavet; Harzhauser, Mathias; Mandic, Oleg; Kroh, Andreas
2014-05-01
During the Neogene, Europe's geodynamic history gave rise to several long-lived lakes with conspicuous endemic radiations. However, such lacustrine systems are rare today, as in the past, compared with the enormous numbers of "normal" lakes. Most extant European lakes are mainly results of the Ice Ages and, owing to their (geologically) temporary nature, are largely confined to the Pleistocene-Holocene. As glacial lakes are also geographically restricted to glacial regions (and their catchment areas), their preservation potential is fairly low. Deposits of streams, springs, and groundwater, which today are inhabited by species-rich gastropod assemblages, are likewise rarely preserved. Thus, the pre-Quaternary lacustrine record is biased towards long-lived systems, such as the Late Miocene Lake Pannon, the Early to Middle Miocene Dinaride Lake System, the Middle Miocene Lake Steinheim and several others. All these systems have been studied for more than 150 years concerning their mollusk inventories, and the taxonomic literature is formidable. However, apart from a few general overviews, precise studies on the γ-diversities of the post-Oligocene European lake systems and the shifting biodiversity in European freshwater systems through space and time are entirely missing. Even for the modern faunas, literature on large-scale freshwater gastropod diversity in extant lakes is scarce and lacks a statistical approach. Our preliminary data suggest fundamental differences between modern and pre-Pleistocene freshwater biogeography in central Europe. A rather homogeneous central European Pleistocene and Holocene lake fauna is contrasted by considerable provincialism during the early Middle Miocene. Aside from the ancient Dessaretes lakes of the Balkan Peninsula, Holocene lake faunas are dominated by planorbids and lymnaeids in species numbers. This composition differs considerably from many Miocene and Pliocene lake faunas, which comprise pyrgulid-, hydrobiid-, viviparid-, melanopsid
A BAYESIAN STATISTICAL APPROACH FOR THE EVALUATION OF CMAQ
Bayesian statistical methods are used to evaluate Community Multiscale Air Quality (CMAQ) model simulations of sulfate aerosol over a section of the eastern US for 4-week periods in summer and winter 2001. The observed data come from two U.S. Environmental Protection Agency data ...
A BAYESIAN STATISTICAL APPROACH FOR THE EVALUATION OF CMAQ
This research focuses on the application of spatial statistical techniques for the evaluation of the Community Multiscale Air Quality (CMAQ) model. The upcoming release version of the CMAQ model was run for the calendar year 2001 and is in the process of being evaluated by EPA an...
Generalized statistical mechanics approaches to earthquakes and tectonics
Vallianatos, Filippos; Papadakis, Giorgos; Michas, Georgios
2016-01-01
Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract a wide scientific interest and are incorporated in the probabilistic forecasting of seismicity in local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale to the macroscopic properties frequently observed. The scale-invariant properties, the (multi) fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes. PMID:28119548
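A building block of the NESM distributions mentioned above is the Tsallis q-exponential, which replaces the ordinary exponential in, e.g., earthquake magnitude and inter-event-time statistics and recovers it in the limit q → 1. A minimal sketch (the function and the test values are illustrative, not taken from the review):

```python
import math

def q_exponential(x, q):
    """Tsallis q-exponential, the building block of non-extensive
    statistical mechanics (NESM) distributions used for seismicity.
    Reduces to exp(x) in the limit q -> 1; for q > 1 it decays as a
    power law, giving the heavy tails observed in earthquake data."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    # Cut-off convention: the q-exponential is zero where base <= 0
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0
```

For q = 2 the function is simply 1/(1 - x), and for negative arguments with q > 1 it decays much more slowly than exp(x), which is how NESM accommodates long-range correlations.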
Analysis of Coastal Dunes: A Remote Sensing and Statistical Approach.
ERIC Educational Resources Information Center
Jones, J. Richard
1985-01-01
Remote sensing analysis and statistical methods were used to analyze the coastal dunes of Plum Island, Massachusetts. The research methodology used provides an example of a student project for remote sensing, geomorphology, or spatial analysis courses at the university level. (RM)
An Experimental Approach to Teaching and Learning Elementary Statistical Mechanics
ERIC Educational Resources Information Center
Ellis, Frank B.; Ellis, David C.
2008-01-01
Introductory statistical mechanics is studied for a simple two-state system using an inexpensive and easily built apparatus. A large variety of demonstrations, suitable for students in high school and introductory university chemistry courses, are possible. This article details demonstrations for exothermic and endothermic reactions, the dynamic…
Source apportionment advances using polar plots of bivariate correlation and regression statistics
NASA Astrophysics Data System (ADS)
Grange, Stuart K.; Lewis, Alastair C.; Carslaw, David C.
2016-11-01
This paper outlines the development of enhanced bivariate polar plots that allow the concentrations of two pollutants to be compared using pair-wise statistics for exploring the sources of atmospheric pollutants. The new method combines bivariate polar plots, which provide source characteristic information, with pair-wise statistics that provide information on how two pollutants are related to one another. The pair-wise statistics implemented include weighted Pearson correlation and slope from two linear regression methods. The development uses a Gaussian kernel to locally weight the statistical calculations on a wind speed-direction surface together with variable-scaling. Example applications of the enhanced polar plots are presented by using routine air quality data for two monitoring sites in London, United Kingdom for a single year (2013). The London examples demonstrate that the combination of bivariate polar plots, correlation, and regression techniques can offer considerable insight into air pollution source characteristics, which would be missed if only scatter plots and mean polar plots were used for analysis. Specifically, using correlation and slopes as pair-wise statistics, long-range transport processes were isolated and black carbon (BC) contributions to PM2.5 for a kerbside monitoring location were quantified. Wider applications and future advancements are also discussed.
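The core computation above, pair-wise statistics locally weighted by a Gaussian kernel on the wind speed-direction surface, can be sketched as follows. This is a simplified illustration; the function names, bandwidths, and weighting details are hypothetical and do not reproduce the published implementation.

```python
import numpy as np

def weighted_pearson(x, y, w):
    """Pearson correlation with observation weights w (e.g. Gaussian
    kernel weights centred on one wind speed-direction cell)."""
    w = w / w.sum()
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    sx = np.sqrt(np.sum(w * (x - mx) ** 2))
    sy = np.sqrt(np.sum(w * (y - my) ** 2))
    return cov / (sx * sy)

def gaussian_kernel_weights(ws, wd, ws0, wd0, h_ws=1.0, h_wd=10.0):
    """Locally weight observations by their distance from the surface
    cell (ws0, wd0); wind directions in degrees, wrap-around aware."""
    dwd = np.abs((wd - wd0 + 180.0) % 360.0 - 180.0)
    return np.exp(-0.5 * ((ws - ws0) / h_ws) ** 2 - 0.5 * (dwd / h_wd) ** 2)
```

Evaluating `weighted_pearson` on every cell of the wind speed-direction grid, with weights from `gaussian_kernel_weights`, yields the correlation surface that the enhanced polar plots display for a pollutant pair such as BC and PM2.5.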
A higher-order-statistics-based approach to face detection
NASA Astrophysics Data System (ADS)
Li, Chunming; Li, Yushan; Wu, Ruihong; Li, Qiuming; Zhuang, Qingde; Zhang, Zhan
2005-02-01
A face detection method based on higher-order statistics is proposed in this paper. First, object and noise models are established to extract the moving object from the background, exploiting the fact that higher-order statistics are insensitive to Gaussian noise. Second, an improved Sobel operator is used to extract the edge image of the moving object, and a projection function is used to detect the face within that edge image. Finally, PCA (Principal Component Analysis) is used for face recognition. The performance of the system is evaluated on real video sequences. The proposed method is shown to be simple and robust for detecting human faces in video sequences.
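The key fact exploited in the first step, that higher-order statistics vanish for Gaussian data, can be illustrated with the fourth-order cumulant (excess kurtosis). The sketch below uses simulated data: the Laplace "signal" is only a stand-in for non-Gaussian object pixels, not the paper's video data.

```python
import numpy as np

rng = np.random.default_rng(1)

def excess_kurtosis(x):
    """Normalized fourth-order cumulant: it vanishes for Gaussian data,
    which is why higher-order statistics suppress Gaussian noise while
    still responding to non-Gaussian structure."""
    x = x - x.mean()
    m2 = np.mean(x ** 2)
    m4 = np.mean(x ** 4)
    return m4 / m2 ** 2 - 3.0

noise = rng.normal(0, 1, 200_000)    # Gaussian background noise
signal = rng.laplace(0, 1, 200_000)  # non-Gaussian stand-in for object pixels
```

The cumulant of the Gaussian sample is statistically indistinguishable from zero, while the Laplace sample's is large (its theoretical value is 3), so thresholding a higher-order statistic separates object from Gaussian background.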
A statistical mechanics approach to autopoietic immune networks
NASA Astrophysics Data System (ADS)
Barra, Adriano; Agliari, Elena
2010-07-01
In this work we aim to bridge theoretical immunology and disordered statistical mechanics. We introduce a model for the behavior of B-cells which naturally merges the clonal selection theory and the autopoietic network theory as a whole. From the analysis of its features we recover several basic phenomena such as low-dose tolerance, dynamical memory of antigens and self/non-self discrimination.
Statistical Approach to the Operational Testing of Space Fence
2015-07-01
Space Fence will be a terrestrial-based radar designed to perform surveillance on earth-orbiting ... ensuring a reasonable test duration. We propose a rigorous statistical test design with candidate on-orbit test targets that span orbital limits defined by
Schaid, Daniel J
2010-01-01
Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1].
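The two defining requirements above, pairwise similarity values and a positive semidefinite matrix over all pairs of subjects, can be illustrated with a Gaussian (RBF) kernel. The data dimensions and the bandwidth are toy values, not taken from the review.

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    """Gaussian (RBF) kernel: converts pairwise subject data into a
    similarity matrix, with larger values meaning more similar."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))  # clamp float round-off

def is_positive_semidefinite(K, tol=1e-10):
    """A valid kernel must yield a positive semidefinite matrix."""
    return bool(np.all(np.linalg.eigvalsh((K + K.T) / 2.0) >= -tol))

rng = np.random.default_rng(7)
X = rng.normal(size=(20, 5))  # 20 subjects, 5 genomic features (toy data)
K = rbf_kernel_matrix(X)
```

A matrix like `K` can then be plugged directly into kernel machinery such as kernel ridge regression, mixed models with `K` as the covariance of random effects, or support vector machines.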
Students' Attitudes toward Statistics across the Disciplines: A Mixed-Methods Approach
ERIC Educational Resources Information Center
Griffith, James D.; Adams, Lea T.; Gu, Lucy L.; Hart, Christian L.; Nichols-Whitehead, Penney
2012-01-01
Students' attitudes toward statistics were investigated using a mixed-methods approach including a discovery-oriented qualitative methodology among 684 undergraduate students across business, criminal justice, and psychology majors where at least one course in statistics was required. Students were asked about their attitudes toward statistics and…
Statistical Thermodynamic Approach to Vibrational Solitary Waves in Acetanilide
NASA Astrophysics Data System (ADS)
Vasconcellos, Áurea R.; Mesquita, Marcus V.; Luzzi, Roberto
1998-03-01
We analyze the behavior of the macroscopic thermodynamic state of polymers, centering on acetanilide. The nonlinear equations of evolution for the populations and the statistically averaged field amplitudes of CO-stretching modes are derived. The existence of excitations of the solitary wave type is evidenced. The infrared spectrum is calculated and compared with the experimental data of Careri et al. [Phys. Rev. Lett. 51, 104 (1983)], resulting in a good agreement. We also consider the situation of a nonthermally highly excited sample, predicting the occurrence of a large increase in the lifetime of the solitary wave excitation.
A statistical approach to nondestructive testing of laser welds
Duncan, H.A.
1983-07-01
A statistical analysis of the data obtained from a relatively new nondestructive technique for laser welding is presented. The technique is one in which information relating to the quality of the welded joint is extracted from the high intensity plume which is generated from the materials that are welded. The system is such that the detected plume is processed to give a numerical value associated with the material vaporization and consequently, the weld quality. Optimum thresholds for the region in which a weld can be considered as acceptable are determined based on the Neyman-Pearson criterion and Bayes rule.
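The Neyman-Pearson criterion mentioned above fixes the false-alarm rate and maximizes the detection probability. Below is a minimal sketch under the simplifying assumption that the plume score is normally distributed for both good and defective welds; all distribution parameters are hypothetical, not the paper's measured values.

```python
from statistics import NormalDist

def np_threshold(mu_good, sigma_good, alpha):
    """Neyman-Pearson style threshold: accept a weld when its plume
    score is below t, where t is chosen so that at most a fraction
    alpha of good welds is falsely rejected (scores assumed Normal)."""
    return NormalDist(mu_good, sigma_good).inv_cdf(1.0 - alpha)

def detection_rate(mu_bad, sigma_bad, t):
    """Probability that a defective weld's score exceeds the threshold."""
    return 1.0 - NormalDist(mu_bad, sigma_bad).cdf(t)

# Hypothetical score distributions for good and defective welds
t = np_threshold(mu_good=1.0, sigma_good=0.2, alpha=0.05)
pd = detection_rate(mu_bad=2.0, sigma_bad=0.4, t=t)
```

With these toy parameters, fixing the false-rejection rate of good welds at 5% still catches the vast majority of defective welds, which is the trade-off the Neyman-Pearson design makes explicit.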
Demarcating Advanced Learning Approaches from Methodological and Technological Perspectives
ERIC Educational Resources Information Center
Horvath, Imre; Peck, David; Verlinden, Jouke
2009-01-01
In the field of design and engineering education, the fast and expansive evolution of information and communication technologies is steadily converting traditional learning approaches into more advanced ones. Facilitated by Broadband (high bandwidth) personal computers, distance learning has developed into web-hosted electronic learning. The…
Physics-based statistical learning approach to mesoscopic model selection
NASA Astrophysics Data System (ADS)
Taverniers, Søren; Haut, Terry S.; Barros, Kipton; Alexander, Francis J.; Lookman, Turab
2015-11-01
In materials science and many other research areas, models are frequently inferred without considering their generalization to unseen data. We apply statistical learning using cross-validation to obtain an optimally predictive coarse-grained description of a two-dimensional kinetic nearest-neighbor Ising model with Glauber dynamics (GD) based on the stochastic Ginzburg-Landau equation (sGLE). The latter is learned from GD "training" data using a log-likelihood analysis, and its predictive ability for various complexities of the model is tested on GD "test" data independent of the data used to train the model. Using two different error metrics, we perform a detailed analysis of the error between magnetization time trajectories simulated using the learned sGLE coarse-grained description and those obtained using the GD model. We show that both for equilibrium and out-of-equilibrium GD training trajectories, the standard phenomenological description using a quartic free energy does not always yield the most predictive coarse-grained model. Moreover, increasing the amount of training data can shift the optimal model complexity to higher values. Our results are promising in that they pave the way for the use of statistical learning as a general tool for materials modeling and discovery.
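The model-selection step above, cross-validation over candidate model complexities, can be sketched generically. The example selects a polynomial degree by k-fold cross-validation on synthetic data; it is a stand-in for choosing the complexity of a coarse-grained free-energy description, not the paper's actual sGLE pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

def cv_error(x, y, degree, k=5):
    """k-fold cross-validation error for a polynomial fit of a given
    degree: fit on k-1 folds, measure squared error on the held-out
    fold, and average over folds."""
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[test]) - y[test]) ** 2))
    return float(np.mean(errs))

# Quartic ground truth with noise: CV prefers adequate complexity over
# an underfitting quadratic, without rewarding gratuitous extra terms.
x = np.linspace(-1.0, 1.0, 120)
y = x ** 4 - x ** 2 + rng.normal(0, 0.02, x.size)
errors = {d: cv_error(x, y, d) for d in (2, 4, 10)}
```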
A statistical approach to the temporal development of orbital associations
NASA Astrophysics Data System (ADS)
Kastinen, D.; Kero, J.
2016-01-01
We have performed preliminary studies on the use of a Monte-Carlo based statistical toolbox for small body solar system dynamics to find trends in the temporal development of orbital associations. As part of this preliminary study, four different similarity functions were implemented and applied to the 21P/Giacobini-Zinner meteoroid stream and the resulting simulated meteor showers. The simulations indicate that the temporal behavior of orbital-element distributions in the meteoroid stream and the meteor shower differ on century-scale time scales. The configuration of the meteor shower remains compact for a long time and dissipates an order of magnitude more slowly than the stream. The main effect driving the shower dissipation is shown to be the addition of new trails to the stream.
Fragmentation and exfoliation of 2-dimensional materials: a statistical approach
NASA Astrophysics Data System (ADS)
Kouroupis-Agalou, Konstantinos; Liscio, Andrea; Treossi, Emanuele; Ortolani, Luca; Morandi, Vittorio; Pugno, Nicola Maria; Palermo, Vincenzo
2014-05-01
The main advantage for applications of graphene and related 2D materials is that they can be produced on large scales by liquid-phase exfoliation. The exfoliation process can be considered a particular fragmentation process, in which the 2D character of the exfoliated objects significantly influences the fragmentation dynamics as compared to standard materials. Here, we used automated image processing of Atomic Force Microscopy (AFM) data to measure, one by one, the exact shape and size of thousands of nanosheets obtained by exfoliation of an important 2D material, boron nitride, and used different statistical functions to model the asymmetric distribution of nanosheet sizes typically obtained. Because the resolution of AFM is much finer than the average sheet size, the analysis could be performed directly at the nanoscale and at the single-sheet level. We find that the size distribution of the sheets at a given time follows a log-normal distribution, indicating that the exfoliation process has a "typical" scale length that changes with time and that exfoliation proceeds through the formation of a distribution of random cracks that follow Poisson statistics. The validity of this model implies that the size distribution does not depend on the different preparation methods used, but is a common feature in the exfoliation of this material and thus probably of other 2D materials.
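The log-normal outcome reported above is what a multiplicative fragmentation process produces: if each step multiplies a sheet's area by a random factor, the log-area is a sum of independent terms and tends to a normal distribution. A toy simulation (all parameters are illustrative, not fitted to the AFM data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy fragmentation: every exfoliation step shrinks each sheet's area
# by a random multiplicative factor, so log-area accumulates a sum of
# independent terms and the size distribution approaches a log-normal.
areas = np.full(5000, 1.0)
for _ in range(20):
    areas *= rng.uniform(0.3, 0.9, areas.size)

log_a = np.log(areas)
mu, sigma = log_a.mean(), log_a.std(ddof=1)
```

On the linear scale the distribution is strongly right-skewed (mean well above median), exactly the asymmetry the paper models with a log-normal.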
Statistical mechanics approach to lock-key supramolecular chemistry interactions.
Odriozola, Gerardo; Lozada-Cassou, Marcelo
2013-03-08
In the supramolecular chemistry field, intuitive concepts such as molecular complementarity and molecular recognition are used to explain the mechanism of lock-key associations. However, these concepts lack a precise definition, and consequently this mechanism is not well defined and understood. Here we address the physical basis of this mechanism, based on formal statistical mechanics, through Monte Carlo simulation and compare our results with recent experimental data for charged or uncharged lock-key colloids. We find that, given the size range of the molecules involved in these associations, the entropy contribution, driven by the solvent, rules the interaction, over that of the enthalpy. A universal behavior for the uncharged lock-key association is found. Based on our results, we propose a supramolecular chemistry definition.
Advanced Safeguards Approaches for New TRU Fuel Fabrication Facilities
Durst, Philip C.; Ehinger, Michael H.; Boyer, Brian; Therios, Ike; Bean, Robert; Dougan, A.; Tolk, K.
2007-12-15
This second report in a series of three reviews possible safeguards approaches for the new transuranic (TRU) fuel fabrication processes to be deployed at AFCF – specifically, the ceramic TRU (MOX) fuel fabrication line and the metallic (pyroprocessing) line. The most common TRU fuel has been fuel composed of mixed plutonium and uranium dioxide, referred to as “MOX”. However, under the Advanced Fuel Cycle projects custom-made fuels with higher contents of neptunium, americium, and curium may also be produced to evaluate if these “minor actinides” can be effectively burned and transmuted through irradiation in the ABR. A third and final report in this series will evaluate and review the advanced safeguards approach options for the ABR. In reviewing and developing the advanced safeguards approach for the new TRU fuel fabrication processes envisioned for AFCF, the existing international (IAEA) safeguards approach at the Plutonium Fuel Production Facility (PFPF) and the conceptual approach planned for the new J-MOX facility in Japan have been considered as a starting point of reference. The pyro-metallurgical reprocessing and fuel fabrication process at EBR-II near Idaho Falls also provided insight for safeguarding the additional metallic pyroprocessing fuel fabrication line planned for AFCF.
Statistical Approaches to Aerosol Dynamics for Climate Simulation
Zhu, Wei
2014-09-02
In this work, we introduce two general non-parametric regression analysis methods for errors-in-variable (EIV) models: the compound regression and the constrained regression. It is shown that these approaches are equivalent to each other, and to the general parametric structural modeling approach. The advantages of these methods lie in their intuitive geometric representations, their distribution-free nature, and their ability to offer a practical solution when the ratio of the error variances is unknown. Each includes the classic regression methods of ordinary least squares, geometric mean regression, and orthogonal regression as special cases. Both methods can be readily generalized to multiple linear regression with two or more random regressors.
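One of the special cases mentioned, geometric mean (reduced major axis) regression, is a common EIV choice when the error-variance ratio is unknown: its slope is the geometric mean of the y-on-x OLS slope and the inverted x-on-y OLS slope. Below is a sketch on synthetic data; the parameters are hypothetical and this is not the paper's compound or constrained estimator.

```python
import numpy as np

def geometric_mean_regression(x, y):
    """Geometric mean (reduced major axis) regression slope and
    intercept: slope = sign(r) * sd(y) / sd(x), the geometric mean of
    the two ordinary least squares slopes."""
    sx, sy = np.std(x, ddof=1), np.std(y, ddof=1)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * sy / sx
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# EIV setting: both variables observed with error around a latent t
rng = np.random.default_rng(9)
t = rng.normal(0.0, 1.0, 4000)
x = t + rng.normal(0.0, 0.5, t.size)
y = 2.0 * t + rng.normal(0.0, 1.0, t.size)
slope, intercept = geometric_mean_regression(x, y)
```

In this toy setup ordinary least squares of y on x is attenuated toward zero by the measurement error in x, while the geometric mean slope lands near the true value of 2.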
Towards an integrated statistical approach to exoplanetary spectroscopy
NASA Astrophysics Data System (ADS)
Waldmann, Ingo Peter; Morello, Giuseppe; Rocchetto, Marco; Varley, Ryan; Tsiaras, Angelos; Tinetti, Giovanna
2015-08-01
Within merely two decades, observing the atmospheres of extrasolar worlds went from the realm of science fiction to quotidian reality. The speed of progress is truly staggering. In the early days of atmospheric characterisation, data were often sparse with low signal-to-noise (S/N), and past analyses were somewhat heuristic. As the field matures, with successful space- and ground-based instruments producing a steady increase in data, we must also upgrade our data analysis and interpretation techniques from their “ad-hoc” beginnings to a solid statistical foundation. For low- to mid-S/N observations, we are prone to two sources of bias: 1) prior selection in the data reduction and analysis; 2) prior constraints on the spectral retrieval. A unified set of tools addressing both points is required. To de-trend low-S/N, correlated data, we demonstrated blind-source-separation (BSS) machine learning techniques to be a significant step forward, both in photometry (Waldmann 2012, Morello 2015, Morello, Waldmann et al. 2014) and in spectroscopy (Waldmann 2012, 2014, Waldmann et al. 2013). BSS finds applications in fields as diverse as medical imaging and cosmology. Applied to exoplanets, it allows us to resolve de-trending biases and demonstrate consistency between data sets that were previously found to be highly discrepant and subject to much debate. For the interpretation of the de-trended data, we developed a novel Bayesian atmospheric retrieval suite, Tau-REx (Waldmann et al. 2015a,b, Rocchetto et al. 2015). Tau-REx implements unbiased prior selection via custom-built pattern recognition software. A full subsequent mapping of the likelihood space (using cluster computing) allows us, for the first time, to fully study degeneracies and biases in emission and transmission spectroscopy. The development of a coherent end-to-end infrastructure is paramount to the characterisation of ever smaller and fainter foreign worlds. In this conference, I
Application of statistical physics approaches to complex organizations
NASA Astrophysics Data System (ADS)
Matia, Kaushik
The first part of this thesis studies two different kinds of financial markets, namely, the stock market and the commodity market. Stock price fluctuations display certain scale-free statistical features that are not unlike those found in strongly-interacting physical systems. The possibility that new insights can be gained using concepts and methods developed to understand scale-free physical phenomena has stimulated considerable research activity in the physics community. In the first part of this thesis a comparative study of stocks and commodities is performed in terms of the probability density function and correlations of price fluctuations. It is found that the probability density of the stock price fluctuation has a power-law functional form with an exponent of 3, which is similar across different markets around the world. We present an autoregressive model to explain the origin of this power-law form. The first part also presents the discovery of unique features of the Indian economy, which we find displays a scale-dependent probability density function. In the second part of this thesis we quantify the statistical properties of fluctuations of complex systems such as business firms and world scientific publications. We analyze the class size of these systems, in which units agglomerate to form classes. We find that the width of the probability density function of the growth rate decays with class size as a power law with an exponent beta, which is universal in the sense that beta is independent of the system studied. We also identify two further scaling exponents, one connecting the unit size to the class size and one connecting the number of units to the class size, where products are units and firms are classes. Finally we propose a generalized preferential attachment model to describe the class size distribution. This model is successful in explaining the growth rate and class
An Enhanced Statistical Approach to Identifying Photorealistic Images
NASA Astrophysics Data System (ADS)
Sutthiwan, Patchara; Ye, Jingyu; Shi, Yun Q.
Computer graphics identification has gained importance in the digital era, as it relates to image forgery detection and to the enhancement of highly photorealistic rendering software. In this paper, statistical moments of 1-D and 2-D characteristic functions are employed to derive image features that can well capture the statistical differences between computer graphics and photographic images. The YCbCr color system is selected because it has shown better performance in computer graphics classification than the RGB color system and because it is the system adopted by the widely used JPEG format. Furthermore, only the Y and Cb color channels are used in feature extraction: our study showed that features derived from Cb and Cr are so highly correlated that there is no need to use features extracted from both components, which substantially reduces computational complexity. Concretely, in each selected color component, features are extracted from each image in both the image pixel 2-D array and the JPEG 2-D array (a 2-D array consisting of the magnitudes of the JPEG coefficients), their prediction-error 2-D arrays, and all of their three-level wavelet subbands, referred to in this paper as the various 2-D arrays generated from a given image. The rationale behind using the prediction-error image is to reduce the influence of image content. To generate image features from 1-D characteristic functions, the various 2-D arrays of a given image are the inputs, yielding 156 features in total. For the features generated from 2-D characteristic functions, only the JPEG 2-D array and its prediction-error 2-D array are the inputs; one-unit-apart 2-D histograms of the JPEG 2-D array along the horizontal, vertical and diagonal directions are utilized to generate 2-D characteristic functions, from which the marginal moments are generated to form 234 features. Together, the process results in 390 features per color channel, and 780 features in total. Finally, Boosting Feature Selection (BFS) is used to greatly reduce the
Performance analysis of LVQ algorithms: a statistical physics approach.
Ghosh, Anarta; Biehl, Michael; Hammer, Barbara
2006-01-01
Learning vector quantization (LVQ) constitutes a powerful and intuitive method for adaptive nearest prototype classification. However, original LVQ has been introduced based on heuristics and numerous modifications exist to achieve better convergence and stability. Recently, a mathematical foundation by means of a cost function has been proposed which, as a limiting case, yields a learning rule similar to classical LVQ2.1. It also motivates a modification which shows better stability. However, the exact dynamics as well as the generalization ability of many LVQ algorithms have not been thoroughly investigated so far. Using concepts from statistical physics and the theory of on-line learning, we present a mathematical framework to analyse the performance of different LVQ algorithms in a typical scenario in terms of their dynamics, sensitivity to initial conditions, and generalization ability. Significant differences in the algorithmic stability and generalization ability can be found already for slightly different variants of LVQ. We study five LVQ algorithms in detail: Kohonen's original LVQ1, unsupervised vector quantization (VQ), a mixture of VQ and LVQ, LVQ2.1, and a variant of LVQ which is based on a cost function. Surprisingly, basic LVQ1 shows very good performance in terms of stability, asymptotic generalization ability, and robustness to initializations and model parameters which, in many cases, is superior to recent alternative proposals.
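As a concrete illustration of the prototype-update rule underlying the LVQ1 algorithm studied here (a minimal sketch of basic LVQ1 only, not the cost-function variants or the statistical-physics analysis; the function name and parameters are illustrative):

```python
import numpy as np

def lvq1_train(X, labels, prototypes, proto_labels, eta=0.1, epochs=10, seed=0):
    """Kohonen's LVQ1 with a fixed learning rate.

    For each sample, the nearest prototype (the "winner") is moved toward
    the sample if their class labels match, and away from it otherwise.
    """
    rng = np.random.default_rng(seed)
    W = np.asarray(prototypes, float).copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(W - X[i], axis=1))  # winning prototype
            sign = 1.0 if proto_labels[j] == labels[i] else -1.0
            W[j] += sign * eta * (X[i] - W[j])  # attract or repel the winner
    return W
```

Classification of a new point then reduces to nearest-prototype lookup, which is what makes the algorithm's dynamics amenable to the on-line learning analysis described in the abstract.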
Statistical approach to anatomical landmark extraction in AP radiographs
NASA Astrophysics Data System (ADS)
Bernard, Rok; Pernus, Franjo
2001-07-01
A novel method for the automated extraction of important geometrical parameters of the pelvis and hips from APR images is presented. The shape and intensity variations in APR images are encompassed by the statistical shape and appearance models built from a set of training images for each of the three anatomies, i.e., pelvis, right and left hip, separately. The identification of the pelvis and hips is defined as a flexible object recognition problem, which is solved by generating anatomically plausible object instances and matching them to the APR image. The criterion function minimizes the resulting match error and considers the object topology. The obtained flexible object defines the positions of anatomical landmarks, which are further used to calculate the hip joint contact stress. A leave-one-out test was used to evaluate the performance of the proposed method on a set of 26 APR images. The results show the method is able to properly treat image variations and can reliably and accurately identify anatomies in the image and extract the anatomical landmarks needed in the hip joint contact stress calculation.
Modeling Insurgent Dynamics Including Heterogeneity. A Statistical Physics Approach
NASA Astrophysics Data System (ADS)
Johnson, Neil F.; Manrique, Pedro; Hui, Pak Ming
2013-05-01
Despite the myriad complexities inherent in human conflict, a common pattern has been identified across a wide range of modern insurgencies and terrorist campaigns involving the severity of individual events: an approximate power law x^(-α) with exponent α ≈ 2.5. We recently proposed a simple toy model to explain this finding, built around the reported loose and transient nature of operational cells of insurgents or terrorists. Although it reproduces the 2.5 power law, this toy model assumes every actor is identical. Here we generalize the toy model to incorporate individual heterogeneity while retaining the model's analytic solvability. In the case of kinship or team rules guiding the cell dynamics, we find that the 2.5 analytic result persists; however, an interesting new phase transition emerges, whereby the cell distribution passes into a phase in which the individuals become isolated and hence all the cells have spontaneously disintegrated. Apart from extending our understanding of the empirical 2.5 result for insurgencies and terrorism, this work illustrates how other statistical physics models of human grouping might usefully be generalized in order to explore the effect of diverse human social, cultural or behavioral traits.
Territorial developments based on graffiti: A statistical mechanics approach
NASA Astrophysics Data System (ADS)
Barbaro, Alethea B. T.; Chayes, Lincoln; D'Orsogna, Maria R.
2013-01-01
We study the well-known sociological phenomenon of gang aggregation and territory formation through an interacting agent system defined on a lattice. We introduce a two-gang Hamiltonian model where agents have red or blue affiliation but are otherwise indistinguishable. In this model, all interactions are indirect and occur only via graffiti markings, on-site as well as on nearest-neighbor locations. We also allow for gang proliferation and graffiti suppression. Within the context of this model, we show that gang clustering and territory formation may arise under specific parameter choices and that a phase transition may occur between well-mixed, possibly dilute configurations and well-separated, clustered ones. Using methods from statistical mechanics, we study the phase transition between these two qualitatively different scenarios. In the mean-field rendition of this model, we identify parameter regimes where the transition is first or second order. In all cases, we find that the transitions are a consequence solely of the gang-to-graffiti couplings, implying that direct gang-to-gang interactions are not strictly necessary for gang territory formation; in particular, graffiti may be the sole driving force behind gang clustering. We further discuss possible sociological as well as ecological ramifications of our results.
Einstein's Approach to Statistical Mechanics: The 1902-04 Papers
NASA Astrophysics Data System (ADS)
Peliti, Luca; Rechtman, Raúl
2016-09-01
We summarize the papers published by Einstein in the Annalen der Physik in the years 1902-1904 on the derivation of the properties of thermal equilibrium on the basis of the mechanical equations of motion and of the calculus of probabilities. We point out the line of thought that led Einstein to an especially economical foundation of the discipline, and to focus on fluctuations of the energy as a possible tool for establishing the validity of this foundation. We also sketch a comparison of Einstein's approach with that of Gibbs, suggesting that although they obtained similar results, they had different motivations and interpreted them in very different ways.
A Statistical Approach to Provide Individualized Privacy for Surveys
Esponda, Fernando; Huerta, Kael; Guerrero, Victor M.
2016-01-01
In this paper we propose an instrument for collecting sensitive data that allows for each participant to customize the amount of information that she is comfortable revealing. Current methods adopt a uniform approach where all subjects are afforded the same privacy guarantees; however, privacy is a highly subjective property with intermediate points between total disclosure and non-disclosure: each respondent has a different criterion regarding the sensitivity of a particular topic. The method we propose empowers respondents in this respect while still allowing for the discovery of interesting findings through the application of well-known inferential procedures. PMID:26824758
Statistical physics approaches to quantifying sleep-stage transitions
NASA Astrophysics Data System (ADS)
Lo, Chung-Chuan
Sleep can be viewed as a sequence of transitions in a very complex neuronal system. Traditionally, studies of the dynamics of sleep control have focused on the circadian rhythm of sleep-wake transitions or on the ultradian rhythm of the sleep cycle. However, very little is known about the mechanisms responsible for the time structure or even the statistics of the rapid sleep-stage transitions that appear without periodicity. I study the time dynamics of sleep-wake transitions for different species, including humans, rats, and mice, and find that the wake and sleep episodes exhibit completely different behaviors: the durations of wake episodes are characterized by a scale-free power-law distribution, while the durations of sleep episodes have an exponential distribution with a characteristic time scale. The functional forms of the distributions of the sleep and wake durations hold for human subjects of different ages and for subjects with sleep apnea. They also hold for all the species I investigate. Surprisingly, all species have the same power-law exponent for the distribution of wake durations, but the exponential characteristic time of the distribution of sleep durations changes across species. I develop a stochastic model which accurately reproduces our empirical findings. The model suggests that the difference between the dynamics of the sleep and wake states arises from the constraints on the number of microstates in the sleep-wake system. I develop a measure of asymmetry in sleep-stage transitions using a transition probability matrix. I find that both normal and sleep apnea subjects are characterized by two types of asymmetric sleep-stage transition paths, and that the sleep apnea group exhibits less asymmetry in the sleep-stage transitions.
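The two empirical forms reported here, a power-law distribution of wake durations and an exponential distribution of sleep durations, can be estimated from data with standard maximum-likelihood formulas. A minimal sketch (not the author's analysis code; the function name and the `xmin` cutoff are assumptions made for illustration):

```python
import numpy as np

def duration_fits(wake, sleep, xmin=1.0):
    """Fit the two distribution forms described for sleep-wake episode durations.

    Wake durations: power-law exponent via the Hill (MLE) estimator above xmin.
    Sleep durations: exponential characteristic time, i.e. the sample mean.
    """
    w = np.asarray([d for d in wake if d >= xmin], float)
    alpha = 1.0 + len(w) / np.sum(np.log(w / xmin))  # power-law exponent
    tau = float(np.mean(sleep))                       # exponential time scale
    return alpha, tau
```

Comparing `alpha` across species and `tau` within a species mirrors the abstract's finding of a universal wake exponent alongside a species-dependent sleep time scale.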
Statistical Approaches for Estimating Actinobacterial Diversity in Marine Sediments
Stach, James E. M.; Maldonado, Luis A.; Masson, Douglas G.; Ward, Alan C.; Goodfellow, Michael; Bull, Alan T.
2003-01-01
Bacterial diversity in a deep-sea sediment was investigated by constructing actinobacterium-specific 16S ribosomal DNA (rDNA) clone libraries from sediment sections taken 5 to 12, 15 to 18, and 43 to 46 cm below the sea floor at a depth of 3,814 m. Clones were placed into operational taxonomic unit (OTU) groups with ≥99% 16S rDNA sequence similarity; the cutoff value for an OTU was derived by comparing 16S rRNA homology with DNA-DNA reassociation values for members of the class Actinobacteria. Diversity statistics were used to determine how the level of dominance, species richness, and genetic diversity varied with sediment depth. The reciprocal of Simpson's index (1/D) indicated that the pattern of diversity shifted toward dominance from uniformity with increasing sediment depth. Nonparametric estimation of the species richness in the 5- to 12-, 15- to 18-, and 43- to 46-cm sediment sections revealed a trend of decreasing species number with depth, 1,406, 308, and 212 OTUs, respectively. Application of the LIBSHUFF program indicated that the 5- to 12-cm clone library was composed of OTUs significantly (P = 0.001) different from those of the 15- to 18- and 43- to 46-cm libraries. FST and phylogenetic grouping of taxa (P tests) were both significant (P < 0.00001 and P < 0.001, respectively), indicating that genetic diversity decreased with sediment depth and that each sediment community harbored unique phylogenetic lineages. It was also shown that even nonconservative OTU definitions result in severe underestimation of species richness; unique phylogenetic clades detected in one OTU group suggest that OTUs do not correspond to real ecological groups sensu Palys (T. Palys, L. K. Nakamura, and F. M. Cohan, Int. J. Syst. Bacteriol. 47:1145-1156, 1997). Mechanisms responsible for diversity and their implications are discussed. PMID:14532080
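The reciprocal of Simpson's index (1/D) used in this study to track the shift from uniformity toward dominance is straightforward to compute from OTU assignments. A minimal sketch (illustrative only, not the authors' pipeline; the function name is assumed):

```python
from collections import Counter

def inverse_simpson(otu_labels):
    """Reciprocal of Simpson's index, 1/D, with D = sum over OTUs of p_i^2.

    Higher values indicate a more even community; values near 1 indicate
    dominance by a single OTU.
    """
    counts = Counter(otu_labels)
    n = sum(counts.values())
    D = sum((c / n) ** 2 for c in counts.values())
    return 1.0 / D
```

For a community of k equally abundant OTUs, 1/D equals k, so the index can be read as an "effective number" of OTUs.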
Jensen-Feynman approach to the statistics of interacting electrons.
Pain, Jean-Christophe; Gilleron, Franck; Faussurier, Gérald
2009-08-01
Faussurier [Phys. Rev. E 65, 016403 (2001)] proposed to use a variational principle relying on the Jensen-Feynman (or Gibbs-Bogoliubov) inequality in order to optimize the accounting for two-particle interactions in the calculation of canonical partition functions. It consists of a decomposition into a reference electron system and a first-order correction. The procedure is very efficient for evaluating the free energy and the orbital populations. In this work, we present numerical applications of the method and propose to extend it using a reference energy which includes the interaction between two electrons inside a given orbital. This is possible thanks to our efficient recursion relation for the calculation of partition functions. We also show that a linear reference energy is usually sufficient to achieve good precision, and that the most promising way to improve Faussurier's approach is to apply Jensen's inequality to a more convenient convex function.
Improved Test Planning and Analysis Through the Use of Advanced Statistical Methods
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Maxwell, Katherine A.; Glass, David E.; Vaughn, Wallace L.; Barger, Weston; Cook, Mylan
2016-01-01
The goal of this work is, through computational simulations, to provide statistically-based evidence to convince the testing community that a distributed testing approach is superior to a clustered testing approach for most situations. For clustered testing, numerous, repeated test points are acquired at a limited number of test conditions. For distributed testing, only one or a few test points are requested at many different conditions. The statistical techniques of Analysis of Variance (ANOVA), Design of Experiments (DOE) and Response Surface Methods (RSM) are applied to enable distributed test planning, data analysis and test augmentation. The D-Optimal class of DOE is used to plan an optimally efficient single- and multi-factor test. The resulting simulated test data are analyzed via ANOVA and a parametric model is constructed using RSM. Finally, ANOVA can be used to plan a second round of testing to augment the existing data set with new data points. The use of these techniques is demonstrated through several illustrative examples. To date, many thousands of comparisons have been performed and the results strongly support the conclusion that the distributed testing approach outperforms the clustered testing approach.
Statistical Physics Approaches to Respiratory Dynamics and Lung Structure
NASA Astrophysics Data System (ADS)
Suki, Bela
2004-03-01
The lung consists of a branching airway tree embedded in viscoelastic tissue and provides life-sustaining gas exchange to the body. In diseases, its structure is damaged and its function is compromised. We review two recent works about lung structure and dynamics and how they change in disease. 1) We introduced a new acoustic imaging approach to study airway structure. When airways in a collapsed lung are inflated, they pop open in avalanches. A single opening emits a sound package called crackle consisting of an initial spike (s) followed by ringing. The distribution n(s) of s follows a power law and the exponent of n(s) can be used to calculate the diameter ratio d defined as the ratio of the diameters of an airway to that of its parent averaged over all bifurcations. To test this method, we measured crackles in dogs, rabbits, rats and mice by inflating collapsed isolated lungs with air or helium while recording crackles with a microphone. In each species, n(s) follows a power law with an exponent that depends on species, but not on gas in agreement with theory. Values of d from crackles compare well with those calculated from morphometric data suggesting that this approach is suitable to study airway structure in disease. 2) Using novel experiments and computer models, we studied pulmonary emphysema which is caused by cigarette smoking. In emphysema, the elastic protein fibers of the tissue are actively remodeled by lung cells due to the chemicals present in smoke. We measured the mechanical properties of tissue sheets from normal and emphysematous lungs and imaged its structure which appears as a heterogeneous hexagonal network of fibers. We found evidence that during uniaxial stretching, the collagen and elastin fibers in emphysematous tissue can fail at a critical stress generating holes of various sizes (h). We developed network models of the failure process. When the failure is governed by mechanical forces, the distribution n(h) of h is a power law which
Predicting major element mineral/melt equilibria - A statistical approach
NASA Technical Reports Server (NTRS)
Hostetler, C. J.; Drake, M. J.
1980-01-01
Empirical equations have been developed for calculating the mole fractions of NaO0.5, MgO, AlO1.5, SiO2, KO0.5, CaO, TiO2, and FeO in a solid phase of initially unknown identity given only the composition of the coexisting silicate melt. The approach involves a linear multivariate regression analysis in which solid composition is expressed as a Taylor series expansion of the liquid compositions. An internally consistent precision of approximately 0.94 is obtained, that is, the nature of the liquidus phase in the input data set can be correctly predicted for approximately 94% of the entries. The composition of the liquidus phase may be calculated to better than 5 mol % absolute. An important feature of this 'generalized solid' model is its reversibility; that is, the dependent and independent variables in the linear multivariate regression may be inverted to permit prediction of the composition of a silicate liquid produced by equilibrium partial melting of a polymineralic source assemblage.
Synthetic Lethality as a Targeted Approach to Advanced Prostate Cancer
2013-03-01
…target for therapy of prostate cancer, but approaches aimed at Ras itself, or its critical signaling pathways, which are required in normal tissues… Impact: Current therapies for prostate cancer are inadequate, and aberrant activation of Ras or Ras pathways is common. A novel therapeutic modality… to Advanced Prostate Cancer. PRINCIPAL INVESTIGATOR: Douglas V. Faller, PhD, MD. CONTRACTING ORGANIZATION: Trustees of Boston University
Sikirzhytskaya, Aliaksandra; Sikirzhytski, Vitali; Lednev, Igor K
2014-01-01
Body fluids are a common and important type of forensic evidence. In particular, the identification of menstrual blood stains is often a key step during the investigation of rape cases. Here, we report on the application of near-infrared Raman microspectroscopy for differentiating menstrual blood from peripheral blood. We observed that menstrual and peripheral blood samples have similar but distinct Raman spectra. Advanced statistical analysis of the multiple Raman spectra that were automatically acquired (Raman mapping) from 40 dried blood stains (20 donors for each group) allowed us to build a classification model with maximum (100%) sensitivity and specificity. We also demonstrated that, despite certain common constituents, menstrual blood can be readily distinguished from vaginal fluid. All of the classification models were verified using cross-validation methods. The proposed method overcomes the problems associated with currently used biochemical methods, which are destructive, time-consuming and expensive.
Advanced statistical process control: controlling sub-0.18-μm lithography and other processes
NASA Astrophysics Data System (ADS)
Zeidler, Amit; Veenstra, Klaas-Jelle; Zavecz, Terrence E.
2001-08-01
access of the analysis to include the external variables involved in CMP, deposition, etc. We then applied yield analysis methods to identify the significant lithography-external process variables from the history of lots, subsequently adding the identified process variables to the signatures database and to the PPC calculations. With these improvements, the authors anticipate a 50% improvement of the process window. This improvement results in a significant reduction of rework and improved yield, depending on process demands and equipment configuration. A statistical theory that explains PPC is then presented. This theory can be used to simulate a general PPC application. In conclusion, the PPC concept is not limited to lithography or to semiconductors. In fact, it is applicable to any production process that is signature-biased (chemical industry, car industry, etc.). Requirements for PPC are large-scale data collection, a controllable process that is not too expensive to tune for every lot, and the ability to employ feedback calculations. PPC is a major change in the process management approach and will therefore first be employed where the need is high and the return on investment is very fast. The best industry to start with is semiconductors, and the most likely process area to start with is lithography.
The Development of Official Social Statistics in Italy with a Life Quality Approach
ERIC Educational Resources Information Center
Sabbadini, Linda Laura
2011-01-01
The article covers the main steps of official statistics in the second half of the Nineties through the illustration of the transition from economic oriented official statistics to the quality of life approach. The system of the Multipurpose Surveys introduced in 1993 to give an answer to questions at social level and to provide indicators for…
Monte Carlo Simulations in Statistical Physics -- From Basic Principles to Advanced Applications
NASA Astrophysics Data System (ADS)
Janke, Wolfhard
2013-08-01
This chapter starts with an overview of Monte Carlo computer simulation methodologies which are illustrated for the simple case of the Ising model. After reviewing importance sampling schemes based on Markov chains and standard local update rules (Metropolis, Glauber, heat-bath), nonlocal cluster-update algorithms are explained which drastically reduce the problem of critical slowing down at second-order phase transitions and thus improve the performance of simulations. How this can be quantified is explained in the section on statistical error analyses of simulation data including the effect of temporal correlations and autocorrelation times. Histogram reweighting methods are explained in the next section. Eventually, more advanced generalized ensemble methods (simulated and parallel tempering, multicanonical ensemble, Wang-Landau method) are discussed which are particularly important for simulations of first-order phase transitions and, in general, of systems with rare-event states. The setup of scaling and finite-size scaling analyses is the content of the following section. The chapter concludes with two advanced applications to complex physical systems. The first example deals with a quenched, diluted ferromagnet, and in the second application we consider the adsorption properties of macromolecules such as polymers and proteins to solid substrates. Such systems often require especially tailored algorithms for their efficient and successful simulation.
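The local Metropolis update for the Ising model that opens the chapter can be sketched in a few lines (a minimal single-spin-flip sweep with periodic boundaries and coupling J = 1; variable and function names are illustrative, and none of the advanced cluster or generalized-ensemble methods are shown):

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep over an L x L Ising lattice of +/-1 spins.

    Each of the L*L attempted flips is accepted with probability
    min(1, exp(-beta * dE)), where dE is the energy change of the flip.
    """
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest neighbors (periodic boundary conditions).
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins
```

This local rule is precisely what suffers from critical slowing down near the second-order transition, which motivates the cluster-update algorithms discussed next in the chapter.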
RAMS approach for reusable launch vehicle advanced studies
NASA Astrophysics Data System (ADS)
Tatry, PH.; Deneu, F.; Simonotti, J. L.
The emergence of reusable single-stage-to-orbit (SSTO) concepts as credible launchers at the turn of the century is changing some technical and technological approaches to future launcher advanced studies. Among others (such as operations through the "aircraft-like operations" concept), the RAMS approach (reliability, availability, maintainability and safety) has to be implemented from the very beginning of a concept study, especially for SSTOs, in order to meet the "able" requirements (affordable, reusable, reliable, available and operable). Beyond the "traditional" considerations applied to expendable launchers and/or man-rated space transportation systems, the involvement of RAMS in reusable launcher advanced studies and concept trade-offs must allow the best balance to be struck between costs, performance and related risks. For instance, in the framework of SSTO key-technology identification studies performed at Aerospatiale, RAMS has been involved from the beginning of the preliminary design task. This approach has shown that the assessment of the main propulsion failure risks and associated probabilities of occurrence has strongly affected the vehicle design, within mission management as well as technical aspects such as main propulsion specifications, ascent trajectory shaping and the landing phase scenario (VTOVL configuration). This paper describes this RAMS approach and addresses how it has been applied in a trade-off on the VTOVL concept.
PREFACE: Advanced many-body and statistical methods in mesoscopic systems
NASA Astrophysics Data System (ADS)
Anghel, Dragos Victor; Sabin Delion, Doru; Sorin Paraoanu, Gheorghe
2012-02-01
It has increasingly been realized in recent times that the borders separating various subfields of physics are largely artificial. This is the case for nanoscale physics, physics of lower-dimensional systems and nuclear physics, where the advanced techniques of many-body theory developed in recent times could provide a unifying framework for these disciplines under the general name of mesoscopic physics. Other fields, such as quantum optics and quantum information, are increasingly using related methods. The 6-day conference 'Advanced many-body and statistical methods in mesoscopic systems' that took place in Constanta, Romania, between 27 June and 2 July 2011 was, we believe, a successful attempt at bridging an impressive list of topical research areas: foundations of quantum physics, equilibrium and non-equilibrium quantum statistics/fractional statistics, quantum transport, phases and phase transitions in mesoscopic systems/superfluidity and superconductivity, quantum electromechanical systems, quantum dissipation, dephasing, noise and decoherence, quantum information, spin systems and their dynamics, fundamental symmetries in mesoscopic systems, phase transitions, exactly solvable methods for mesoscopic systems, various extensions of the random phase approximation, open quantum systems, clustering, decay and fission modes and systematic versus random behaviour of nuclear spectra. This event brought together participants from seventeen countries and five continents. Each of the participants brought considerable expertise in his/her field of research and, at the same time, was exposed to the newest results and methods coming from the other, seemingly remote, disciplines. The talks touched on subjects that are at the forefront of topical research areas and we hope that the resulting cross-fertilization of ideas will lead to new, interesting results from which everybody will benefit. We are grateful for the financial and organizational support from IFIN-HH, Ovidius
Comparison of different approaches to evaluation of statistical error of the DSMC method
NASA Astrophysics Data System (ADS)
Plotnikov, M. Yu.; Shkarupa, E. V.
2012-11-01
Although the direct simulation Monte Carlo (DSMC) method is widely used for solving steady problems of rarefied gas dynamics, the evaluation of its statistical error is far from fully understood. Typically, the statistical error in a Monte Carlo method is estimated by the standard deviation, determined by the variance of the estimate and the number of its realizations, under the assumption that the sampled realizations are independent. In contrast to the classical Monte Carlo method, the DSMC method uses time-averaged estimates, and the sampled realizations are dependent. Additional difficulties in evaluating the statistical error arise from the complexity of the estimates used in the DSMC method. In the present work we compare two approaches to evaluating the statistical error. One is based on results from equilibrium statistical mechanics and the "persistent random walk"; the other is based on the central limit theorem for Markov processes. Each approach has its own benefits and disadvantages. The first requires no additional computations to construct estimates of the statistical error, but it applies only when all components of velocity and temperature are equivalent. The second is applicable to DSMC simulation of flows with any degree of nonequilibrium and allows evaluation of the statistical errors of the velocity- and temperature-component estimates. The two approaches are compared on a number of classic problems with different degrees of nonequilibrium.
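As the abstract notes, time-averaged DSMC estimates are built from dependent realizations, so the naive independent-sample formula understates the error. A minimal, generic illustration of this point (not one of the two approaches compared in the paper) is the method of batch means; all parameters below are invented for illustration:

```python
import numpy as np

def batch_means_stderr(samples, n_batches=20):
    """Estimate the standard error of a time-averaged quantity from
    correlated samples via batch means: split the series into contiguous
    batches long enough that batch averages are roughly independent,
    then apply the usual independent-sample formula to the batch means."""
    samples = np.asarray(samples, dtype=float)
    usable = (len(samples) // n_batches) * n_batches
    batch_avgs = samples[:usable].reshape(n_batches, -1).mean(axis=1)
    return batch_avgs.std(ddof=1) / np.sqrt(n_batches)

# A correlated AR(1) series: the naive sigma/sqrt(N) is too optimistic
rng = np.random.default_rng(0)
x = np.empty(20000)
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = 0.9 * x[i - 1] + rng.normal()
naive = x.std(ddof=1) / np.sqrt(len(x))
print(batch_means_stderr(x) > naive)  # → True: correlation inflates the error
```

With correlation coefficient 0.9, the batch-means estimate comes out several times larger than the naive one, which is exactly the gap the two approaches in the paper aim to quantify rigorously.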
Advanced Stirling Convertor Dynamic Test Approach and Results
NASA Technical Reports Server (NTRS)
Meer, David W.; Hill, Dennis; Ursic, Joseph J.
2010-01-01
The U.S. Department of Energy (DOE), Lockheed Martin Corporation (LM), and NASA Glenn Research Center (GRC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. As part of the extended operation testing of this power system, the Advanced Stirling Convertors (ASC) at NASA GRC undergo a vibration test sequence intended to simulate the vibration history that an ASC would experience when used in an ASRG for a space mission. This sequence includes testing at workmanship and flight acceptance levels, interspersed with periods of extended operation to simulate pre-fueling and post-fueling. The final step in the test sequence uses additional testing at flight acceptance levels to simulate launch. To better replicate the acceleration profile seen by an ASC incorporated into an ASRG, the input spectra used in testing the convertors were modified based on dynamic testing of the ASRG Engineering Unit (ASRG EU) at LM. This paper outlines the overall test approach, summarizes the test results from the ASRG EU, describes the incorporation of those results into the test approach, and presents the results of applying the test approach to the ASC-1 #3 and #4 convertors. The test results include data from several accelerometers mounted on the convertors as well as the piston position and output power variables.
Statistical methods and neural network approaches for classification of data from multiple sources
NASA Technical Reports Server (NTRS)
Benediktsson, Jon Atli; Swain, Philip H.
1990-01-01
Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. A general problem with conventional multivariate statistical approaches to classifying data of multiple types is that a single multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable; they therefore need to be weighted according to their reliability, but most statistical classification methods provide no mechanism for this. This research first focuses on statistical methods that can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Second, this research focuses on neural network models. Neural networks are distribution-free, since no prior knowledge of the statistical distribution of the data is needed, an obvious advantage over most statistical classification methods. Neural networks also automatically handle the question of how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time; methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared on the basis of these results.
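The reliability-weighting idea can be illustrated with a linear opinion pool, one elementary form of consensus theory; the class probabilities and reliability weights below are purely illustrative, not values from the study:

```python
import numpy as np

def consensus_combine(source_probs, reliabilities):
    """Linear opinion pool: combine per-source class-probability vectors
    using reliability weights, a simple instance of consensus theory.
    source_probs: (n_sources, n_classes); reliabilities: (n_sources,)."""
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()                       # normalize reliability weights
    probs = np.asarray(source_probs, dtype=float)
    combined = w @ probs                  # weighted average of distributions
    return combined / combined.sum()      # renormalize to a distribution

# Two sources disagree; the more reliable one dominates the decision
p = [[0.8, 0.2],    # source 1 favors class 0
     [0.3, 0.7]]    # source 2 favors class 1
print(consensus_combine(p, [0.9, 0.1]))  # → [0.75 0.25]
```

The combined distribution still favors class 0 because the dissenting source carries only a tenth of the total weight, which is the behavior a reliability mechanism is meant to provide.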
Unal, Cetin; Pasamehmetoglu, Kemal; Carmack, Jon
2010-01-01
Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model nuclear fuel systems is critical. In order to understand specific aspects of nuclear fuel, fully coupled fuel simulation codes are required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding of, and predictive capability for, the phase and microstructural behavior of the nuclear fuel system materials and matrices. The purpose of this paper is to identify the modeling and simulation approach needed to deliver predictive tools for advanced fuels development. Coordination between experimental nuclear fuel design and development technical experts and computational fuel modeling and simulation technical experts is a critical aspect of the approach; it naturally leads to an integrated, goal-oriented, science-based R&D approach and strengthens both the experimental and computational efforts. The Advanced Fuels Campaign (AFC) and the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Integrated Performance and Safety Code (IPSC) are working together to determine experimental data and modeling needs. The primary objective of the NEAMS fuels IPSC project is to deliver a coupled, three-dimensional, predictive computational platform for modeling the fabrication and both normal and abnormal operation of nuclear fuel pins and assemblies, applicable to both existing and future reactor fuel designs. The science-based program is pursuing the development of an integrated multi-scale and multi-physics modeling and simulation platform for nuclear fuels. This overview paper discusses the vision, goals, and approaches for developing and implementing this new approach.
NASA Astrophysics Data System (ADS)
Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.
2015-08-01
We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
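The core of statistical optimization for high-altitude initialization is an optimal linear combination of observed and background profiles, weighted by their error covariances. A toy sketch of that combination follows; the 3-level profiles and diagonal covariance matrices are invented for illustration and are far simpler than the geographically varying matrices the algorithm actually estimates:

```python
import numpy as np

def optimize_profile(a_obs, a_bgr, O, B):
    """Statistically optimal combination of an observed profile a_obs
    (error covariance O) with a background profile a_bgr (covariance B):
        a_opt = a_bgr + B (B + O)^{-1} (a_obs - a_bgr)
    Where observation noise dominates (e.g., high altitudes), a_opt
    relaxes toward the background; where it is small, the observation wins."""
    gain = B @ np.linalg.inv(B + O)
    return a_bgr + gain @ (a_obs - a_bgr)

# Toy 3-level profile: top level noisy, bottom level precise
a_bgr = np.array([1.0, 2.0, 3.0])            # background bending angles
a_obs = np.array([1.5, 2.1, 3.0])            # observed bending angles
B = np.diag([0.01, 0.01, 0.01])              # background error covariance
O = np.diag([1.0, 0.01, 1e-4])               # observation error per level
a_opt = optimize_profile(a_obs, a_bgr, O, B)
# Top level stays near the background; lower levels follow the observation
```

The quality of `a_opt` hinges entirely on how realistic B and O are, which is why the abstract emphasizes estimating these covariance matrices accurately rather than using simplified fixed values.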
NASA Astrophysics Data System (ADS)
Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.
2015-01-01
We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS) based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAMP and COSMIC measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction in random errors (standard deviations) of optimized bending angles, down to about two-thirds of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
Bhhatarai, Barun; Garg, Rajni; Gramatica, Paola
2010-07-12
Two parallel approaches to quantitative structure-activity relationships (QSAR) are predominant in the literature: one guided by mechanistic methods (including read-across) and another by statistical methods. To bridge the gap between these two approaches and to verify their main differences, a comparative study of mechanistically relevant and statistically relevant QSAR models was performed on a case study of 158 cycloalkyl-pyranones, biologically active as inhibitors (Ki) of HIV protease. First, multiple linear regression (MLR) based models were developed starting from a limited set of molecular descriptors that have widely been shown to have mechanistic interpretation. Then robust and predictive MLR models were developed on the same set using two different statistical approaches, unbiased in the choice of input descriptors: model development in the Statistical I method was guided by stepwise addition of descriptors, while genetic algorithm based descriptor selection was used for Statistical II. Internal validation, the standard error of the estimate, and Fisher's significance test were performed for both statistical models. In addition, external validation was performed for the Statistical II model, and the applicability domain was verified, as is normal practice in this approach. The relationships between the activity and the important descriptors selected in all the models were analyzed and compared. It is concluded that, despite the different type and number of input descriptors and the different descriptor selection tools or algorithms used to develop the final models, the mechanistic and statistical approaches are comparable to each other in quality and in the mechanistic interpretability of the modeling descriptors. Agreement can be observed between the two approaches, and the best result could be a consensus prediction from both models.
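Stepwise descriptor addition, as used for the Statistical I models, can be sketched as a greedy forward search that at each step adds the descriptor most reducing the residual sum of squares of the MLR fit. The data here are synthetic and the stopping rule (a fixed number of descriptors) is simplified relative to the significance-based criteria normally used:

```python
import numpy as np

def forward_stepwise(X, y, n_select):
    """Greedy forward selection of descriptors for an MLR model:
    at each step add the descriptor that most reduces the residual
    sum of squares of the least-squares fit with an intercept."""
    def rss(cols):
        A = np.column_stack([np.ones(len(y)), X[:, cols]])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return float(((y - A @ beta) ** 2).sum())

    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_select:
        best = min(remaining, key=lambda j: rss(selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Synthetic case: activity depends on descriptors 0 and 3; the rest are noise
rng = np.random.default_rng(2)
X = rng.normal(size=(80, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.1, 80)
print(sorted(forward_stepwise(X, y, 2)))  # → [0, 3]
```

Genetic-algorithm selection (Statistical II) explores descriptor subsets globally instead of greedily, which is why the two methods can land on different but comparably predictive descriptor sets.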
Dual-band, infrared buried mine detection using a statistical pattern recognition approach
Buhl, M.R.; Hernandez, J.E.; Clark, G.A.; Sengupta, S.K.
1993-08-01
The main objective of this work was to detect surrogate land mines, which were buried in clay and sand, using dual-band, infrared images. A statistical pattern recognition approach was used to achieve this objective. This approach is discussed and results of applying it to real images are given.
A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model
Pasqualini, Donatella
2016-05-11
This manuscript briefly describes a statistical approach to generating synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling hurricane risk in the United States, supporting decision makers and the implementation of adaptation strategies to extreme weather. In the literature there are mainly two approaches to modeling hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm's key physical parameters are calculated using complex physical climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls in the second category, adopting a purely stochastic approach.
Statistical approaches used to assess and redesign surface water-quality-monitoring networks.
Khalil, B; Ouarda, T B M J
2009-11-01
An up-to-date review of the statistical approaches utilized for the assessment and redesign of surface water quality monitoring (WQM) networks is presented. The main technical aspects of network design are covered in four sections, addressing monitoring objectives, water quality variables, sampling frequency and spatial distribution of sampling locations. This paper discusses various monitoring objectives and related procedures used for the assessment and redesign of long-term surface WQM networks. The appropriateness of each approach for the design, contraction or expansion of monitoring networks is also discussed. For each statistical approach, its advantages and disadvantages are examined from a network design perspective. Possible methods to overcome disadvantages and deficiencies in the statistical approaches that are currently in use are recommended.
Live-site UXO classification studies using advanced EMI and statistical models
NASA Astrophysics Data System (ADS)
Shamatava, I.; Shubitidze, F.; Fernandez, J. P.; Bijamov, A.; Barrowes, B. E.; O'Neill, K.
2011-06-01
In this paper we present the inversion and classification performance of the advanced EMI inversion, processing and discrimination schemes developed by our group when applied to the ESTCP Live-Site UXO Discrimination Study carried out at the former Camp Butner in North Carolina. The advanced models combine: 1) the joint diagonalization (JD) algorithm to estimate the number of potential anomalies from the measured data without inversion, 2) the ortho-normalized volume magnetic source (ONVMS) to represent targets' EMI responses and extract their intrinsic "feature vectors," and 3) the Gaussian mixture algorithm to classify buried objects as targets of interest or not, starting from the extracted discrimination features. The studies are conducted using cued datasets collected with the next-generation TEMTADS and MetalMapper (MM) sensor systems. For the cued TEMTADS datasets we first estimate the data quality and the number of targets contributing to each signal using the JD technique. Once we know the number of targets, we proceed to invert the data using a standard non-linear optimization technique in order to determine intrinsic parameters such as the total ONVMS for each potential target. Finally, we classify the targets using a library-matching technique. The MetalMapper data are all inverted as multi-target scenarios, and the resulting intrinsic parameters are grouped using an unsupervised Gaussian mixture approach. The potential targets of interest are a 37-mm projectile, an M48 fuze, and a 105-mm projectile. During the analysis we requested the ground truth for a few selected anomalies to assist in the classification task. Our results were scored independently by the Institute for Defense Analyses, which found that our advanced models produce superb classification when starting from either TEMTADS or MM cued datasets.
NASA Astrophysics Data System (ADS)
Andronov, I. L.; Chinarova, L. L.; Kudashkina, L. S.; Marsakova, V. I.; Tkachenko, M. G.
2016-06-01
We have elaborated a set of new algorithms and programs for advanced time-series analysis of (generally) multi-component, multi-channel observations with irregularly spaced observation times, a common situation for large photometric surveys. These methods for periodogram, scalegram, wavelet and autocorrelation analysis, as well as "running" or "sub-interval" local approximations, were previously self-reviewed in 2003ASPC..292..391A. For approximating the phase light curves of nearly periodic pulsating stars, we use a trigonometric polynomial (TP) fit of the statistically optimal degree, with initial period improvement using differential corrections (1994OAP.....7...49A). For determining the parameters of "characteristic points" (minima, maxima, crossings of some constant value, etc.) we use a set of methods self-reviewed in 2005ASPC..335...37A. Results of the analysis of the catalogs compiled using these programs are presented in 2014AASP....4....3A. For more complicated signals, we use "phenomenological approximations" with "special shapes" based on functions defined on sub-intervals rather than on the complete interval. For example, for Algol-type stars we developed the NAV ("New Algol Variable") algorithm (2012Ap.....55..536A, 2012arXiv1212.6707A, 2015JASS...32..127A), which was compared to the common trigonometric polynomial (TP) fit and to a local algebraic polynomial fit of fixed or (alternately) statistically optimal degree. The method allows determining the minimal set of parameters required for the "General Catalogue of Variable Stars", as well as an extended set of phenomenological and astrophysical parameters that may be used for classification. In total, more than 1900 variable stars have been studied in our group using these methods in the frame of the "Inter-Longitude Astronomy" campaign (2010OAP....23....8A) and the "Ukrainian Virtual Observatory" project (2012KPCB...28...85V).
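A trigonometric polynomial fit of a phased light curve reduces to linear least squares on cosine/sine basis columns. The sketch below uses synthetic, irregularly sampled data and a fixed degree; selecting the statistically optimal degree (as the methods above do) is not shown:

```python
import numpy as np

def trig_poly_fit(phase, mag, degree):
    """Least-squares trigonometric polynomial fit of a phased light curve:
        m(phi) ~ a0 + sum_k [a_k cos(2 pi k phi) + b_k sin(2 pi k phi)]
    Returns the fitted magnitudes at the input phases."""
    cols = [np.ones_like(phase)]
    for k in range(1, degree + 1):
        cols.append(np.cos(2 * np.pi * k * phase))
        cols.append(np.sin(2 * np.pi * k * phase))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, mag, rcond=None)
    return A @ coef

# Irregularly sampled, noisy periodic signal folded on its period
rng = np.random.default_rng(3)
phase = np.sort(rng.uniform(0, 1, 120))
mag = 10 + 0.5 * np.sin(2 * np.pi * phase) + rng.normal(0, 0.05, 120)
fit = trig_poly_fit(phase, mag, degree=2)
rms = np.sqrt(np.mean((fit - mag) ** 2))
print(rms < 0.1)  # residual scatter is near the noise level
```

Because the basis is linear in the coefficients, irregular sampling poses no difficulty for the fit itself; the harder problems, which the cited methods address, are period determination and degree selection.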
The advanced neutron source safety approach and plans
Harrington, R.M.
1989-01-01
The Advanced Neutron Source (ANS) is a user facility for all areas of neutron research proposed for construction at the Oak Ridge National Laboratory. The neutron source is planned to be a 350-MW research reactor. The reactor, currently in conceptual design, will belong to the United States Department of Energy (USDOE). The safety approach and planned elements of the safety program for the ANS are described. The safety approach is to incorporate USDOE requirements (which, by reference, include appropriate requirements from the United States Nuclear Regulatory Commission (USNRC) and other national and state regulatory agencies) into the design, and to utilize probabilistic risk assessment (PRA) techniques during design to achieve extremely low probability of severe core damage. The PRA has already begun and will continue throughout the design and construction of the reactor. Computer analyses will be conducted for a complete spectrum of accidental events, from anticipated events to very infrequent occurrences. 8 refs., 2 tabs.
On the Geometry of the Berry-Robbins Approach to Spin-Statistics
NASA Astrophysics Data System (ADS)
Papadopoulos, Nikolaos; Reyes-Lega, Andrés F.
2010-07-01
Within a geometric and algebraic framework, the structures related to the spin-statistics connection are discussed. A comparison with the Berry-Robbins approach is made. The underlying geometric structure constitutes additional support for this approach. In our work, a geometric approach to quantum indistinguishability is introduced which allows the treatment of single-valuedness of wave functions in a global, model-independent way.
ERIC Educational Resources Information Center
Potter, James Thomson, III
2012-01-01
Research into teaching practices and strategies has been performed separately in AP Statistics and in K-12 online learning (Garfield, 2002; Ferdig, DiPietro, Black & Dawson, 2009). This study seeks to combine the two and build on the need for more investigation into online teaching and learning of specific content (Ferdig et al., 2009; DiPietro,…
Classification of human colonic tissues using FTIR spectra and advanced statistical techniques
NASA Astrophysics Data System (ADS)
Zwielly, A.; Argov, S.; Salman, A.; Bogomolny, E.; Mordechai, S.
2010-04-01
One of the major public health hazards is colon cancer. There is a great need to develop new methods for early detection of cancer. If colon cancer is detected and treated early, a cure rate of more than 90% can be achieved. In this study we used FTIR microscopy (MSP), which has shown good potential over the last 20 years in the fields of medical diagnostics and early detection of abnormal tissues. A large database of FTIR microscopic spectra was acquired from 230 human colonic biopsies. Five subgroups were included in our database: normal and cancer tissues as well as three stages of benign colonic polyps, namely mild, moderate and severe polyps, which are precursors of carcinoma. We applied advanced mathematical and statistical techniques, including principal component analysis (PCA) and linear discriminant analysis (LDA), to the human colonic FTIR spectra in order to differentiate among these subgroups. Good classification accuracy between the normal, polyp and cancer groups was achieved, with an approximately 85% success rate. Our results show a great potential for developing FTIR microspectroscopy as a simple, reagent-free, viable tool for early detection of colon cancer, in particular the early stages of premalignancy among benign colonic polyps.
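The PCA-plus-LDA pipeline used in such studies can be sketched on synthetic stand-in spectra; the group difference (a Gaussian bump in a few "wavenumber" bands), noise level, and sample sizes below are all invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for FTIR spectra: two groups whose mean spectra
# differ slightly in a localized band (all values illustrative)
rng = np.random.default_rng(1)
n_per_group, n_bands = 60, 400
base = np.sin(np.linspace(0, 6, n_bands))
normal = base + rng.normal(0, 0.3, (n_per_group, n_bands))
bump = 0.4 * np.exp(-((np.arange(n_bands) - 250) / 20.0) ** 2)
cancer = base + bump + rng.normal(0, 0.3, (n_per_group, n_bands))
X = np.vstack([normal, cancer])
y = np.array([0] * n_per_group + [1] * n_per_group)

# PCA compresses the high-dimensional spectra; LDA then separates
# the groups in the reduced space, avoiding overfitting to 400 bands
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(accuracy > 0.8)
```

Reducing dimensionality before LDA matters here because, with far more spectral bands than biopsies, LDA applied directly to the raw spectra would be ill-conditioned.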
Milward, Elizabeth A; Moscato, Pablo; Riveros, Carlos; Johnstone, Daniel M
2014-01-01
Interventions to delay or slow Alzheimer's disease (AD) progression are most effective when implemented at pre-clinical disease stages, making early diagnosis essential. For this reason, there is an increasing focus on discovery of predictive biomarkers for AD. Currently, the most reliable predictive biomarkers require either expensive (brain imaging) or invasive (cerebrospinal fluid collection) procedures, leading researchers to strive toward identifying robust biomarkers in blood. Yet promising early results from candidate blood biomarker studies are being refuted by subsequent findings in other cohorts or using different assay technologies. Recent evidence suggests that univariate blood biomarkers are not sufficiently sensitive or specific for the diagnosis of disorders as complex, multifactorial, and heterogeneous as AD. To overcome these present limitations, more consideration must be given to the development of 'biomarker panels' assessing multiple molecular entities. The selection of such panels should draw not only on traditional statistical approaches, whether parametric or non-parametric, but also on newer non-statistical approaches that have the capacity to retain and utilize information about all individual study participants rather than collapsing individual data into group summary values (e.g., mean, variance). These new approaches, facilitated by advances in computing, have the potential to preserve the context of interrelationships between different molecular entities, making them amenable to the development of panels that, as a multivariate collective, can overcome the challenge of individual variability and disease heterogeneity to accurately predict and classify AD. We argue that the AD research community should take fuller advantage of these approaches to accelerate discovery.
A statistical-dynamical approach to represent Greenland ocean-ice sheet interactions
NASA Astrophysics Data System (ADS)
Perrette, Mahé; Calov, Reinhard; Ganopolski, Andrey; Robinson, Alex
2013-04-01
An understanding of the dynamics of the Greenland ice sheet is fundamental, because of its potential to contribute strongly to future sea level rise. In recent years there has been a discussion about the role of the ocean in the Greenland ice sheet's present and future mass balance. The ocean interacts with the ice sheet's outlet glaciers via the water circulation in the fjords and considerably affects melting at the termini of the outlet glaciers. Processes related to this interaction are difficult to represent in Greenland-wide ice-sheet models because grid resolution of such models is typically 10 km, whereas large fjords are more commonly only 1 to 5 km wide. Local refinement techniques (e.g. finite elements with adaptive mesh) can be a way of addressing that problem but are still computationally expensive to run. Here we propose a simpler, statistical-dynamical approach suited for large ensemble simulations over 100- to 1000-year integration times, in the EMIC spirit: the fjord-outlet glacier system is restricted to its most fundamental dynamics, controlled by a handful of parameters describing the major characteristics of the system. The model has a generic structure, i.e., it is designed such that it applies to every Greenland outlet glacier. Some of its parameters are fixed by using the (little) available observational data - e.g. for Helheim, Kangerdlugssuaq and Jakobshavn Isbrae - other parameters may vary depending on location. It is not our aim to simulate every single small outlet glacier in its full accuracy; but we aim to represent, on average, important characteristics like ice discharge and general advance/retreat rate on a regional scale over major catchment areas. Aspects of the coupling strategy with the 3D ice-sheet model (SICOPOLIS) are discussed, e.g., critical issues such as the treatment of mass balance. Preliminary design and results will be presented.
ERIC Educational Resources Information Center
McCarthy, Christopher J.; Lambert, Richard G.; Crowe, Elizabeth W.; McCarthy, Colleen J.
2010-01-01
This study examined the relationship of teachers' perceptions of coping resources and demands to job satisfaction factors. Participants were 158 Advanced Placement Statistics high school teachers who completed measures of personal resources for stress prevention, classroom demands and resources, job satisfaction, and intention to leave the field…
"I am Not a Statistic": Identities of African American Males in Advanced Science Courses
NASA Astrophysics Data System (ADS)
Johnson, Diane Wynn
The United States Bureau of Labor Statistics (2010) expects new industries to generate approximately 2.7 million jobs in science and technology by the year 2018, and there is concern as to whether there will be enough trained individuals to fill these positions. A tremendous resource remains untapped: African American students, especially African American males (National Science Foundation, 2009). Historically, African American males have been omitted from the so-called science pipeline. Fewer African American males pursue a science discipline due, in part, to limiting factors they experience in school and at home (Ogbu, 2004). This is a case study of African American males who were enrolled in advanced science courses at a predominantly African American (84%) urban high school. Guided by expectancy-value theory (EVT) of achievement-related results (Eccles, 2009; Eccles et al., 1983), twelve African American male students in two advanced science courses were observed weekly in their science classrooms, participated in an in-depth interview, developed a presentation to share with students enrolled in a tenth-grade science course, responded to an open-ended identity questionnaire, and were surveyed about their perceptions of school. Additionally, the students' teachers were interviewed, as were seven of the students' parents. The interview data analyses highlighted the important role of supportive parents (key socializers) who had high expectations for their sons and who pushed them academically. The students clearly attributed their enrollment in advanced science courses to their high regard for their science teachers, which included positive relationships, hands-on learning in class, and an inviting and encouraging learning environment. Additionally, other family members and coaches played important roles in these young men's lives. Students' PowerPoint(c) presentations to younger high school students on why they should take advanced science courses highlighted these
An alternative approach to confidence interval estimation for the win ratio statistic.
Luo, Xiaodong; Tian, Hong; Mohanty, Surya; Tsai, Wei Yann
2015-03-01
Pocock et al. (2012, European Heart Journal 33, 176-182) proposed a win ratio approach to analyzing composite endpoints comprising outcomes with different clinical priorities. In this article, we establish a statistical framework for this approach. We derive the null hypothesis and propose a closed-form variance estimator for the win ratio statistic in the all-pairwise-matching situation. Our simulation study shows that the proposed variance estimator performs well regardless of the magnitude of the treatment effect size and the type of the joint distribution of the outcomes.
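The win ratio itself is simple to compute in the all-pairwise-matching case: compare every treatment patient with every control patient on the outcomes in priority order. A simplified sketch follows (ignoring censoring and the variance estimation that are the article's actual contribution; the data are invented):

```python
def win_ratio(treat, control):
    """All-pairwise win ratio for a two-level composite endpoint.
    Each row is e.g. (time to death, time to hospitalization); longer
    is better. Pairs are decided on the higher-priority outcome first;
    exact ties fall through to the lower-priority one."""
    wins = losses = 0
    for t in treat:
        for c in control:
            for t_k, c_k in zip(t, c):   # outcomes in priority order
                if t_k > c_k:
                    wins += 1
                    break
                if t_k < c_k:
                    losses += 1
                    break
            # a full tie on all outcomes contributes to neither count
    return wins / losses

treat = [(10, 5), (8, 7), (6, 6)]
control = [(9, 4), (6, 3), (6, 6)]
print(win_ratio(treat, control))  # → 3.0  (6 wins, 2 losses)
```

Because every treatment-control pair is compared, the pairwise decisions are statistically dependent, which is precisely why a purpose-built variance estimator such as the one proposed in the article is needed for inference.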
Salman, A; Shufan, E; Zeiri, L; Huleihel, M
2014-07-01
Herpes viruses are involved in a variety of human disorders. Herpes Simplex Virus type 1 (HSV-1) is the most common among the herpes viruses and is primarily involved in human cutaneous disorders. Although the symptoms of infection by this virus are usually minimal, in some cases HSV-1 might cause serious infections in the eyes and the brain leading to blindness and even death. A drug, acyclovir, is available to counter this virus. The drug is most effective when used during the early stages of the infection, which makes early detection and identification of these viral infections highly important for successful treatment. In the present study we evaluated the potential of Raman spectroscopy as a sensitive, rapid, and reliable method for the detection and identification of HSV-1 viral infections in cell cultures. Using Raman spectroscopy followed by advanced statistical methods enabled us, with sensitivity approaching 100%, to differentiate between a control group of Vero cells and another group of Vero cells that had been infected with HSV-1. Cell sites that were "rich in membrane" gave the best results in the differentiation between the two categories. The major changes were observed in the 1195-1726 cm(-1) range of the Raman spectrum. The features in this range are attributed mainly to proteins, lipids, and nucleic acids.
Drugging Chromatin in Cancer: Recent Advances and Novel Approaches
Cai, Sheng F.; Chen, Chun-Wei; Armstrong, Scott A.
2015-01-01
Chromatin regulatory mechanisms play a major role in the control of gene expression programs during normal development and are disrupted in specific disease states, particularly in cancer. Important mediators of chromatin regulatory processes can broadly be classified into writers, erasers, and readers of covalent chromatin modifications that modulate eukaryotic gene transcription and maintain the integrity of the genome. The reversibility and disease-specific nature of these chromatin states make these regulators attractive therapeutic targets. As such, there is an ever-increasing number of candidate therapies aimed at targeting cancer-associated chromatin states that are in various stages of preclinical and clinical development. In this review, we discuss recent advances that have been made in the rational therapeutic targeting of chromatin regulatory mechanisms and highlight certain cancers where there is a specific rationale to assess these therapeutic approaches. PMID:26590715
Evidence-based approaches to other symptoms in advanced cancer.
Dy, Sydney Morss; Apostol, Colleen C
2010-01-01
Dyspnea, nausea and vomiting, anorexia, fatigue, and sleep disturbances are common and distressing in advanced cancer. We updated previous systematic reviews of how these symptoms can be alleviated, using targeted literature searches. The approach to these symptoms requires comprehensive symptom assessment; treating underlying causes when benefits exceed risks; prioritizing treatment, as patients usually have many symptoms; and addressing psychosocial and spiritual distress. For dyspnea, evidence supports systemic opioids and nonpharmacological treatments such as a fan. The strongest evidence supports metoclopramide for cancer-related nausea and octreotide for bowel obstruction. For anorexia, enteral or parenteral nutrition is indicated with obstruction and an expected prognosis of at least 6 weeks. Evidence supports several appetite stimulants when anorexia affects quality of life. For fatigue, evidence supports psychosocial interventions and methylphenidate. For insomnia, evidence supports cognitive-behavioral therapy in cancer; no sleep agents have superior effectiveness.
Reliability Demonstration Approach for Advanced Stirling Radioisotope Generator
NASA Technical Reports Server (NTRS)
Ha, Chuong; Zampino, Edward; Penswick, Barry; Spronz, Michael
2010-01-01
Developed for future space missions as a high-efficiency power system, the Advanced Stirling Radioisotope Generator (ASRG) has a design life requirement of 14 yr in space following a potential storage of 3 yr after fueling. In general, the demonstration of long-life dynamic systems remains difficult, in part due to the perception that the wearout of moving parts cannot be minimized and the associated failures are unpredictable. This paper shows that a combination of systematic analytical methods, extensive experience gained from technology development, and well-planned tests can be used to ensure a high level of reliability for the ASRG. With this approach, all potential risks from each life phase of the system are evaluated and their mitigation adequately addressed. This paper also provides a summary of important test results obtained to date for the ASRG and the planned effort for system-level extended operation.
Analyzing Planck and low redshift data sets with advanced statistical methods
NASA Astrophysics Data System (ADS)
Eifler, Tim
The recent ESA/NASA Planck mission has provided a key data set to constrain cosmology that is most sensitive to physics of the early Universe, such as inflation and primordial non-Gaussianity (Planck 2015 results XIII). In combination with cosmological probes of the Large-Scale Structure (LSS), the Planck data set is a powerful source of information for investigating late-time phenomena (Planck 2015 results XIV), e.g. the accelerated expansion of the Universe, the impact of baryonic physics on the growth of structure, and the alignment of galaxies in their dark matter halos. The main objective of this proposal is to re-analyze the archival Planck data, 1) with different, more recently developed statistical methods for cosmological parameter inference, and 2) to combine Planck and ground-based observations in an innovative way. We will make the corresponding analysis framework publicly available and believe that it will set a new standard for future CMB-LSS analyses. Advanced statistical methods, such as the Gibbs sampler (Jewell et al. 2004, Wandelt et al. 2004), have been critical in the analysis of Planck data. More recently, Approximate Bayesian Computation (ABC; see Weyant et al. 2012, Akeret et al. 2015, Ishida et al. 2015, for cosmological applications) has matured into an interesting tool for cosmological likelihood analyses. It circumvents several assumptions that enter the standard Planck (and most LSS) likelihood analyses, most importantly the assumption that the functional form of the likelihood of the CMB observables is a multivariate Gaussian. Beyond applying new statistical methods to Planck data in order to cross-check and validate existing constraints, we plan to combine Planck and DES data in a new and innovative way and run multi-probe likelihood analyses of CMB and LSS observables. The complexity of multi-probe likelihood analyses scales (non-linearly) with the level of correlation among the individual probes that are included. For the multi
Statistical approaches to human brain mapping by functional magnetic resonance imaging.
Lange, N
1996-02-28
Proper use of functional neuro-imaging through effective experimental design and modern statistical analysis provides new insights in current brain research. This tutorial has two aims: to describe aspects of this technology to applied statisticians and to provide some statistical ideas to neuroscientists unfamiliar with quantitative analytic methods that accommodate randomness. Introductory background material and ample references to current literature on the physics of magnetic resonance imaging, Fourier methods for image reconstruction and measures of image quality are included. Two of the statistical approaches mentioned here are extensions of established methods for longitudinal data analysis to the frequency domain. A recent case study provides real-world instances of approaches, problems and open questions encountered in current functional neuro-imaging research and an introduction to the analysis of spatial time series in this context.
Investigation of X-Ray Thomson Scattering Using A Statistical Approach
NASA Astrophysics Data System (ADS)
Johnson, Laura
2014-10-01
We present a statistical method of computing x-ray Thomson scattering signals. This model uses average atom wave functions for both bound and continuum electrons, which are computed in a spherically symmetric, self-consistent potential. The wave functions are used to obtain electron distributions for a statistical approach to computing the scattering signals. We compare the differences between using distorted-wave continuum electrons and free-wave electrons in both the statistical approach and the impulse approximation. The results are compared to various experiments including experimental data taken at Cornell's Laboratory of Plasma Studies. Sandia is a multi-program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.
Q2-Dependence of the Statistical Parton Distributions in the Valon Approach
NASA Astrophysics Data System (ADS)
Sohaily, S.; Yazdanpanah, M. M.; Mirjalili, A.
2012-06-01
We employ the statistical approach to obtain the nucleon parton distributions. Statistical distributions are considered as well for partons in the valon model, in which a nucleon is assumed to be a state of three valence quark clusters (valons). Analytic expressions for the x-dependence of the parton distribution functions (PDFs) in the valon model are obtained statistically in the whole x region [0, 1] in terms of statistical parameters such as temperature, chemical potential, and accessible volume. Since the PDFs are obtained by imposing the required sum rules, including the Gottfried sum rule, at different energy scales, the Q2-dependence of these parameters can be obtained. Therefore the parton distributions as a function of Q2 will result. To make the calculations more precise, we extend our results to contain three flavors rather than only the two light u and d quarks.
Thomas, Jeffrey G.; Olson, James M.; Tapscott, Stephen J.; Zhao, Lue Ping
2001-01-01
We have developed a statistical regression modeling approach to discover genes that are differentially expressed between two predefined sample groups in DNA microarray experiments. Our model is based on well-defined assumptions, uses rigorous and well-characterized statistical measures, and accounts for the heterogeneity and genomic complexity of the data. In contrast to cluster analysis, which attempts to define groups of genes and/or samples that share common overall expression profiles, our modeling approach uses known sample group membership to focus on expression profiles of individual genes in a sensitive and robust manner. Further, this approach can be used to test statistical hypotheses about gene expression. To demonstrate this methodology, we compared the expression profiles of 11 acute myeloid leukemia (AML) and 27 acute lymphoblastic leukemia (ALL) samples from a previous study (Golub et al. 1999) and found 141 genes differentially expressed between AML and ALL with a 1% significance at the genomic level. Using this modeling approach to compare different sample groups within the AML samples, we identified a group of genes whose expression profiles correlated with that of thrombopoietin and found that genes whose expression associated with AML treatment outcome lie in recurrent chromosomal locations. Our results are compared with those obtained using t-tests or Wilcoxon rank sum statistics. PMID:11435405
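The gene-by-gene comparison above can be sketched with the simpler per-gene t-test and Wilcoxon rank-sum baselines the authors compare against (not their regression model itself). The group sizes mirror the 11 AML vs. 27 ALL design, but the expression matrix here is simulated:

```python
import numpy as np
from scipy.stats import ttest_ind, ranksums

rng = np.random.default_rng(0)
# Toy expression matrix: 100 genes x (11 "AML" + 27 "ALL") samples.
# The first 10 genes get a shifted mean in group 1 (hypothetical data).
aml = rng.normal(0.0, 1.0, size=(100, 11))
all_ = rng.normal(0.0, 1.0, size=(100, 27))
aml[:10] += 2.0

# Per-gene two-sample tests, as used for the comparison in the abstract.
t_p = np.array([ttest_ind(aml[g], all_[g]).pvalue for g in range(100)])
w_p = np.array([ranksums(aml[g], all_[g]).pvalue for g in range(100)])

# Bonferroni control at a "genomic-level" 1% across all genes tested.
hits = np.flatnonzero(t_p < 0.01 / 100)
print(len(hits), "genes flagged as differentially expressed")
```

The regression model in the paper replaces these marginal tests with a fitted per-gene model, but the multiple-testing bookkeeping is the same in spirit.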
Activating Public Administration Students in a Statistics Course: A Team-Teaching Approach.
ERIC Educational Resources Information Center
Hy, Ronald John; Hughes, Linda
1988-01-01
Describes a team teaching approach to statistics in public administration programs. Discusses a format requiring that students combine skills of critical inquiry and understanding of numerical data with the literacy skills public administrators need in order to communicate with government officials, managers, and staff. Suggests applications for…
How large is the gluon polarization in the statistical parton distributions approach?
Soffer, Jacques; Bourrely, Claude; Buccella, Franco
2015-04-10
We review the theoretical foundations of the quantum statistical approach to parton distributions and we show that by using some recent experimental results from Deep Inelastic Scattering, we are able to improve the description of the data by means of a new determination of the parton distributions. We will see that a large gluon polarization emerges, giving a significant contribution to the proton spin.
Murari, A; Gelfusa, M; Peluso, E; Gaudio, P; Mazon, D; Hawkes, N; Point, G; Alper, B; Eich, T
2014-12-01
In a Tokamak the configuration of the magnetic fields remains the key element to improve performance and to maximise the scientific exploitation of the device. On the other hand, the quality of the reconstructed fields depends crucially on the measurements available. Traditionally, in the least-square minimisation phase of the algorithms used to obtain the magnetic field topology, all the diagnostics are given the same weights, apart from a corrective factor taking into account the error bars. This assumption unduly penalises complex diagnostics, such as polarimetry, which have a limited number of highly significant measurements. A completely new method to choose the weights, to be given to the internal measurements of the magnetic fields for improved equilibrium reconstructions, is presented in this paper. The approach is based on various statistical indicators applied to the residuals, the differences between the actual measurements and their estimates from the reconstructed equilibrium. The potential of the method is exemplified using the measurements of the Faraday rotation derived from the JET polarimeter. The results indicate quite clearly that the weights have to be determined carefully, since an inappropriate choice can have significant repercussions on the quality of the magnetic reconstruction both in the edge and in the core. These results confirm the limitations of the assumption that all the diagnostics have to be given the same weight, irrespective of the number of measurements they provide and the region of the plasma they probe.
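As a toy illustration of why equal weighting can penalise a small, precise diagnostic, the sketch below solves a linear "reconstruction" with equal weights and then with residual-informed weights. The sizes, noise levels, and inverse-residual-std weighting rule are all invented stand-ins for the statistical indicators in the paper, not the JET algorithm:

```python
import numpy as np

rng = np.random.default_rng(7)
# Two diagnostics constrain the same 3 parameters: one with many noisy
# channels, one with few but precise channels (hypothetical setup).
x_true = np.array([1.0, -2.0, 0.5])
A1 = rng.normal(size=(40, 3)); y1 = A1 @ x_true + rng.normal(0, 0.5, 40)
A2 = rng.normal(size=(5, 3));  y2 = A2 @ x_true + rng.normal(0, 0.05, 5)

def solve(w1, w2):
    """Weighted least squares over the stacked diagnostics."""
    A = np.vstack([w1 * A1, w2 * A2])
    y = np.concatenate([w1 * y1, w2 * y2])
    return np.linalg.lstsq(A, y, rcond=None)[0]

# Pass 1: equal weights, as in the traditional approach.
x_eq = solve(1.0, 1.0)
# Pass 2: reweight each diagnostic by the inverse spread of its residuals
# (a crude residual-based indicator).
r1 = y1 - A1 @ x_eq
r2 = y2 - A2 @ x_eq
x_w = solve(1.0 / r1.std(), 1.0 / r2.std())
print("equal-weight error:", np.abs(x_eq - x_true).max())
print("reweighted error:  ", np.abs(x_w - x_true).max())
```

The reweighted solve lets the five precise polarimetry-like channels pull their proper weight instead of being drowned out by the forty noisy ones.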
A Statistical Filtering Approach for Gravity Recovery and Climate Experiment (GRACE) Gravity Data
NASA Technical Reports Server (NTRS)
Davis, J. L.; Tamisiea, M. E.; Elosegui, P.; Mitrovica, J. X.; Hill, E. M.
2008-01-01
We describe and analyze a statistical filtering approach for GRACE data that uses a parametrized model for the temporal evolution of the GRACE coefficients. After least-squares adjustment, a statistical test is performed to assess the significance of the estimated parameters. If the test is passed, the parameters are used by the filter in the reconstruction of the field; otherwise they are rejected. The test is performed, and the filter is formed, separately for the annual components of the model and the trend. This new approach is distinct from Gaussian smoothing since it uses the data themselves to test for specific components of the time-varying gravity field. The statistical filter appears inherently to remove most of the "stripes" present in the GRACE fields, although destriping the fields prior to filtering seems to help the trend recovery. We demonstrate that the statistical filter produces reasonable maps for the annual components and trend. We furthermore assess the statistical filter for the annual components using ground-based GPS data in South America by assuming that the annual component of the gravity signal is associated only with groundwater storage. The un-destriped, statistically filtered field has a χ2 value relative to the GPS data consistent with the best result from smoothing. In the space domain, the statistical filters are qualitatively similar to Gaussian smoothing. Unlike Gaussian smoothing, however, the statistical filter has significant sidelobes, including large negative sidelobes on the north-south axis, potentially revealing information on the errors, and the correlations among the errors, for the GRACE coefficients.
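A minimal sketch of the test-then-reconstruct idea for a single coefficient time series, using a trend-plus-annual design matrix and a crude z-style significance test (the paper's actual test and destriping steps are not reproduced):

```python
import numpy as np

def filter_coefficient(t, y, alpha_t=3.0):
    """Trend + annual least-squares fit for one coefficient series.

    A parameter survives into the reconstruction only if its estimate
    exceeds alpha_t formal standard errors -- a stand-in for the
    significance test described in the abstract.
    """
    A = np.column_stack([np.ones_like(t), t,
                         np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
    beta, res, *_ = np.linalg.lstsq(A, y, rcond=None)
    dof = len(t) - A.shape[1]
    sigma2 = res[0] / dof                      # residual variance estimate
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(A.T @ A)))
    keep = np.abs(beta) > alpha_t * se         # the statistical-filter step
    return beta * keep                         # rejected parameters -> 0

t = np.linspace(0, 5, 60)                      # five years, ~monthly
rng = np.random.default_rng(1)
# Simulated series: trend 0.8/yr, annual cosine amplitude 0.5, no sine.
y = 0.8 * t + 0.5 * np.cos(2 * np.pi * t) + rng.normal(0, 0.1, t.size)
beta = filter_coefficient(t, y)
print(beta)                                    # sine term zeroed out
```

Applied coefficient-by-coefficient, this kind of test suppresses components that are consistent with noise, which is what lets the filter remove stripes without an explicit smoothing kernel.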
A Statistical Approach to Identifying Compact Objects in X-ray Binaries
NASA Astrophysics Data System (ADS)
Vrtilek, Saeqa D.
2013-04-01
A standard approach toward statistical inference in astronomy has been the application of Principal Components Analysis (PCA) to reduce dimensionality. However, for non-linear distributions this is not always an effective approach. A non-linear technique called "diffusion maps" (Freeman et al. 2009; Richards et al. 2009; Lee & Waterman 2010), a robust eigenmode-based framework, allows retention of the full "connectivity" of the data points. Through this approach we define the highly non-linear geometry of X-ray binaries in a color-color-intensity diagram in an efficient and statistically sound manner, providing a broadly applicable means of distinguishing between black holes and neutron stars in Galactic X-ray binaries.
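A generic diffusion-map embedding (the textbook construction, not the specific pipeline of the cited papers) can be sketched as: build Gaussian affinities, row-normalize them into a Markov matrix, and embed with the leading non-trivial eigenvectors:

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2, steps=1):
    """Minimal diffusion-map embedding of the rows of X."""
    # Gaussian affinities between all pairs of points.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)       # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Drop the trivial eigenvector (eigenvalue 1); scale by eigenvalue^steps.
    lam = vals.real[order][1:n_components + 1]
    psi = vecs.real[:, order][:, 1:n_components + 1]
    return psi * lam ** steps

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))                   # stand-in for 3-D color-color-intensity points
Y = diffusion_map(X)
print(Y.shape)
```

Unlike PCA, distances in the embedded space reflect connectivity along the data manifold rather than straight-line Euclidean separation, which is why the method copes with the curved tracks X-ray binaries trace in the color-color-intensity diagram.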
Schork, Andrew J.; Wang, Yunpeng; Thompson, Wesley K.; Dale, Anders M.; Andreassen, Ole A.
2017-01-01
Schizophrenia is a complex disorder with high heritability. Recent findings from several large genetic studies suggest a large number of risk variants are involved (i.e., schizophrenia is a polygenic disorder) and analytic approaches could be tailored for this scenario. Novel statistical approaches for analyzing GWAS data have recently been developed to be more sensitive to polygenic traits. These approaches have provided intriguing new insights into neurobiological pathways and support for the involvement of regulatory mechanisms, neurotransmission (glutamate, dopamine, GABA), and immune and neurodevelopmental pathways. Integrating the emerging statistical genetics evidence with sound neurobiological experiments will be a critical, and challenging, next step in deciphering the specific disease mechanisms of schizophrenia. PMID:26555806
Run-Length and Edge Statistics Based Approach for Image Splicing Detection
NASA Astrophysics Data System (ADS)
Dong, Jing; Wang, Wei; Tan, Tieniu; Shi, Yun Q.
In this paper, a simple but efficient approach for blind image splicing detection is proposed. Image splicing is a common and fundamental operation used for image forgery. The detection of image splicing is a preliminary but desirable study for image forensics. Passive detection approaches of image splicing are usually regarded as pattern recognition problems based on features which are sensitive to splicing. In the proposed approach, we analyze the discontinuity of image pixel correlation and coherency caused by splicing in terms of image run-length representation and sharp image characteristics. The statistical features extracted from image run-length representation and image edge statistics are used for splicing detection. The support vector machine (SVM) is used as the classifier. Our experimental results demonstrate that the two proposed features outperform existing ones both in detection accuracy and computational complexity.
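As a toy illustration of run-length features (a deliberately simplified stand-in for the paper's feature set; the edge statistics and SVM stages are omitted):

```python
import numpy as np

def run_lengths(row):
    """Lengths of maximal runs of equal values in a 1-D array."""
    change = np.flatnonzero(np.diff(row)) + 1
    bounds = np.concatenate([[0], change, [row.size]])
    return np.diff(bounds)

def run_length_features(img, thresh=0):
    """Mean and variance of horizontal run lengths of a thresholded image."""
    binary = (img > thresh).astype(np.int8)
    runs = np.concatenate([run_lengths(r) for r in binary])
    return np.array([runs.mean(), runs.var()])

# Tiny hypothetical binary image: rows have runs [2, 3, 1] and [4, 2].
img = np.array([[0, 0, 1, 1, 1, 0],
                [1, 1, 1, 1, 0, 0]])
print(run_length_features(img))
```

Splicing tends to break pixel correlation at the pasted boundary, shortening runs there, which is why such simple statistics carry discriminative signal for the classifier.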
Comparing geological and statistical approaches for element selection in sediment tracing research
NASA Astrophysics Data System (ADS)
Laceby, J. Patrick; McMahon, Joe; Evrard, Olivier; Olley, Jon
2015-04-01
Elevated suspended sediment loads reduce reservoir capacity and significantly increase the cost of operating water treatment infrastructure, making the management of sediment supply to reservoirs of increasing importance. Sediment fingerprinting techniques can be used to determine the relative contributions of different sources of sediment accumulating in reservoirs. The objective of this research is to compare geological and statistical approaches to element selection for sediment fingerprinting modelling. Time-integrated samplers (n=45) were used to obtain source samples from four major subcatchments flowing into the Baroon Pocket Dam in South East Queensland, Australia. The geochemistry of potential sources was compared to the geochemistry of sediment cores (n=12) sampled in the reservoir. The geochemical approach selected elements for modelling that provided expected, observed and statistical discrimination between sediment sources. Two statistical approaches selected elements for modelling with the Kruskal-Wallis H-test and Discriminant Function Analysis (DFA). In particular, two different significance levels (0.05 & 0.35) for the DFA were included to investigate the importance of element selection on modelling results. A distribution model determined the relative contributions of different sources to sediment sampled in the Baroon Pocket Dam. Elemental discrimination was expected between one subcatchment (Obi Obi Creek) and the remaining subcatchments (Lexys, Falls and Bridge Creek). Six major elements were expected to provide discrimination. Of these six, only Fe2O3 and SiO2 provided expected, observed and statistical discrimination. Modelling results with this geological approach indicated 36% (+/- 9%) of sediment sampled in the reservoir cores was from mafic-derived sources and 64% (+/- 9%) was from felsic-derived sources. The geological and the first statistical approach (DFA0.05) differed by only 1% (σ 5%) for 5 out of 6 model groupings with only
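The Kruskal-Wallis screening step can be sketched on hypothetical element data (the element names, group means, and sample counts below are invented for illustration):

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(3)
# Hypothetical concentrations for 4 source subcatchments (15 samples each):
# "Fe2O3" discriminates between sources, "K2O" does not.
sources = {
    "Fe2O3": [rng.normal(m, 1.0, 15) for m in (5.0, 8.0, 11.0, 14.0)],
    "K2O":   [rng.normal(2.0, 1.0, 15) for _ in range(4)],
}

# Keep only elements whose between-source differences pass the H-test.
selected = []
for element, groups in sources.items():
    h_stat, p = kruskal(*groups)
    if p < 0.05:
        selected.append(element)
print(selected)
```

Elements passing the screen would then feed the DFA stage; the abstract's point is that the chosen significance level at that stage can shift which elements (and hence which source apportionments) survive.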
NASA Astrophysics Data System (ADS)
Ruggles, Adam J.
2015-11-01
This paper presents improved statistical insight regarding the self-similar scalar mixing process of atmospheric hydrogen jets and the downstream region of under-expanded hydrogen jets. Quantitative planar laser Rayleigh scattering imaging is used to probe both jets. The self-similarity of statistical moments up to the sixth order (beyond the second order established in the literature) is documented in both cases. This is achieved using a novel self-similar normalization method that facilitated a degree of statistical convergence typically limited to continuous, point-based measurements. This demonstrates that image-based measurements of a limited number of samples can be used for self-similar scalar mixing studies. Both jets exhibit the same radial trends of these moments, demonstrating that advanced atmospheric self-similarity can be applied in the analysis of under-expanded jets. Self-similar histograms away from the centerline are shown to be the combination of two distributions. The first is attributed to turbulent mixing. The second, a symmetric Poisson-type distribution centered on zero mass fraction, progressively becomes the dominant and eventually sole distribution at the edge of the jet. This distribution is attributed to shot-noise-affected pure air measurements, rather than a diffusive superlayer at the jet boundary. This conclusion is reached after a rigorous measurement uncertainty analysis and inspection of pure air data collected with each hydrogen data set. A threshold based upon the measurement noise analysis is used to separate the turbulent and pure air data, and thus estimate intermittency. Beta-distributions (four parameters) are used to accurately represent the turbulent distribution moments. This combination of measured intermittency and four-parameter beta-distributions constitutes a new, simple approach to model scalar mixing. Comparisons between global moments from the data and moments calculated using the proposed model show excellent
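The threshold-then-fit idea can be sketched on simulated data. The mixture proportions, noise level, and beta parameters below are invented; the real analysis works on Rayleigh-imaging mass fractions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical samples at one radial station: 70% turbulent mass
# fractions (beta-distributed) and 30% pure-air samples whose shot
# noise is symmetric about zero mass fraction.
turb = rng.beta(2.0, 5.0, 7000)
air = rng.normal(0.0, 0.01, 3000)
samples = np.concatenate([turb, air])

# Separate turbulent data from noise using a threshold derived from the
# pure-air noise level (3 sigma here), then estimate intermittency.
noise_threshold = 3 * 0.01
turbulent = samples[samples > noise_threshold]
intermittency = turbulent.size / samples.size

# Four-parameter beta fit to the turbulent conditional distribution
# (location and scale fixed to the physical [0, 1] mass-fraction range).
a, b, loc, scale = stats.beta.fit(turbulent, floc=0.0, fscale=1.0)
print(intermittency, a, b)
```

The fitted beta parameters plus the measured intermittency are exactly the ingredients the proposed mixing model needs to reproduce the global moments.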
Design and contents of an advanced distance-based statistics course for a PhD in nursing program.
Azuero, Andres; Wilbanks, Bryan; Pryor, Erica
2013-01-01
Doctoral nursing students and researchers are expected to understand, critique, and conduct research that uses advanced quantitative methodology. The authors describe the design and contents of a distance-based course in multivariate statistics for PhD students in nursing and health administration, compare the design to recommendations found in the literature for distance-based statistics education, and compare the course contents to a tabulation of the methodologies used in a sample of recently published quantitative dissertations in nursing. The authors conclude with a discussion based on these comparisons as well as with experiences in course implementation and directions for future course development.
A Challenging Surgical Approach to Locally Advanced Primary Urethral Carcinoma
Lucarelli, Giuseppe; Spilotros, Marco; Vavallo, Antonio; Palazzo, Silvano; Miacola, Carlos; Forte, Saverio; Matera, Matteo; Campagna, Marcello; Colamonico, Ottavio; Schiralli, Francesco; Sebastiani, Francesco; Di Cosmo, Federica; Bettocchi, Carlo; Di Lorenzo, Giuseppe; Buonerba, Carlo; Vincenti, Leonardo; Ludovico, Giuseppe; Ditonno, Pasquale; Battaglia, Michele
2016-01-01
Primary urethral carcinoma (PUC) is a rare and aggressive cancer, often underdetected and consequently unsatisfactorily treated. We report a case of advanced PUC, surgically treated with combined approaches. A 47-year-old man underwent transurethral resection of a urethral lesion with histological evidence of a poorly differentiated squamous cancer of the bulbomembranous urethra. Computed tomography (CT) and bone scans excluded metastatic spread of the disease but showed involvement of both corpora cavernosa (cT3N0M0). A radical surgical approach was advised, but the patient refused this and opted for chemotherapy. After 17 months the patient was referred to our department because of a fistula in the scrotal area. CT scan showed bilateral metastatic disease in the inguinal, external iliac, and obturator lymph nodes as well as the involvement of both corpora cavernosa. Additionally, a fistula originating from the right corpus cavernosum extended to the scrotal skin. At this stage, the patient accepted the surgical treatment, consisting of different phases. Phase I: Radical extraperitoneal cystoprostatectomy with iliac-obturator lymph node dissection. Phase II: Creation of a urinary diversion through a Bricker ileal conduit. Phase III: Repositioning of the patient in the lithotomy position for an inverted Y skin incision, total penectomy, fistula excision, and "en bloc" removal of surgical specimens including the bladder, through the perineal breach. Phase IV: Right inguinal lymphadenectomy. The procedure lasted 9-and-a-half hours, was complication-free, and intraoperative blood loss was 600 mL. The patient was discharged 8 days after surgery. Pathological examination documented a T4N2M0 tumor. The clinical situation was stable during the first 3 months postoperatively, but then metastatic spread occurred, not responsive to adjuvant chemotherapy, which led to the patient's death 6 months after surgery. Patients with advanced stage tumors of
ERIC Educational Resources Information Center
Heaviside, Sheila; And Others
The "Survey of Advanced Telecommunications in U.S. Public Elementary and Secondary Schools, Fall 1996" collected information from 911 regular United States public elementary and secondary schools regarding the availability and use of advanced telecommunications, and in particular, access to the Internet, plans to obtain Internet access, use of…
Organic and inorganic nitrogen dynamics in soil - advanced Ntrace approach
NASA Astrophysics Data System (ADS)
Andresen, Louise C.; Björsne, Anna-Karin; Bodé, Samuel; Klemedtsson, Leif; Boeckx, Pascal; Rütting, Tobias
2016-04-01
Depolymerization of soil organic nitrogen (SON) into monomers (e.g. amino acids) is currently thought to be the rate-limiting step for the terrestrial nitrogen (N) cycle. The production of free amino acids (AA) is followed by AA mineralization to ammonium, which is an important fraction of the total N mineralization. Accurate assessment of depolymerization and AA mineralization rates is important for a better understanding of the rate-limiting steps. Recent developments in the 15N pool dilution techniques, based on 15N labelling of AAs, allow quantifying gross rates of SON depolymerization and AA mineralization (Wanek et al., 2010; Andresen et al., 2015) in addition to gross N mineralization. However, it is well known that the 15N pool dilution approach has limitations; in particular, gross rates of consumption processes (e.g. AA mineralization) are overestimated. This has consequences for evaluating the rate-limiting step of the N cycle, as well as for estimating the nitrogen use efficiency (NUE). Here we present a novel 15N tracing approach, which combines 15N-AA labelling with an advanced version of the 15N tracing model Ntrace (Müller et al., 2007), explicitly accounting for AA turnover in soil. This approach (1) provides a more robust quantification of gross depolymerization and AA mineralization and (2) suggests a more realistic estimate for the microbial NUE of amino acids. Advantages of the new 15N tracing approach will be discussed and further improvements will be identified. References: Andresen, L.C., Bodé, S., Tietema, A., Boeckx, P., and Rütting, T.: Amino acid and N mineralization dynamics in heathland soil after long-term warming and repetitive drought, SOIL, 1, 341-349, 2015. Müller, C., Rütting, T., Kattge, J., Laughlin, R. J., and Stevens, R. J.: Estimation of parameters in complex 15N tracing models via Monte Carlo sampling, Soil Biology & Biochemistry, 39, 715-726, 2007. Wanek, W., Mooshammer, M., Blöchl, A., Hanreich, A., and Richter
NASA Astrophysics Data System (ADS)
Patel, Piyushkumar N.; Quaas, Johannes; Kumar, Raj
2017-03-01
In a previous study (Quaas et al., 2008), the radiative forcing by anthropogenic aerosol due to aerosol-cloud interactions, RFaci, was obtained by a statistical analysis of satellite retrievals using a multilinear regression. Here we employ a new statistical approach to obtain the fitting parameters, determined using nonlinear least squares, for the relationship between planetary albedo and cloud properties and, further, for the relationship between cloud properties and aerosol optical depth. To verify the performance, the results from both statistical approaches (previous and present) were compared to results from radiative transfer simulations over three regions for different seasons. We find that the results of the new statistical approach agree well with the simulated results both over land and over ocean. The new statistical approach increases the correlation by 21-23% and reduces the error compared to the previous approach.
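The nonlinear least-squares fitting step can be sketched with SciPy's `curve_fit`. The saturating functional form and the parameter values below are hypothetical placeholders, not the relationships used in the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def cloud_vs_aod(aod, a, b):
    """Hypothetical saturating relation between a cloud-property proxy
    and aerosol optical depth (AOD) -- illustrates the nonlinear fit,
    not the study's actual functional form."""
    return a * aod / (b + aod)

rng = np.random.default_rng(5)
aod = rng.uniform(0.05, 1.0, 200)
# Synthetic "retrievals" generated with a=0.6, b=0.2 plus noise.
cloud = cloud_vs_aod(aod, 0.6, 0.2) + rng.normal(0, 0.01, aod.size)

(a_hat, b_hat), cov = curve_fit(cloud_vs_aod, aod, cloud, p0=(1.0, 1.0))
print(a_hat, b_hat)
```

Unlike a multilinear regression, the fit is iterative and needs a starting guess (`p0`), but it can represent the curvature that linear models in the earlier study could only approximate.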
Advancement in contemporary diagnostic and therapeutic approaches for rheumatoid arthritis.
Kumar, L Dinesh; Karthik, R; Gayathri, N; Sivasudha, T
2016-04-01
This review is intended to provide a summary of the pathogenesis, diagnosis, and therapies for rheumatoid arthritis. Rheumatoid arthritis (RA) is a common form of inflammatory autoimmune disease with unknown aetiology. Bone degradation, cartilage destruction, and synovial destruction are three major pathways of RA pathology. Sentinel cells, including dendritic cells, macrophages, and mast cells, bind the autoantigens and initiate inflammation of the joints. These cells further activate the immune cells on the synovial membrane by releasing inflammatory cytokines such as interleukins 1, 6, and 17. Diagnosis of this disease is a combinational approach comprising radiological imaging and assessment of blood and serology markers. The treatment of RA still remains inadequate owing to gaps in our knowledge of disease development. Non-steroidal anti-inflammatory drugs, disease-modifying anti-rheumatic drugs, and corticosteroids are the commercial drugs used to reduce pain and swelling and to suppress several disease factors. Arthroscopy can be a useful method in cases of severe degradation of joint tissues. Gene therapy is a major advancement in RA: suppressor gene loci of inflammatory mediators and matrix-degrading enzymes are inserted into the affected area to reduce disease progression. To overcome the issues arising from those therapies, such as side effects and expense, phytocompounds have been investigated and certain compounds have demonstrated anti-arthritic potential. Furthermore, certain complementary and alternative therapies such as yoga, acupuncture, massage therapy, and tai chi have also shown benefit in RA treatment.
Oxidative stress in aging: advances in proteomic approaches.
Ortuño-Sahagún, Daniel; Pallàs, Mercè; Rojas-Mayorquín, Argelia E
2014-01-01
Aging is a gradual, complex process in which cells, tissues, organs, and the whole organism itself deteriorate in a progressive and irreversible manner that, in the majority of cases, implies pathological conditions that affect the individual's Quality of Life (QOL). Although extensive research efforts in recent years have been made, the anticipation of aging and prophylactic or treatment strategies continue to experience major limitations. In this review, the focus is essentially on the compilation of the advances generated by cellular expression profile analysis through proteomics studies (two-dimensional [2D] electrophoresis and mass spectrometry [MS]), which are currently used as an integral approach to study the aging process. Additionally, the relevance of the oxidative stress factors is discussed. Emphasis is placed on postmitotic tissues, such as neuronal, muscular, and red blood cells, which appear to be those most frequently studied with respect to aging. Additionally, models for the study of aging are discussed in a number of organisms, such as Caenorhabditis elegans, senescence-accelerated probe-8 mice (SAMP8), naked mole-rat (Heterocephalus glaber), and the beagle canine. Proteomic studies in specific tissues and organisms have revealed the extensive involvement of reactive oxygen species (ROS) and oxidative stress in aging.
Wei, Julong; Xu, Shizhong
2016-02-01
Most standard QTL mapping procedures apply to populations derived from the cross of two parents. QTL detected from such biparental populations are rarely relevant to breeding programs because of the narrow genetic basis: only two alleles are involved per locus. To improve the generality and applicability of mapping results, QTL should be detected using populations initiated from multiple parents, such as the multiparent advanced generation intercross (MAGIC) populations. The greatest challenges of QTL mapping in MAGIC populations come from multiple founder alleles and control of the genetic background information. We developed a random-model methodology by treating the founder effects of each locus as random effects following a normal distribution with a locus-specific variance. We also fit a polygenic effect to the model to control the genetic background. To improve the statistical power for a scanned marker, we release the marker effect absorbed by the polygene back to the model. In contrast to the fixed-model approach, we estimate and test the variance of each locus and scan the entire genome one locus at a time using likelihood-ratio test statistics. Simulation studies showed that this method can increase statistical power and reduce type I error compared with composite interval mapping (CIM) and multiparent whole-genome average interval mapping (MPWGAIM). We demonstrated the method using a public Arabidopsis thaliana MAGIC population and a mouse MAGIC population.
MacKinnon, David P; Pirlott, Angela G
2015-02-01
Statistical mediation methods provide valuable information about underlying mediating psychological processes, but the ability to infer that the mediator variable causes the outcome variable is more complex than widely known. Researchers have recently emphasized how violating assumptions about confounder bias severely limits causal inference of the mediator to dependent variable relation. Our article describes and addresses these limitations by drawing on new statistical developments in causal mediation analysis. We first review the assumptions underlying causal inference and discuss three ways to examine the effects of confounder bias when assumptions are violated. We then describe four approaches to address the influence of confounding variables and enhance causal inference, including comprehensive structural equation models, instrumental variable methods, principal stratification, and inverse probability weighting. Our goal is to further the adoption of statistical methods to enhance causal inference in mediation studies.
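The product-of-coefficients logic behind statistical mediation can be sketched with two ordinary least-squares fits. This is an illustrative toy on simulated data (`mediation_effects` is a hypothetical helper, not the article's machinery), and it recovers the indirect effect a*b only under the very no-confounding assumptions the authors caution about.

```python
import numpy as np

def mediation_effects(x, m, y):
    """Product-of-coefficients mediation estimate via two OLS fits.

    a: effect of X on M;  b: effect of M on Y adjusting for X.
    Returns (a, b, indirect effect a*b, direct effect c').
    """
    X1 = np.column_stack([np.ones_like(x), x])      # M ~ 1 + X
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]
    X2 = np.column_stack([np.ones_like(x), x, m])   # Y ~ 1 + X + M
    coef = np.linalg.lstsq(X2, y, rcond=None)[0]
    c_prime, b = coef[1], coef[2]
    return a, b, a * b, c_prime

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
m = 0.5 * x + rng.normal(scale=0.1, size=5000)            # true a = 0.5
y = 0.7 * m + 0.2 * x + rng.normal(scale=0.1, size=5000)  # true b = 0.7, c' = 0.2
a, b, ab, c_prime = mediation_effects(x, m, y)
print(abs(ab - 0.35) < 0.05)  # → True (indirect effect near 0.5 * 0.7)
```

With an unmeasured confounder of the M-Y relation, the same code would return a biased a*b, which is precisely the limitation the reviewed causal methods address.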
NASA Technical Reports Server (NTRS)
Benediktsson, Jon A.; Swain, Philip H.; Ersoy, Okan K.
1990-01-01
Neural network learning procedures and statistical classification methods are applied and compared empirically in classification of multisource remote sensing and geographic data. Statistical multisource classification by means of a method based on Bayesian classification theory is also investigated and modified. The modifications permit control of the influence of the data sources involved in the classification process. Reliability measures are introduced to rank the quality of the data sources. The data sources are then weighted according to these rankings in the statistical multisource classification. Four data sources are used in experiments: Landsat MSS data and three forms of topographic data (elevation, slope, and aspect). Experimental results show that the two different approaches have unique advantages and disadvantages in this classification application.
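Reliability-based weighting of sources can be sketched as a weighted sum of per-source class log-likelihoods: weight 0 ignores a source, weight 1 trusts it fully. The sources, class parameters and weights below are invented for illustration; the paper's actual modification of Bayesian multisource classification is more elaborate.

```python
import numpy as np
from scipy import stats

def weighted_multisource_classify(obs, class_params, weights):
    """Toy statistical multisource classification: per-source Gaussian
    log-likelihoods are combined with reliability weights, and the
    class with the highest weighted score wins."""
    n_classes = len(class_params[0])
    scores = np.zeros(n_classes)
    for x, params, w in zip(obs, class_params, weights):
        for c, (mu, sd) in enumerate(params):
            scores[c] += w * stats.norm.logpdf(x, mu, sd)
    return int(np.argmax(scores))

# two hypothetical sources (e.g., an MSS band and elevation), two classes
class_params = [
    [(10.0, 2.0), (20.0, 2.0)],      # source 1: per-class mean/std
    [(500.0, 50.0), (800.0, 50.0)],  # source 2: per-class mean/std
]
# source 2 deemed less reliable, so it is downweighted
print(weighted_multisource_classify([11.0, 790.0], class_params, [1.0, 0.3]))  # → 0
```

With equal weights the same observation is assigned to class 1, showing how the reliability ranking changes the decision.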
Statistical Downscaling of Large-Scale Wind Signatures Using a Two-Step Approach
NASA Astrophysics Data System (ADS)
Haas, R.; Born, K.; Georgiadis, A.; Karremann, M. K.; Pinto, J. G.
2012-04-01
Downscaling global scale climate data is an important issue in order to obtain the high-resolution data desired for most applications in meteorology and hydrology and to gain a better understanding of local climate variability. Statistical downscaling transforms data from large to local scale by relating point-scale climate observations, climate model outputs and high-resolution surface data. In this study, a statistical downscaling approach is used in combination with dynamical downscaling in order to produce gust characteristics of wind storms on a small-scale grid over Europe. The idea is to relate large-scale data, regional climate model (RCM) data and observations by transfer functions, which are calibrated using physically consistent features of the RCM model simulations. In comparison to purely dynamical downscaling by a regional model, such a statistical downscaling approach has several advantages. The computing time is much shorter and, therefore, such an approach can easily be applied to very large numbers of windstorm cases provided e.g. by long-term GCM simulations, like millennium runs. The first step of the approach constructs a relation between observations and COSMO-CLM signatures with the aim of calibrating the modelled signatures to the observations in terms of model output statistics. For this purpose, parameters of the theoretical Weibull distribution, estimated from the observations at each test site, are interpolated to a 7 km RCM grid with Gaussian weights and are compared to Weibull parameters from the COSMO-CLM modelled gust distributions. This allows for an evaluation and correction of gust signatures by quantile mapping. The second step links the RCM wind signatures and large-scale data by a multiple linear regression (MLR) model. One model per grid point is trained using the COSMO-CLM simulated and MOS-corrected gusts for selected wind storm events as predictands, and the corresponding NCEP reanalysis wind speeds of the surrounding NCEP grid
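The quantile-mapping correction in the first step can be illustrated with a minimal empirical variant. The study fits theoretical Weibull distributions; here plain empirical quantiles stand in, and all gust data are simulated.

```python
import numpy as np

def quantile_map(model_vals, model_ref, obs_ref):
    """Empirical quantile mapping: correct model values so that their
    distribution matches the observed reference distribution.

    Each model value is assigned its quantile within the model
    reference sample, then mapped to the same quantile of the
    observed reference sample."""
    qs = np.linspace(0, 1, 101)
    model_q = np.quantile(model_ref, qs)
    obs_q = np.quantile(obs_ref, qs)
    ranks = np.interp(model_vals, model_q, qs)  # invert model CDF
    return np.interp(ranks, qs, obs_q)          # apply observed quantile fn

rng = np.random.default_rng(1)
model_ref = rng.gamma(2.0, 5.0, size=10000)    # biased simulated gusts
obs_ref = rng.weibull(2.0, size=10000) * 12.0  # "observed" gusts
corrected = quantile_map(model_ref, model_ref, obs_ref)
print(abs(np.median(corrected) - np.median(obs_ref)) < 0.5)  # → True
```

After mapping, the corrected gusts share the observed median (and the rest of the distribution), which is the point of the model-output-statistics calibration.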
A statistical approach of fatigue crack detection for a structural hotspot
NASA Astrophysics Data System (ADS)
Jin, Pei; Zhou, Li
2012-04-01
This work focuses on an unsupervised, data-driven statistical approach to detect and monitor fatigue crack growth in lug joint samples using surface-mounted piezoelectric sensors. Early and faithful detection of fatigue cracks in a lug joint can guide preventive measures, thus avoiding possible fatal structural failure. The on-line damage state at any given fatigue cycle is estimated using a damage index approach, as the dynamical properties of a structure change with the initiation of a new crack or the growth of an existing crack. Using the measurements performed on an intact lug joint as baseline, damage indices are evaluated from the frequency response of the lug joint with an unknown damage state. As the damage indices are evaluated, a Bayesian analysis is performed and a statistical metric is evaluated to identify the damage state (say, crack length).
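A damage index of the kind described, comparing the frequency response of the current state against the intact baseline, might be sketched as a relative norm of the FRF change. The specific index used in the paper is not given here; `damage_index`, the frequency band and the simulated FRFs are illustrative assumptions.

```python
import numpy as np

def damage_index(frf_baseline, frf_current):
    """Frequency-domain damage index: relative change of the FRF
    magnitude with respect to the intact (baseline) structure."""
    num = np.linalg.norm(np.abs(frf_current) - np.abs(frf_baseline))
    den = np.linalg.norm(np.abs(frf_baseline))
    return num / den

freqs = np.linspace(10e3, 100e3, 512)            # Hz, hypothetical band
baseline = np.exp(-((freqs - 50e3) / 8e3) ** 2)  # intact resonance at 50 kHz
# a crack shifts, broadens and damps the resonance (simulated)
damaged = 0.8 * np.exp(-((freqs - 47e3) / 9e3) ** 2)
di_intact = damage_index(baseline, baseline)
di_damaged = damage_index(baseline, damaged)
print(di_intact == 0.0, di_damaged > 0.1)  # → True True
```

Tracking such an index over fatigue cycles, and placing a statistical threshold on it, is the essence of the unsupervised monitoring scheme.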
Windblown sand saltation: A statistical approach to fluid threshold shear velocity
NASA Astrophysics Data System (ADS)
Raffaele, Lorenzo; Bruno, Luca; Pellerey, Franco; Preziosi, Luigi
2016-12-01
The reliable prediction in probabilistic terms of the consequences of aeolian events related to sand transport phenomena is a key element for human activities in arid regions. The threshold shear velocity generating sand lifting is a key component of such a prediction. It suffers from the effect of uncertainties of different origin, such as those related to the physical phenomena, measurement procedures, and modelling. Semi-empirical models are often fitted to a small amount of data, while recent probabilistic models need the probability distributions of several random variables. Triggered by this motivation, this paper proposes a purely statistical approach to the fluid threshold shear velocity for sand saltation, treated as a single comprehensive random variable. A data set is derived from previously published studies. Estimates of the conditional probability distributions of threshold shear velocity for given grain diameters are given. The obtained statistical moments are critically compared to some deterministic semi-empirical models refitted to the same collected data. The proposed statistical approach allows one to obtain high-order statistics useful for practical purposes.
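Treating the threshold shear velocity as a single random variable and fitting a distribution to pooled data can be sketched as follows. The grain-size conditioning and the actual data set are omitted; the Weibull parameters below are invented for illustration, not taken from the paper.

```python
from scipy import stats

# hypothetical fluid threshold shear velocities (m/s) for one grain size
u_t = stats.weibull_min.rvs(c=6.0, scale=0.30, size=500, random_state=2)

# fit a two-parameter Weibull (location fixed at zero)
shape, loc, scale = stats.weibull_min.fit(u_t, floc=0)

# moments of the fitted distribution, usable for probabilistic design
mean_ut = stats.weibull_min.mean(shape, loc=loc, scale=scale)
std_ut = stats.weibull_min.std(shape, loc=loc, scale=scale)
print(round(scale, 2))  # → 0.3
```

Higher-order statistics (skewness, exceedance probabilities for a design wind event, etc.) follow directly from the fitted distribution, which is the practical payoff the abstract points to.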
A Statistical-Physics Approach to Language Acquisition and Language Change
NASA Astrophysics Data System (ADS)
Cassandro, Marzio; Collet, Pierre; Galves, Antonio; Galves, Charlotte
1999-02-01
The aim of this paper is to explain why Statistical Physics can help understanding two related linguistic questions. The first question is how to model first language acquisition by a child. The second question is how language change proceeds in time. Our approach is based on a Gibbsian model for the interface between syntax and prosody. We also present a simulated annealing model of language acquisition, which extends the Triggering Learning Algorithm recently introduced in the linguistic literature.
A Statistical Approach to WindSat Ocean Surface Wind Vector Retrieval
2006-01-01
Report documentation page (report date: January 2006; title: A Statistical Approach to WindSat Ocean Surface Wind Vector Retrieval). Only fragments of the abstract are recoverable: a second technique is used to retrieve wind direction, for which the signal is severely nonlinear (in fact non-unique) and depends on wind speed; the regression training dataset is quality controlled through exclusion of every condition listed and of SDRs that are...
Carboni, Michele; Gianneo, Andrea; Giglio, Marco
2015-07-01
This research investigates a Lamb-wave-based structural health monitoring approach using an out-of-phase actuation of a pair of piezoceramic transducers at low frequency. The target is a typical quasi-isotropic carbon fibre reinforced polymer aeronautical laminate subjected to artificial delaminations (via Teflon patches) and natural ones (via suitable low-velocity drop-weight impact tests). The performance and main influencing factors of such an approach are studied through a Design of Experiments statistical method, considering both Pulse Echo and Pitch Catch configurations of the PZT sensors. Results show that some factors and their interactions can effectively influence the detection of delamination-like damage.
Korthauer, Keegan D; Chu, Li-Fang; Newton, Michael A; Li, Yuan; Thomson, James; Stewart, Ron; Kendziorski, Christina
2016-10-25
The ability to quantify cellular heterogeneity is a major advantage of single-cell technologies. However, statistical methods often treat cellular heterogeneity as a nuisance. We present a novel method to characterize differences in expression in the presence of distinct expression states within and among biological conditions. We demonstrate that this framework can detect differential expression patterns under a wide range of settings. Compared to existing approaches, this method has higher power to detect subtle differences in gene expression distributions that are more complex than a mean shift, and can characterize those differences. The freely available R package scDD implements the approach.
Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie
2013-01-01
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is modest. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials.
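The minimum p-value permutation test can be sketched directly. This is a two-sample toy with a t-test and a Wilcoxon-Mann-Whitney test as the candidate statistics; the paper's survival-trial setting with the logrank test is analogous but not reproduced here.

```python
import numpy as np
from scipy import stats

def min_p_permutation(x, y, stats_fns, n_perm=500, seed=0):
    """Permutation test based on the minimum p-value over several
    candidate test statistics (two-sample setting).

    The observed minimum p-value is referred to the permutation
    distribution of minimum p-values, so the type I error rate is
    controlled despite using several statistics."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    n = len(x)

    def min_p(a, b):
        return min(fn(a, b) for fn in stats_fns)

    observed = min_p(x, y)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if min_p(perm[:n], perm[n:]) <= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

t_p = lambda a, b: stats.ttest_ind(a, b).pvalue
w_p = lambda a, b: stats.mannwhitneyu(a, b).pvalue

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 40)
y = rng.normal(1.0, 1.0, 40)  # clear one-SD shift: should reject
p = min_p_permutation(x, y, [t_p, w_p])
print(p < 0.05)  # → True
```

Because the permutation distribution is built from the same minimum-p statistic, no multiplicity correction is needed beyond the permutation itself.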
NASA Astrophysics Data System (ADS)
Tsutsumi, Morito; Seya, Hajime
2009-12-01
This study discusses the theoretical foundation of the application of spatial hedonic approaches—the hedonic approach employing spatial econometrics and/or spatial statistics—to benefits evaluation. The study highlights the limitations of the spatial econometrics approach since it uses a spatial weight matrix that is not employed by the spatial statistics approach. Further, the study presents empirical analyses by applying the Spatial Autoregressive Error Model (SAEM), which is based on the spatial econometrics approach, and the Spatial Process Model (SPM), which is based on the spatial statistics approach. SPMs are conducted based on both isotropy and anisotropy and applied to different mesh sizes. The empirical analysis reveals that the estimated benefits are quite different, especially between isotropic and anisotropic SPM and between isotropic SPM and SAEM; the estimated benefits are similar for SAEM and anisotropic SPM. The study demonstrates that the mesh size does not affect the estimated amount of benefits. Finally, the study provides a confidence interval for the estimated benefits and raises an issue with regard to benefit evaluation.
A Statistical Approach for the Concurrent Coupling of Molecular Dynamics and Finite Element Methods
NASA Technical Reports Server (NTRS)
Saether, E.; Yamakov, V.; Glaessgen, E.
2007-01-01
Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, increasing the size of the MD domain quickly presents intractable computational demands. A robust approach to surmount this computational limitation has been to unite continuum modeling procedures such as the finite element method (FEM) with MD analyses thereby reducing the region of atomic scale refinement. The challenging problem is to seamlessly connect the two inherently different simulation techniques at their interface. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the typical boundary value problem used to define a coupled domain. The method uses statistical averaging of the atomistic MD domain to provide displacement interface boundary conditions to the surrounding continuum FEM region, which, in return, generates interface reaction forces applied as piecewise constant traction boundary conditions to the MD domain. The two systems are computationally disconnected and communicate only through a continuous update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM) as opposed to a direct coupling method where interface atoms and FEM nodes are individually related. The methodology is inherently applicable to three-dimensional domains, avoids discretization of the continuum model down to atomic scales, and permits arbitrary temperatures to be applied.
Systems Thinking: An Approach for Advancing Workplace Information Literacy
ERIC Educational Resources Information Center
Somerville, Mary M.; Howard, Zaana
2008-01-01
As the importance of information literacy has gained increased recognition, so too have academic library professionals intensified their efforts to champion, activate, and advance these capabilities in others. To date, however, little attention has focused on advancing these essential competencies amongst practitioner advocates. This paper helps…
A New Approach to the Detection and Statistical Classification of Ca2+ Sparks
Bányász, Tamás; Chen-Izu, Ye; Balke, C. W.; Izu, Leighton T.
2007-01-01
The availability of high-speed, two-dimensional (2-D) confocal microscopes and the expanding armamentarium of fluorescent probes presents unprecedented opportunities and new challenges for studying the spatial and temporal dynamics of cellular processes. The need to remove subjectivity from the detection process, the difficulty of the human eye to detect subtle changes in fluorescence in these 2-D images, and the large volume of data produced by these confocal microscopes call for the need to develop algorithms to automatically mark the changes in fluorescence. These fluorescence signal changes are often subtle, so the statistical estimate of the likelihood that the detected signal is not noise is an integral part of the detection algorithm. This statistical estimation is fundamental to our new approach to detection; in earlier Ca2+ spark detectors, this statistical assessment was incidental to detection. Importantly, the use of the statistical properties of the signal local to the spark, instead of over the whole image, reduces the false positive and false negative rates. We developed an automatic spark detection algorithm based on these principles and used it to detect sparks on an inhomogeneous background of transverse tubule-labeled rat ventricular cells. Because of the large region of the cell surveyed by the confocal microscope, we can detect a large enough number of sparks to measure the dynamic changes in spark frequency in individual cells. We also found, in contrast to earlier results, that cardiac sparks are spatially symmetric. This new approach puts the detection of fluorescent signals on a firm statistical foundation. PMID:17400702
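The key idea, thresholding against statistics computed local to each candidate spark rather than over the whole image, can be sketched with moving-window means and standard deviations. The window size and threshold factor `k` below are illustrative assumptions, not the authors' values.

```python
import numpy as np
from scipy import ndimage

def detect_sparks(img, size=15, k=3.5):
    """Flag pixels brighter than (local mean + k * local std), so the
    noise estimate adapts to an inhomogeneous background."""
    local_mean = ndimage.uniform_filter(img, size)
    local_sq = ndimage.uniform_filter(img ** 2, size)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0))
    return img > local_mean + k * local_std

rng = np.random.default_rng(4)
img = rng.normal(100.0, 5.0, (128, 128))  # noisy background
img[:, 64:] += 50.0                       # inhomogeneous baseline
img[30, 30] += 60.0                       # a "spark" on the dim side
img[90, 100] += 60.0                      # a "spark" on the bright side
mask = detect_sparks(img)
print(bool(mask[30, 30]), bool(mask[90, 100]))  # → True True
```

A global threshold tuned to the bright half would miss or over-call events on the dim half; the local statistics keep both false positive and false negative rates down, as the abstract argues.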
Halpin, Peter F; Stam, Henderikus J
2006-01-01
The application of statistical testing in psychological research over the period of 1940-1960 is examined in order to address psychologists' reconciliation of the extant controversy between the Fisher and Neyman-Pearson approaches. Textbooks of psychological statistics and the psychological journal literature are reviewed to examine the presence of what Gigerenzer (1993) called a hybrid model of statistical testing. Such a model is present in the textbooks, although the mathematically incomplete character of this model precludes the appearance of a similarly hybridized approach to statistical testing in the research literature. The implications of this hybrid model for psychological research and the statistical testing controversy are discussed.
Improving statistical keyword detection in short texts: Entropic and clustering approaches
NASA Astrophysics Data System (ADS)
Carretero-Campos, C.; Bernaola-Galván, P.; Coronado, A. V.; Carpena, P.
2013-03-01
In the last years, two successful approaches have been introduced to tackle the problem of statistical keyword detection in a text without the use of external information: (i) the entropic approach, where Shannon’s entropy of information is used to quantify the information content of the sequence of occurrences of each word in the text; and (ii) the clustering approach, which links the heterogeneity of the spatial distribution of a word in the text (clustering) with its relevance. In this paper, we first present some modifications to both techniques which improve their results. Then, we propose new metrics to evaluate the performance of keyword detectors based specifically on the needs of a typical user, and we employ them to find out which approach performs better. Although both approaches work well in long texts, we find that, in general, measures based on word clustering perform at least as well as the entropic measure, which requires a convenient partition of the text to be applied, such as the chapters of a book. For the latter approach we also show that the chosen partition of the text strongly affects its results. Finally, we focus on short texts, a case of high practical importance, such as short reports, web pages, scientific articles, etc. We show that the performance of word-clustering measures is also good in generic short texts, since these measures are able to discriminate the degree of relevance of low-frequency words better than the entropic approach.
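The entropic measure can be sketched as the normalized Shannon entropy of a word's occurrence counts across a partition of the text. The exact measure and normalization in the paper differ; this toy only shows why clustered keywords score low while evenly spread function words score near 1.

```python
import math

def occurrence_entropy(text, word, n_parts=8):
    """Normalized Shannon entropy of a word's counts over equal
    partitions of the text. Relevant (clustered) words have LOW
    entropy; uniformly spread words have entropy near 1."""
    tokens = text.lower().split()
    step = max(1, len(tokens) // n_parts)
    counts = [tokens[i * step:(i + 1) * step].count(word)
              for i in range(n_parts)]
    total = sum(counts)
    if total == 0:
        return 1.0
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(n_parts)

# toy "text": the keyword 'spark' appears only in one section,
# while 'the' is spread uniformly throughout
text = ("the cell imaging background noise " * 40
        + "the spark spark detection spark threshold " * 10
        + "the final summary discussion remarks " * 40)
print(occurrence_entropy(text, "spark") < occurrence_entropy(text, "the"))  # → True
```

The dependence on the chosen partition (`n_parts` and where the boundaries fall) is exactly the sensitivity the authors highlight for short texts.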
Devarajan, Karthik; Wang, Guoli; Ebrahimi, Nader
2015-04-01
Non-negative matrix factorization (NMF) is a powerful machine learning method for decomposing a high-dimensional non-negative matrix V into the product of two non-negative matrices, W and H, such that V ≈ WH. It has been shown to yield a parts-based, sparse representation of the data. NMF has been successfully applied in a variety of areas such as natural language processing, neuroscience, information retrieval, image processing, speech recognition and computational biology for the analysis and interpretation of large-scale data. There has also been simultaneous development of a related statistical latent class modeling approach, namely, probabilistic latent semantic indexing (PLSI), for analyzing and interpreting co-occurrence count data arising in natural language processing. In this paper, we present a generalized statistical approach to NMF and PLSI based on Rényi's divergence between two non-negative matrices, stemming from the Poisson likelihood. Our approach unifies various competing models and provides a unique theoretical framework for these methods. We propose a unified algorithm for NMF and provide a rigorous proof of monotonicity of multiplicative updates for W and H. In addition, we generalize the relationship between NMF and PLSI within this framework. We demonstrate the applicability and utility of our approach as well as its superior performance relative to existing methods using real-life and simulated document clustering data.
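The Poisson-likelihood case corresponds to the generalized Kullback-Leibler (I-)divergence, one member of the Rényi family the paper considers; the classic multiplicative updates for it can be sketched as follows (a minimal illustration, not the paper's unified algorithm).

```python
import numpy as np

def nmf_kl(V, rank, n_iter=200, seed=0):
    """Multiplicative updates minimizing the generalized KL divergence
    D(V || WH), the Poisson-likelihood case linking NMF and PLSI."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    eps = 1e-12
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H

def kl_div(V, WH):
    eps = 1e-12
    return float(np.sum(V * np.log((V + eps) / (WH + eps)) - V + WH))

# simulated count matrix with true rank-2 Poisson structure
rng = np.random.default_rng(5)
V = rng.poisson(rng.random((30, 2)) @ (5 * rng.random((2, 40)))).astype(float)
W, H = nmf_kl(V, rank=2)
print(kl_div(V, W @ H) < kl_div(V, np.full_like(V, V.mean())))  # → True
```

The updates keep W and H non-negative by construction, and the paper's monotonicity proof guarantees the divergence never increases from one iteration to the next.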
Harrigan, George G; Harrison, Jay M
2012-01-01
New transgenic (GM) crops are subjected to extensive safety assessments that include compositional comparisons with conventional counterparts as a cornerstone of the process. The influence of germplasm, location, environment, and agronomic treatments on compositional variability is, however, often obscured in these pair-wise comparisons. Furthermore, classical statistical significance testing can often provide an incomplete and over-simplified summary of highly responsive variables such as crop composition. In order to more clearly describe the influence of the numerous sources of compositional variation we present an introduction to two alternative but complementary approaches to data analysis and interpretation. These include i) exploratory data analysis (EDA) with its emphasis on visualization and graphics-based approaches and ii) Bayesian statistical methodology that provides easily interpretable and meaningful evaluations of data in terms of probability distributions. The EDA case-studies include analyses of herbicide-tolerant GM soybean and insect-protected GM maize and soybean. Bayesian approaches are presented in an analysis of herbicide-tolerant GM soybean. Advantages of these approaches over classical frequentist significance testing include the more direct interpretation of results in terms of probabilities pertaining to quantities of interest and no confusion over the application of corrections for multiple comparisons. It is concluded that a standardized framework for these methodologies could provide specific advantages through enhanced clarity of presentation and interpretation in comparative assessments of crop composition.
Statistical analyses of the magnet data for the advanced photon source storage ring magnets
Kim, S.H.; Carnegie, D.W.; Doose, C.; Hogrefe, R.; Kim, K.; Merl, R.
1995-05-01
The statistics of the measured magnetic data for the 80 dipole, 400 quadrupole, and 280 sextupole magnets of conventional resistive design for the APS storage ring are summarized. In order to accommodate the vacuum chamber, the curved dipole has a C-type cross section, and the quadrupole and sextupole cross sections have 180° and 120° symmetries, respectively. The data statistics include the integrated main fields, multipole coefficients, magnetic and mechanical axes, and roll angles of the main fields. The average and rms values of the measured magnet data meet the storage ring requirements.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-24
Department of Health and Human Services. Workshop: "Advancing Research on Mixtures: New Perspectives and Approaches for Predicting Adverse Human Health Effects." Workshop information: niehs.nih.gov/conferences/dert/mixtures/. The deadline to register for this workshop is...
Robust statistical approaches to assess the degree of agreement of clinical data
NASA Astrophysics Data System (ADS)
Grilo, Luís M.; Grilo, Helena L.
2016-06-01
To analyze the blood of patients who took vitamin B12 for a period of time, two different measurement methods were used (one the established method, involving more human intervention; the other largely automated). Given the non-normality of the differences between the two measurement methods, the limits of agreement are also estimated using a non-parametric approach to assess the degree of agreement of the clinical data. The bootstrap resampling method is applied in order to obtain robust confidence intervals for the mean and median of the differences. The approaches used are easy to apply with user-friendly software, and their outputs are easy to interpret. In this case study the parametric and non-parametric approaches lead to different statistical conclusions, but the decision whether agreement is acceptable or not remains a clinical judgment.
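A non-parametric limits-of-agreement analysis with bootstrap confidence intervals can be sketched as follows. The paired B12 measurements here are simulated, not the study's data, and `bootstrap_ci` is a generic percentile-bootstrap helper.

```python
import numpy as np

def bootstrap_ci(data, stat_fn, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for any statistic."""
    rng = np.random.default_rng(seed)
    boot = np.array([stat_fn(rng.choice(data, size=len(data)))
                     for _ in range(n_boot)])
    return (np.quantile(boot, alpha / 2), np.quantile(boot, 1 - alpha / 2))

# hypothetical paired measurements: established vs. automated method
rng = np.random.default_rng(6)
established = rng.normal(400.0, 80.0, 60)
automated = established + rng.normal(5.0, 12.0, 60)  # small bias + noise
diff = automated - established

mean_ci = bootstrap_ci(diff, np.mean)      # robust CI for the mean bias
median_ci = bootstrap_ci(diff, np.median)  # robust CI for the median bias
# non-parametric limits of agreement: 2.5th and 97.5th percentiles
loa = (np.quantile(diff, 0.025), np.quantile(diff, 0.975))
print(loa[0] < np.mean(diff) < loa[1])  # → True
```

Whether limits of agreement of this width are acceptable is, as the abstract stresses, a clinical rather than statistical judgment.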
NASA Astrophysics Data System (ADS)
Besic, Nikola; Ventura, Jordi Figueras i.; Grazioli, Jacopo; Gabella, Marco; Germann, Urs; Berne, Alexis
2016-09-01
Polarimetric radar-based hydrometeor classification is the procedure of identifying different types of hydrometeors by exploiting polarimetric radar observations. The main drawback of the existing supervised classification methods, mostly based on fuzzy logic, is a significant dependency on a presumed electromagnetic behaviour of different hydrometeor types. Namely, the results of the classification largely rely upon the quality of scattering simulations. The unsupervised approach, in turn, lacks the constraints related to hydrometeor microphysics. The idea of the proposed method is to compensate for these drawbacks by combining the two approaches in a way that microphysical hypotheses can, to a degree, adjust the content of the classes obtained statistically from the observations. This is done by means of an iterative approach, performed offline, which, in a statistical framework, examines clustered representative polarimetric observations by comparing them to the presumed polarimetric properties of each hydrometeor class. Aside from this comparison, a routine alters the content of clusters by encouraging further statistical clustering in case of non-identification. By merging all identified clusters, the multi-dimensional polarimetric signatures of various hydrometeor types are obtained for each of the studied representative datasets, i.e. for each radar system of interest. These are depicted by sets of centroids which are then employed in operational labelling of different hydrometeors. The method has been applied to three C-band datasets, each acquired by a different operational radar from the MeteoSwiss Rad4Alp network, as well as to two X-band datasets acquired by two research mobile radars. The results are discussed through a comparative analysis, which includes corresponding supervised and unsupervised approaches, emphasising the operational potential of the proposed method.
NASA Astrophysics Data System (ADS)
Verrelst, J.; Alonso, L.; Camps-Valls, G.; Delegido, J.; Guanter, L.; Contreras Marin, C. J.; Acosta Rubiano, J.; Meza Naranjo, C. M.; Moreno, J.
2010-12-01
This work evaluates two novel approaches, one empirical and one statistical, for the estimation of leaf chlorophyll content (Chlab), leaf area index (LAI) and fractional vegetation cover (fCOVER). The empirical approach identifies the continuum spectral region sensitive to Chlab with the so-called Normalized Area Over reflectance Curve (NAOC). The performance of NAOC was compared against that of established and generic narrowband vegetation indices. However, because not all available bands take part in these methods, it remains ambiguous whether the best fits were achieved. Alternatively, the statistical approach is based on Gaussian processes (GP) and allows inclusion of all bands. GP builds a nonlinear regression as a linear combination of spectra mapped to a high-dimensional space. In addition, GP provides an indication of the most contributing bands for each parameter, a weight for the most relevant spectra, and a confidence estimate of the retrieval. Hyperspectral CHRIS data, resampled to the Sentinel-2 configuration, were used in the experiments. Results from the Spanish Barrax test site show that GP outperforms the empirical approaches in assessing the vegetation properties when using at least four out of 62 CHRIS bands. It was found that the most contributing bands were situated in the red and red-edge spectral regions, and to a lesser extent in the blue and NIR parts of the spectrum. Since the proposed empirical and statistical methods consist of simple relationships between the parameter and a few bands, they can easily be applied to multispectral data, as long as the relevant bands are available, as is the case with Sentinel-2.
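GP regression of a biophysical parameter on a handful of bands can be sketched with a minimal RBF-kernel implementation: hyperparameters are fixed rather than learned, and the 4-band "spectra" and chlorophyll-like target are simulated, so this only illustrates the mechanics (posterior mean plus a per-point confidence estimate).

```python
import numpy as np

def gp_regress(X_train, y_train, X_test, length=1.0, sigma_n=0.1):
    """Gaussian process regression with an RBF kernel: returns the
    posterior mean and per-point predictive std at X_test."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)

    K = rbf(X_train, X_train) + sigma_n ** 2 * np.eye(len(X_train))
    Ks = rbf(X_test, X_train)
    Kss = rbf(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - (v ** 2).sum(axis=0)
    return mean, np.sqrt(np.maximum(var, 0))

# toy retrieval: map a 4-band "spectrum" to a chlorophyll-like parameter
rng = np.random.default_rng(7)
X = rng.random((50, 4))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.05 * rng.normal(size=50)
mean, std = gp_regress(X, y, X[:5])
print(np.abs(mean - y[:5]).max() < 0.5)  # → True
```

The predictive std is what makes GP attractive for retrieval: unlike a vegetation index, each estimate comes with its own confidence.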
Multi-level approach for statistical appearance models with probabilistic correspondences
NASA Astrophysics Data System (ADS)
Krüger, Julia; Ehrhardt, Jan; Handels, Heinz
2016-03-01
Statistical shape and appearance models are often based on the accurate identification of one-to-one correspondences in a training data set. At the same time, the determination of these corresponding landmarks is the most challenging part of such methods. Hufnagel et al.1 developed an alternative method using correspondence probabilities for a statistical shape model. In Krüger et al.2, 3 we propose the use of probabilistic correspondences for statistical appearance models by incorporating appearance information into the framework. We employ a point-based representation of image data combining position and appearance information. The model is optimized and adapted by a maximum a posteriori (MAP) approach deriving a single global optimization criterion with respect to model parameters and observation-dependent parameters that directly affects shape and appearance information of the considered structures. Because initially unknown correspondence probabilities are used and a higher number of degrees of freedom is introduced to the model, a regularization of the model generation process is advantageous. For this purpose we extend the derived global criterion by a regularization term which penalizes implausible topological changes. Furthermore, we propose a multi-level approach for the optimization, to increase the robustness of the model generation process.
Griffith, Lauren E.; van den Heuvel, Edwin; Fortier, Isabel; Sohel, Nazmul; Hofer, Scott M.; Payette, Hélène; Wolfson, Christina; Belleville, Sylvie; Kenny, Meghan; Doiron, Dany; Raina, Parminder
2015-01-01
Objectives To identify statistical methods for harmonization which could be used in the context of summary data and individual participant data meta-analysis of cognitive measures. Study Design and Setting Environmental scan methods were used to conduct two reviews to identify: 1) studies that quantitatively combined data on cognition, and 2) general literature on statistical methods for data harmonization. Search results were rapidly screened to identify articles of relevance. Results All 33 meta-analyses combining cognition measures either restricted their analyses to a subset of studies using a common measure or combined standardized effect sizes across studies; none reported their harmonization steps prior to producing summary effects. In the second scan, three general classes of statistical harmonization models were identified: 1) standardization methods, 2) latent variable models, and 3) multiple imputation models; few publications compared methods. Conclusions Although it is an implicit part of conducting a meta-analysis or pooled analysis, the methods used to assess inferential equivalence of complex constructs are rarely reported or discussed. Progress in this area will be supported by guidelines for the conduct and reporting of the data harmonization and integration and by evaluating and developing statistical approaches to harmonization. PMID:25497980
ERIC Educational Resources Information Center
Touchton, Michael
2015-01-01
I administer a quasi-experiment using undergraduate political science majors in statistics classes to evaluate whether "flipping the classroom" (the treatment) alters students' applied problem-solving performance and satisfaction relative to students in a traditional classroom environment (the control). I also assess whether general…
Design of Complex Systems in the presence of Large Uncertainties: a statistical approach
Koutsourelakis, P
2007-07-31
The design or optimization of engineering systems is generally based on several assumptions related to the loading conditions, physical or mechanical properties, environmental effects, initial or boundary conditions, etc. The effect of those assumptions on the optimum design, or on the design finally adopted, is generally unknown, particularly in large, complex systems. A rational recourse is to cast the problem in a probabilistic framework which accounts for the various uncertainties and also makes it possible to quantify their effect on the response/behavior/performance of the system. In such a framework the performance function(s) of interest are also random, and optimization of the system with respect to the design variables has to be reformulated with respect to statistical properties of these objective functions (e.g. the probability of exceeding certain thresholds). Analysis tools are usually restricted to elaborate legacy codes which have been developed over a long period of time and are generally well-tested (e.g. Finite Elements). These do not, however, include any stochastic components, and their alteration is impossible or ill-advised. Furthermore, as the number of uncertainties and design variables grows, the problem quickly becomes computationally intractable. The present paper advocates the use of statistical learning in order to perform these tasks for any system of arbitrary complexity, as long as a deterministic solver is available. The proposed computational framework consists of two components. Firstly, advanced sampling techniques are employed in order to efficiently explore the dependence of the performance with respect to the uncertain and design variables. The proposed algorithm is directly parallelizable and attempts to maximize the amount of information extracted with the least possible number of calls to the deterministic solver. The output of this process is utilized by statistical classification procedures in order to derive the dependence of the performance
Statistics of Poincaré recurrences in local and global approaches
NASA Astrophysics Data System (ADS)
Anishchenko, Vadim S.; Astakhov, Sergey V.; Boev, Yaroslav I.; Biryukova, Nadezhda I.; Strelkova, Galina I.
2013-12-01
The basic statistical characteristics of the Poincaré recurrence sequence are obtained numerically for the logistic map in the chaotic regime. The mean values, variance and recurrence distribution density are calculated and their dependence on the return region size is analyzed. It is verified that the Afraimovich-Pesin dimension may be evaluated by the Kolmogorov-Sinai entropy. The peculiarities of the influence of noise on the recurrence statistics are studied in local and global approaches. It is shown that the obtained numerical data are in complete agreement with the theoretical results. It is demonstrated that the Poincaré recurrence theory can be applied to diagnose effects of stochastic resonance and chaos synchronization and to calculate the fractal dimension of a chaotic attractor.
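The recurrence statistics described above can be reproduced in a few lines for the logistic map at r = 4: record the intervals between successive returns of the orbit to a small region and summarize them. The return region, starting point, and iteration count below are illustrative choices, not the study's:

```python
import statistics

# Numerical sketch of Poincare recurrence statistics for the logistic map
# x_{n+1} = 4x(1-x) in the chaotic regime. A 0.02-wide return interval
# around x = 0.3 is an illustrative choice of return region.

def recurrence_times(eps=0.02, center=0.3, n_iter=200_000, x0=0.1234):
    lo, hi = center - eps / 2, center + eps / 2
    x, last, times = x0, None, []
    for n in range(n_iter):
        x = 4.0 * x * (1.0 - x)
        if lo <= x <= hi:
            if last is not None:
                times.append(n - last)   # time since the previous return
            last = n
    return times

times = recurrence_times()
mean_rt = statistics.mean(times)
print(mean_rt, statistics.pvariance(times))
```

By Kac's lemma, the mean recurrence time should approach the reciprocal of the invariant measure of the return region (roughly 72 here, using the known invariant density 1/(π√(x(1−x))) of the r = 4 logistic map).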
A Statistical Approach to Touchdown Dynamics for ExoMars 2016
NASA Astrophysics Data System (ADS)
Jauregui, Y. E.; Prieto, M.; del Campo, F.; Biondetti, G.; Walloschek, T.
2014-06-01
One of the major challenges of any space mission involving planetary landers is to predict realistic impact conditions, in terms of load and stability. The strong uncertainty inherent to the touchdown event entails the development of accurate predictive methods. The main novelty introduced in this paper is the way this problem is faced for ExoMars 2016. The adopted solution is based on a statistical approach in which the relevant outputs are derived statistically from a set of random landing cases. This paper presents both the dynamic analyses and the qualification impact tests that are performed to demonstrate the reliability at touchdown of the ExoMars 2016 Entry, Descent and Landing Demonstrator Module.
ERIC Educational Resources Information Center
McLoughlin, M. Padraig M. M.
2008-01-01
The author of this paper submits the thesis that learning requires doing; only through inquiry is learning achieved, and hence this paper proposes a programme of use of a modified Moore method in a Probability and Mathematical Statistics (PAMS) course sequence to teach students PAMS. Furthermore, the author of this paper opines that set theory…
Multivariate meta-analysis: a robust approach based on the theory of U-statistic.
Ma, Yan; Mazumdar, Madhu
2011-10-30
Meta-analysis is the methodology for combining findings from similar research studies asking the same question. When the question of interest involves multiple outcomes, multivariate meta-analysis is used to synthesize the outcomes simultaneously, taking into account the correlation between the outcomes. Likelihood-based approaches, in particular the restricted maximum likelihood (REML) method, are commonly utilized in this context. REML assumes a multivariate normal distribution for the random-effects model. This assumption is difficult to verify, especially for meta-analyses with a small number of component studies. The use of REML also requires iterative estimation between parameters, needing moderately high computation time, especially when the dimension of outcomes is large. A multivariate method of moments (MMM) is available and is shown to perform equally well to REML. However, there is a lack of information on the performance of these two methods when the true data distribution is far from normality. In this paper, we propose a new nonparametric and non-iterative method for multivariate meta-analysis on the basis of the theory of U-statistics and compare the properties of these three procedures under both normal and skewed data through simulation studies. It is shown that the effect on estimates from REML because of non-normal data distribution is marginal and that the estimates from the MMM and U-statistic-based approaches are very similar. Therefore, we conclude that for performing multivariate meta-analysis, the U-statistic estimation procedure is a viable alternative to REML and MMM. Easy implementation of all three methods is illustrated by their application to data from two published meta-analyses from the fields of hip fracture and periodontal disease. We discuss ideas for future research based on U-statistics for testing the significance of between-study heterogeneity and for extending the work to the meta-regression setting.
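For intuition, the method-of-moments logic behind the MMM estimator can be shown in its familiar univariate form (DerSimonian-Laird); this is a simplified analogue, not the multivariate estimator of the paper, and the study effects below are hypothetical:

```python
# Univariate method-of-moments random-effects estimator (DerSimonian-Laird):
# estimate the between-study variance tau^2 from Cochran's Q, then pool the
# study effects with weights 1 / (within-variance + tau^2).

def dersimonian_laird(effects, variances):
    w = [1.0 / v for v in variances]
    mu_fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - mu_fixed) ** 2 for wi, y in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # truncated at zero
    w_re = [1.0 / (v + tau2) for v in variances]
    mu_re = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    return mu_re, tau2

# Hypothetical study effects (e.g. log odds ratios) and within-study variances
effects = [0.10, 0.30, 0.35, 0.65, 0.45]
variances = [0.03, 0.02, 0.05, 0.01, 0.04]
mu, tau2 = dersimonian_laird(effects, variances)
print(round(mu, 3), round(tau2, 4))
```

Like the U-statistic approach discussed in the abstract, this estimator is non-iterative and makes no normality assumption for the random effects.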
NASA Astrophysics Data System (ADS)
Maan, Bianca; van der Heijden, Ferdi; Fütterer, Jurgen J.
2012-02-01
Prostate segmentation is essential for calculating prostate volume, creating patient-specific prostate anatomical models and image fusion. Automatic segmentation methods are preferable because manual segmentation is time-consuming and highly subjective. Most of the currently available segmentation methods use a priori knowledge of the prostate shape. However, there is a large variation in prostate shape between patients. Our approach uses multispectral magnetic resonance imaging (MRI) data, containing T1, T2 and proton density (PD) weighted images and the distance from the voxel to the centroid of the prostate, together with statistical pattern classifiers. We investigated the performance of a parametric and a non-parametric classification approach by applying a Bayesian-quadratic and a k-nearest-neighbor classifier, respectively. An annotated data set was made by manual labeling of the images. Using this data set, the classifiers were trained and evaluated. The following results were obtained from three experiments. Firstly, using feature selection we showed that the average segmentation error rates are lowest when combining all three images and the distance with the k-nearest-neighbor classifier. Secondly, the confusion matrix showed that the k-nearest-neighbor classifier has the highest sensitivity. Finally, the prostate was segmented using both classifiers. The segmentation boundaries approach the prostate boundaries for most slices. However, in some slices the segmentation result contained errors near the borders of the prostate. The current results show that segmenting the prostate using multispectral MRI data combined with a statistical classifier is a promising method.
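A k-nearest-neighbor voxel classifier of the kind compared above can be sketched as follows; the feature vectors (T1, T2, PD intensities plus distance to centroid) and labels are hypothetical stand-ins for the annotated data set:

```python
from collections import Counter
import math

# Minimal k-nearest-neighbour classifier over voxel feature vectors
# [T1, T2, PD, distance-to-centroid]; values below are hypothetical.

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); returns the majority label."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [
    ([0.9, 0.8, 0.7, 0.10], "prostate"),
    ([0.8, 0.9, 0.6, 0.20], "prostate"),
    ([0.7, 0.7, 0.8, 0.15], "prostate"),
    ([0.2, 0.1, 0.3, 0.90], "background"),
    ([0.1, 0.2, 0.2, 0.80], "background"),
    ([0.3, 0.2, 0.1, 0.95], "background"),
]
print(knn_predict(train, [0.85, 0.8, 0.65, 0.12]))  # -> prostate
```

Being non-parametric, the classifier makes no assumption about the feature distribution, which is the contrast with the Bayesian-quadratic (parametric) approach the study evaluates.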
Mougabure-Cueto, G; Sfara, V
2016-04-25
Dose-response relations can be obtained from systems at any structural level of biological matter, from the molecular to the organismic level. There are two types of approaches for analyzing dose-response curves: a deterministic approach, based on the law of mass action, and a statistical approach, based on the assumed probability distributions of phenotypic characters. Models based on the law of mass action have been proposed to analyze dose-response relations across the entire range of biological systems. The purpose of this paper is to discuss the principles that determine the dose-response relations. Dose-response curves of simple systems are the result of chemical interactions between reacting molecules, and therefore are supported by the law of mass action. In consequence, the shape of these curves is perfectly sustained by physicochemical features. However, dose-response curves of bioassays with quantal response are not explained by the simple collision of molecules but by phenotypic variations among individuals, which can be interpreted as individual tolerances. The expression of tolerance is the result of many genetic and environmental factors and thus can be considered a random variable. In consequence, the shape of its associated dose-response curve has no physicochemical bearing; instead, it originates from random biological variations. Due to the randomness of tolerance there is no reason to use deterministic equations for its analysis; on the contrary, statistical models are the appropriate tools for analyzing these dose-response relations.
Advanced statistical methods for improved data analysis of NASA astrophysics missions
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.
1992-01-01
The investigators under this grant studied ways to improve the statistical analysis of astronomical data. They looked at existing techniques, the development of new techniques, and the production and distribution of specialized software to the astronomical community. Abstracts of nine papers that were produced are included, as well as brief descriptions of four software packages. The articles that are abstracted discuss analytical and Monte Carlo comparisons of six different linear least squares fits, a (second) paper on linear regression in astronomy, two reviews of public domain software for the astronomer, subsample and half-sample methods for estimating sampling distributions, a nonparametric estimation of survival functions under dependent competing risks, censoring in astronomical data due to nondetections, an astronomy survival analysis computer package called ASURV, and improving the statistical methodology of astronomical data analysis.
Theoretical approaches to the steady-state statistical physics of interacting dissipative units
NASA Astrophysics Data System (ADS)
Bertin, Eric
2017-02-01
The aim of this review is to provide a concise overview of some of the generic approaches that have been developed to deal with the statistical description of large systems of interacting dissipative ‘units’. The latter notion includes, e.g. inelastic grains, active or self-propelled particles, bubbles in a foam, low-dimensional dynamical systems like driven oscillators, or even spatially extended modes like Fourier modes of the velocity field in a fluid. We first review methods based on the statistical properties of a single unit, starting with elementary mean-field approximations, either static or dynamic, that describe a unit embedded in a ‘self-consistent’ environment. We then discuss how this basic mean-field approach can be extended to account for spatial dependences, in the form of space-dependent mean-field Fokker–Planck equations, for example. We also briefly review the use of kinetic theory in the framework of the Boltzmann equation, which is an appropriate description for dilute systems. We then turn to descriptions in terms of the full N-body distribution, starting from exact solutions of one-dimensional models, using a matrix-product ansatz method when correlations are present. Since exactly solvable models are scarce, we also present some approximation methods which can be used to determine the N-body distribution in a large system of dissipative units. These methods include the Edwards approach for dense granular matter and the approximate treatment of multiparticle Langevin equations with colored noise, which models systems of self-propelled particles. Throughout this review, emphasis is put on methodological aspects of the statistical modeling and on formal similarities between different physical problems, rather than on the specific behavior of a given system.
NASA Technical Reports Server (NTRS)
Gao, X. H.; Stanford, J. L.
1988-01-01
The formulas for performing several statistical calculations based on Fourier coefficients are presented for use in atmospheric observational studies. The calculations discussed include a method for estimating the degrees of temporal freedom of two correlated time series and a method for performing seasonal analyses using a half-year summer/winter projection operator in the frequency domain. A modified lag-correlation calculation is proposed for obtaining lag correlations in the frequency domain. Also, a spectral approach for Empirical Orthogonal Function (EOF) and Extended EOF analysis is given which reduces the size of the matrix to be solved in the eigenproblem.
A Statistical Dynamic Approach to Structural Evolution of Complex Capital Market Systems
NASA Astrophysics Data System (ADS)
Shao, Xiao; Chai, Li H.
As an important part of modern financial systems, the capital market has played a crucial role in diverse social resource allocations and economic exchanges. Going beyond traditional models and/or theories based on neoclassical economics, and considering capital markets as typical complex open systems, this paper attempts to develop a new approach to overcome some shortcomings of the available research. By defining the generalized entropy of capital market systems, a theoretical model and a nonlinear dynamic equation for the operations of the capital market are proposed from a statistical dynamic perspective. The US securities market from 1995 to 2001 is then simulated and analyzed as a typical case. Some instructive results are discussed and summarized.
Blangero, John; Diego, Vincent P.; Dyer, Thomas D.; Almeida, Marcio; Peralta, Juan; Kent, Jack W.; Williams, Jeff T.; Almasy, Laura; Göring, Harald H. H.
2014-01-01
Statistical genetic analysis of quantitative traits in large pedigrees is a formidable computational task due to the necessity of taking the non-independence among relatives into account. With the growing awareness that rare sequence variants may be important in human quantitative variation, heritability and association study designs involving large pedigrees will increase in frequency due to the greater chance of observing multiple copies of rare variants amongst related individuals. Therefore, it is important to have statistical genetic test procedures that utilize all available information for extracting evidence regarding genetic association. Optimal testing for marker/phenotype association involves the exact calculation of the likelihood ratio statistic which requires the repeated inversion of potentially large matrices. In a whole genome sequence association context, such computation may be prohibitive. Toward this end, we have developed a rapid and efficient eigensimplification of the likelihood that makes analysis of family data commensurate with the analysis of a comparable sample of unrelated individuals. Our theoretical results which are based on a spectral representation of the likelihood yield simple exact expressions for the expected likelihood ratio test statistic (ELRT) for pedigrees of arbitrary size and complexity. For heritability, the ELRT is: −∑ ln[1 + ĥ²(λ_gi − 1)], where ĥ² and λ_gi are respectively the heritability and the eigenvalues of the pedigree-derived genetic relationship kernel (GRK). For association analysis of sequence variants, the ELRT is given by ELRT[h²_q > 0 : unrelateds] − (ELRT[h²_t > 0 : pedigrees] − ELRT[h²_r > 0 : pedigrees]), where h²_t, h²_q, and h²_r are the total, quantitative trait nucleotide, and residual heritabilities, respectively. Using these results, fast and accurate analytical power analyses are possible, eliminating the need for computer simulation. Additional benefits of eigensimplification include a simple method for
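The heritability expression above is cheap to evaluate once the GRK eigenvalues are known. A numeric sketch, using a hypothetical eigenvalue spectrum scaled so the eigenvalues average to one (preserving the trace of the kernel):

```python
import math

# ELRT for heritability from the abstract: -sum(ln(1 + h2*(lam_i - 1))),
# with lam_i the eigenvalues of the genetic relationship kernel (GRK).
# The eigenvalues below are hypothetical; for a sample of unrelateds
# (all lam_i = 1) the statistic is 0 regardless of h2.

def elrt_heritability(h2, eigenvalues):
    return -sum(math.log(1.0 + h2 * (lam - 1.0)) for lam in eigenvalues)

lams = [2.2, 1.6, 1.1, 0.9, 0.6, 0.4, 0.2]   # hypothetical GRK spectrum, mean 1
print(round(elrt_heritability(0.4, lams), 3))  # -> 0.229
print(elrt_heritability(0.4, [1.0] * 7))       # unrelateds -> 0.0
```

Because the eigenvalues sum to the pedigree size, Jensen's inequality makes the statistic non-negative, and it grows with the spread of the spectrum, i.e. with how informative the relatedness structure is.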
Aarabi, Mohammad Hadi; Kamalian, Aida; Mohajer, Bahram; Shandiz, Mahdi Shirin; Eqlimi, Ehsan; Shojaei, Ahmad; Safabakhsh, Hamidreza
2015-08-01
Parkinson's Disease (PD) is a progressive neurodegenerative disorder assumed to involve different areas of the CNS and PNS. Thus, Diffusion Tensor Imaging (DTI) is used to examine the areas engaged in PD neurodegeneration. In the present study, we computed average tract length and fiber volume as measures of white matter integrity and adopted Network Based Statistics (NBS) to conduct group analyses between age- and gender-matched PD patient and healthy control connectivity matrices. NBS is a powerful statistical tool that utilizes the presence of every link in the connectivity matrices and controls the family-wise error rate (in the weak sense). The major regions with significantly reduced interconnecting fiber volume or average tract length were the cingulum, temporal lobe, frontal lobe, parahippocampus, hippocampus, olfactory lobe, and occipital lobe.
Papaneophytou, Christos P; Kontopidis, George
2014-02-01
The supply of many valuable proteins that have potential clinical or industrial use is often limited by their low natural availability. With the modern advances in genomics, proteomics and bioinformatics, the number of proteins being produced using recombinant techniques is exponentially increasing and seems to guarantee an unlimited supply of recombinant proteins. The demand for recombinant proteins has increased as more applications in several fields become a commercial reality. Escherichia coli (E. coli) is the most widely used expression system for the production of recombinant proteins for structural and functional studies. However, producing soluble proteins in E. coli is still a major bottleneck for structural biology projects. One of the most challenging steps in any structural biology project is predicting which protein or protein fragment will express solubly and purify well for crystallographic studies. The production of soluble and active proteins is influenced by several factors including expression host, fusion tag, induction temperature and time. Statistically designed experiments are gaining popularity in the production of recombinant proteins because they provide information on variable interactions that escapes the "one-factor-at-a-time" method. Here, we review the most important factors affecting the production of recombinant proteins in a soluble form. Moreover, we provide information about how statistically designed experiments can increase protein yield and purity as well as find conditions for crystal growth.
Advance directives in psychiatric care: a narrative approach
Widdershoven, G.; Berghmans, R.
2001-01-01
Advance directives for psychiatric care are the subject of debate in a number of Western societies. By using psychiatric advance directives (or so-called "Ulysses contracts"), it would be possible for mentally ill persons who are competent and with their disease in remission, and who want timely intervention in case of future mental crisis, to give prior authorisation to treatment at a later time when they are incompetent, have become non-compliant, and are refusing care. Thus the devastating consequences of recurrent psychosis could be minimised. Ulysses contracts raise a number of ethical questions. In this article the central issues of concern and debate are discussed from a narrative perspective. Ulysses contracts are viewed as elements of an ongoing narrative in which patient and doctor try to make sense of and get a hold on the recurrent crises inherent in the patient's psychiatric condition. Key Words: Medical ethics • narrative ethics • advance directives • psychiatry PMID:11314165
Shabbiri, Khadija; Adnan, Ahmad; Jamil, Sania; Ahmad, Waqar; Noor, Bushra; Rafique, H.M.
2012-01-01
Various cultivation parameters were optimized for the production of extracellular protease by Brevibacterium linens DSM 20158 grown under solid state fermentation conditions using a statistical approach. The cultivation variables were screened by the Plackett–Burman design, and four significant variables (soybean meal, wheat bran, (NH4)2SO4 and inoculum size) were further optimized via central composite design (CCD) using a response surface methodological approach. Using the optimal factors (soybean meal 12.0 g, wheat bran 8.50 g, (NH4)2SO4 0.45 g and inoculum size 3.50%), the rate of protease production was found to be twofold higher in the optimized medium as compared to the unoptimized reference medium. PMID:24031928
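The Plackett-Burman screening step mentioned above relies on a two-level orthogonal design. A sketch of the standard 8-run construction (a cyclic generator plus a closing row of low levels); the assignment of cultivation variables to columns here is illustrative, not the study's:

```python
# 8-run Plackett-Burman screening design: 7 factor columns at two coded
# levels (+1 high, -1 low). Rows 1-7 are cyclic shifts of the standard
# generator; the last row sets every factor to its low level.

def plackett_burman_8():
    gen = [1, 1, 1, -1, 1, -1, -1]                   # standard N=8 generator
    rows = [gen[-i:] + gen[:-i] for i in range(7)]   # cyclic right shifts
    rows.append([-1] * 7)                            # closing low-level row
    return rows

design = plackett_burman_8()
for row in design:
    print(row)

# Factor columns are pairwise orthogonal (dot product zero), so main
# effects can be estimated independently from only 8 runs.
cols = list(zip(*design))
print(sum(a * b for a, b in zip(cols[0], cols[1])))  # -> 0
```

In a study like the one above, each column would be assigned to one candidate variable (soybean meal, wheat bran, and so on), and factors with large estimated main effects pass on to the central composite design.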
A statistical approach to describe highly excited heavy and superheavy nuclei
NASA Astrophysics Data System (ADS)
Chen, Peng-Hui; Feng, Zhao-Qing; Li, Jun-Qing; Zhang, Hong-Fei
2016-09-01
A statistical approach based on the Weisskopf evaporation theory has been developed to describe the de-excitation process of highly excited heavy and superheavy nuclei, in particular for proton-rich nuclei. The excited nucleus is cooled by evaporating γ-rays and light particles (neutrons, protons, α, etc.) in competition with binary fission, in which structure effects (shell correction, fission barrier, particle separation energy) contribute to the processes. The formation of residual nuclei is evaluated via sequential emission of possible particles above the separation energies. The available data on fusion-evaporation excitation functions in the 28Si+198Pt reaction can be reproduced nicely within the approach. Supported by the Major State Basic Research Development Program in China (2015CB856903), National Natural Science Foundation of China Projects (11175218, U1332207, 11475050, 11175074), and the Youth Innovation Promotion Association of the Chinese Academy of Sciences
Shabbiri, Khadija; Adnan, Ahmad; Jamil, Sania; Ahmad, Waqar; Noor, Bushra; Rafique, H M
2012-07-01
Various cultivation parameters were optimized for the production of extracellular protease by Brevibacterium linens DSM 20158 grown under solid state fermentation conditions using a statistical approach. The cultivation variables were screened by the Plackett-Burman design, and four significant variables (soybean meal, wheat bran, (NH4)2SO4 and inoculum size) were further optimized via central composite design (CCD) using a response surface methodological approach. Using the optimal factors (soybean meal 12.0 g, wheat bran 8.50 g, (NH4)2SO4 0.45 g and inoculum size 3.50%), the rate of protease production was found to be twofold higher in the optimized medium as compared to the unoptimized reference medium.
New advances in methodology for statistical tests useful in geostatistical studies
Borgman, L.E.
1988-05-01
Methodology for statistical procedures to perform tests of hypotheses pertaining to various aspects of geostatistical investigations has been slow to develop. The correlated nature of the data precludes most classical tests and makes the design of new tests difficult. Recent studies have led to modifications of the classical t test which allow for the intercorrelation. In addition, results for certain nonparametric tests have been obtained. The conclusions of these studies provide a variety of new tools for the geostatistician in deciding questions of significant differences and magnitudes.
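One common way to modify the classical t test for intercorrelated observations, shown here as a hedged sketch rather than the study's exact procedure, is to deflate the sample size to an effective n using the lag-1 autocorrelation:

```python
import math

# Effective-sample-size adjustment of the one-sample t statistic for
# serially correlated data: n_eff = n * (1 - r1) / (1 + r1), where r1 is
# the lag-1 autocorrelation. This is one standard correction, not
# necessarily the modification developed in the study.

def lag1_autocorr(x):
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

def t_stat_effective(x, mu0):
    n = len(x)
    m = sum(x) / n
    s2 = sum((xi - m) ** 2 for xi in x) / (n - 1)
    r1 = max(0.0, lag1_autocorr(x))          # only shrink for positive r1
    n_eff = n * (1 - r1) / (1 + r1)
    return (m - mu0) / math.sqrt(s2 / n_eff), n_eff

# Hypothetical positively correlated series: n_eff falls well below n,
# widening the standard error relative to the naive t test.
x = [1.0, 1.2, 1.5, 1.4, 1.6, 1.9, 1.8, 2.0, 2.2, 2.1]
t, n_eff = t_stat_effective(x, mu0=1.0)
print(round(n_eff, 2), round(t, 2))
```

Ignoring the correlation (using n instead of n_eff) would overstate the evidence against the null, which is exactly the failure of classical tests on geostatistical data that the abstract describes.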
Taylor, Kyla W.; Joubert, Bonnie R.; Braun, Joe M.; Dilworth, Caroline; Gennings, Chris; Hauser, Russ; Heindel, Jerry J.; Rider, Cynthia V.; Webster, Thomas F.; Carlin, Danielle J.
2016-01-01
Summary: Quantifying the impact of exposure to environmental chemical mixtures is important for identifying risk factors for diseases and developing more targeted public health interventions. The National Institute of Environmental Health Sciences (NIEHS) held a workshop in July 2015 to address the need to develop novel statistical approaches for multi-pollutant epidemiology studies. The primary objective of the workshop was to identify and compare different statistical approaches and methods for analyzing complex chemical mixtures data in both simulated and real-world data sets. At the workshop, participants compared approaches and results and speculated as to why they may have differed. Several themes emerged: a) no one statistical approach appeared to outperform the others, b) many methods included some form of variable reduction or summation of the data before statistical analysis, c) the statistical approach should be selected based upon a specific hypothesis or scientific question, and d) related mixtures data should be shared among researchers to more comprehensively and accurately address methodological questions and statistical approaches. Future efforts should continue to design and optimize statistical approaches to address questions about chemical mixtures in epidemiological studies. PMID:27905274
A statistical approach to evaluate flood risk at the regional level: an application to Italy
NASA Astrophysics Data System (ADS)
Rossi, Mauro; Marchesini, Ivan; Salvati, Paola; Donnini, Marco; Guzzetti, Fausto; Sterlacchini, Simone; Zazzeri, Marco; Bonazzi, Alessandro; Carlesi, Andrea
2016-04-01
Floods are frequent and widespread in Italy, causing multiple fatalities and extensive damage to public and private structures every year. A pre-requisite for the development of mitigation schemes, including financial instruments such as insurance, is the ability to quantify their costs starting from the estimation of the underlying flood hazard. However, comprehensive and coherent information on flood-prone areas, and estimates of the frequency and intensity of flood events, are often unavailable at scales appropriate for risk pooling and diversification. In Italy, the River Basin Hydrogeological Plans (PAI), prepared by the basin administrations, are the basic descriptive, regulatory, technical and operational tools for environmental planning in flood-prone areas. Nevertheless, such plans do not cover the entire Italian territory, having significant gaps along the minor hydrographic network and in ungauged basins. Several process-based modelling approaches have been used by the different basin administrations for flood hazard assessment, resulting in an inhomogeneous hazard zonation of the territory. As a result, flood hazard assessments and expected damage estimations across the different Italian basin administrations are not always coherent. To overcome these limitations, we propose a simplified multivariate statistical approach for regional flood hazard zonation coupled with a flood impact model. This modelling approach has been applied in different Italian basin administrations, allowing a preliminary but coherent and comparable estimation of the flood hazard and the related impact. Model performances are evaluated by comparing the predicted flood-prone areas with the corresponding PAI zonation. The proposed approach will provide standardized information (following the EU Floods Directive specifications) on flood risk at a regional level, which can in turn be more readily applied to assess flood economic impacts. Furthermore, in the assumption of an appropriate
NASA Astrophysics Data System (ADS)
Gangopadhyay, S.; Clark, M. P.; Rajagopalan, B.
2002-12-01
The success of short-term (days to a fortnight) streamflow forecasting largely depends on the skill of surface climate (e.g., precipitation and temperature) forecasts at local scales in the individual river basins. The surface climate forecasts are used to drive the hydrologic models for streamflow forecasting. Typically, Medium Range Forecast (MRF) models provide forecasts of large-scale circulation variables (e.g. pressures, wind speed, relative humidity, etc.) at different levels in the atmosphere on a regular grid, which are then used to "downscale" to the surface climate at locations within the model grid box. Several statistical and dynamical methods are available for downscaling. This paper compares the utility of two statistical downscaling methodologies: (1) multiple linear regression (MLR) and (2) a nonparametric approach based on the k-nearest neighbor (k-NN) bootstrap method, in providing local-scale information on precipitation and temperature at a network of stations in the Upper Colorado River Basin. Downscaling to the stations is based on output of large-scale circulation variables (i.e. predictors) from the NCEP Medium Range Forecast (MRF) database. Fourteen-day, six-hourly forecasts are developed using these two approaches, and their forecast skill is evaluated. A stepwise regression is performed at each location to select the predictors for the MLR. The k-NN bootstrap technique resamples historical data based on their "nearness" to the current pattern in the predictor space. Prior to resampling, a Principal Component Analysis (PCA) is performed on the predictor set to identify a small subset of predictors. Preliminary results using the MLR technique indicate significant value in the downscaled MRF output for predicting runoff in the Upper Colorado Basin. It is expected that the k-NN approach will match the skill of the MLR approach at individual stations, and will have the added advantage of preserving the spatial co-variability between stations, capturing
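The k-NN bootstrap resampling step can be sketched as follows; the predictor vectors (standing in for, e.g., leading principal components of MRF circulation fields) and observed precipitation values are hypothetical:

```python
import math
import random

# k-nearest-neighbour bootstrap for statistical downscaling: find the k
# historical days whose large-scale predictor patterns are closest to the
# current pattern, then resample one of their observed station values,
# weighting closer neighbours more heavily (the 1/rank kernel of Lall and
# Sharma). All numbers below are hypothetical.

def knn_bootstrap(history, query, k=3, rng=random):
    """history: list of (predictor_vector, observed_value)."""
    ranked = sorted(history, key=lambda h: math.dist(h[0], query))[:k]
    weights = [1.0 / (rank + 1) for rank in range(k)]   # 1, 1/2, 1/3, ...
    return rng.choices([obs for _, obs in ranked], weights=weights, k=1)[0]

random.seed(42)
history = [([0.1, 0.2], 0.0), ([0.9, 1.1], 5.2), ([1.0, 1.0], 4.8),
           ([1.1, 0.9], 6.0), ([0.2, 0.1], 0.3)]
sample = knn_bootstrap(history, query=[1.0, 1.05])
print(sample)  # one of the wet-day analogues: 5.2, 4.8 or 6.0
```

Because outcomes are drawn jointly from observed days, resampling all stations from the same analogue day preserves the spatial co-variability between stations, which is the advantage the abstract anticipates for the k-NN approach.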
Thomson, Ron I; Nearey, Terrance M; Derwing, Tracey M
2009-09-01
This study describes a statistical approach to measuring crosslinguistic vowel similarity and assesses its efficacy in predicting L2 learner behavior. In the first experiment, using linear discriminant analysis, relevant acoustic variables from vowel productions of L1 Mandarin and L1 English speakers were used to train a statistical pattern recognition model that simultaneously comprised both Mandarin and English vowel categories. The resulting model was then used to determine what categories novel Mandarin and English vowel productions most resembled. The extent to which novel cases were classified as members of a competing language category provided a means for assessing the crosslinguistic similarity of Mandarin and English vowels. In a second experiment, L2 English learners imitated English vowels produced by a native speaker of English. The statistically defined similarity between Mandarin and English vowels quite accurately predicted L2 learner behavior; the English vowel elicitation stimuli deemed most similar to Mandarin vowels were more likely to elicit L2 productions that were recognized as a Mandarin category; English stimuli that were less similar to Mandarin vowels were more likely to elicit L2 productions that were recognized as new or emerging categories.
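A toy version of the discriminant-analysis similarity measure the study describes, with the acoustic variables replaced by an invented 2-D formant-like space and only two vowel categories per language; the centroids and sample sizes are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented centroids: two languages, two vowel categories each.
centroids = {("EN", "i"): [0.0, 0.0], ("EN", "u"): [4.0, 0.0],
             ("MN", "i"): [0.5, 0.5], ("MN", "u"): [8.0, 4.0]}
X, y = [], []
for label, c in centroids.items():
    X.append(rng.normal(c, 1.0, size=(50, 2)))
    y += [label] * 50
X = np.vstack(X)

# Linear discriminant analysis with a pooled (shared) covariance matrix.
labels = list(centroids)
masks = {lab: np.array([l == lab for l in y]) for lab in labels}
means = np.array([X[masks[lab]].mean(axis=0) for lab in labels])
S_inv = np.linalg.inv(sum(np.cov(X[masks[lab]].T) for lab in labels)
                      / len(labels))

def classify(x):
    """Assign x to the category with the highest discriminant score."""
    scores = [x @ S_inv @ m - 0.5 * m @ S_inv @ m for m in means]
    return labels[int(np.argmax(scores))]

# A novel token falling near the Mandarin /i/ centroid is classified as a
# Mandarin category -- the statistical notion of crosslinguistic similarity.
print(classify(np.array([0.5, 0.5])))
```

The proportion of novel tokens of one language captured by the other language's categories then serves as the similarity index.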
Targeted Approach to Overcoming Treatment Resistance in Advanced Prostate Cancer
2013-07-01
Principal Investigator: Dr. Karin Scarpinato. Contracting Organization: Georgia Southern. Abstract: The purpose of this project is to determine if rescinnamine is effective against therapy-resistant prostate cancer cells and in combination therapy.
Measuring Alumna Career Advancement: An Approach Based on Educational Expectations.
ERIC Educational Resources Information Center
Ben-Ur, Tamar; Rogers, Glen
Alverno College (Wisconsin), a women's liberal arts college, has developed an Alumni Career Level Classification (AACLC) scheme to measure alumna career advancement and demonstrate institutional accountability. This validation study was part of a larger longitudinal study of two entire cohorts of students entering the college in 1976 and 1977, of…
Advances in statistical methods to map quantitative trait loci in outbred populations.
Hoeschele, I; Uimari, P; Grignola, F E; Zhang, Q; Gage, K M
1997-11-01
Statistical methods to map quantitative trait loci (QTL) in outbred populations are reviewed, extensions and applications to human and plant genetic data are indicated, and areas for further research are identified. Simple and computationally inexpensive methods include (multiple) linear regression of phenotype on marker genotypes and regression of squared phenotypic differences among relative pairs on estimated proportions of identity-by-descent at a locus. These methods are less suited for genetic parameter estimation in outbred populations but allow the determination of test statistic distributions via simulation or data permutation; however, further inferences including confidence intervals of QTL location require the use of Monte Carlo or bootstrap sampling techniques. A method which is intermediate in computational requirements is residual maximum likelihood (REML) with a covariance matrix of random QTL effects conditional on information from multiple linked markers. Testing for the number of QTLs on a chromosome is difficult in a classical framework. The computationally most demanding methods are maximum likelihood and Bayesian analysis, which take account of the distribution of multilocus marker-QTL genotypes on a pedigree and permit investigators to fit different models of variation at the QTL. The Bayesian analysis includes the number of QTLs on a chromosome as an unknown.
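The simplest method reviewed here, single-marker linear regression with a permutation-derived null distribution for the test statistic, can be sketched on a simulated toy cross; the sample size, genotype coding, and effect size are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated toy population (values invented): 200 individuals, one
# biallelic marker coded 0/1/2, with a true additive effect on phenotype.
n = 200
marker = rng.integers(0, 3, size=n)
pheno = 0.5 * marker + rng.normal(size=n)

def regression_stat(y, g):
    """Squared correlation between phenotype and marker genotype."""
    return np.corrcoef(y, g)[0, 1] ** 2

obs = regression_stat(pheno, marker)

# Permutation test: shuffle phenotypes over genotypes to build the null
# distribution of the statistic, as described for the regression methods.
null = np.array([regression_stat(rng.permutation(pheno), marker)
                 for _ in range(2000)])
p_value = (null >= obs).mean()
```

Permutation keeps the genotype structure intact while destroying any marker-phenotype association, which is why it yields a valid null even without distributional assumptions.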
NASA Astrophysics Data System (ADS)
Baran, Sándor; Möller, Annette
2017-02-01
Forecast ensembles are typically employed to account for prediction uncertainties in numerical weather prediction models. However, ensembles often exhibit biases and dispersion errors, thus they require statistical post-processing to improve their predictive performance. Two popular univariate post-processing models are the Bayesian model averaging (BMA) and the ensemble model output statistics (EMOS). In the last few years, increased interest has emerged in developing multivariate post-processing models, incorporating dependencies between weather quantities, such as for example a bivariate distribution for wind vectors or even a more general setting allowing to combine any types of weather variables. In line with a recently proposed approach to model temperature and wind speed jointly by a bivariate BMA model, this paper introduces an EMOS model for these weather quantities based on a bivariate truncated normal distribution. The bivariate EMOS model is applied to temperature and wind speed forecasts of the 8-member University of Washington mesoscale ensemble and the 11-member ALADIN-HUNEPS ensemble of the Hungarian Meteorological Service and its predictive performance is compared to the performance of the bivariate BMA model and a multivariate Gaussian copula approach, post-processing the margins with univariate EMOS. While the predictive skills of the compared methods are similar, the bivariate EMOS model requires considerably lower computation times than the bivariate BMA method.
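The univariate EMOS building block underlying the bivariate model can be sketched as follows; the training data, bias, and spread are invented, the fit uses the log score rather than CRPS, and the paper's truncation and correlation parameters are deliberately omitted:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Synthetic training data (all values invented): 500 cases of an 8-member
# temperature ensemble with a known +1 degree bias, plus verifying obs.
n, m = 500, 8
truth = rng.normal(15.0, 5.0, size=n)
ens = truth[:, None] + 1.0 + rng.normal(0.0, 2.0, size=(n, m))
obs = truth + rng.normal(0.0, 0.5, size=n)
ens_mean, ens_var = ens.mean(axis=1), ens.var(axis=1, ddof=1)

def neg_log_score(params):
    """Univariate Gaussian EMOS, N(a + b*mean, c + d*var), fitted by
    minimum negative log score; the bivariate model in the paper adds
    truncation and a correlation parameter on top of this idea."""
    a, b, c, d = params
    mu = a + b * ens_mean
    var = np.maximum(c + d * ens_var, 1e-6)
    return np.sum(0.5 * np.log(2 * np.pi * var)
                  + (obs - mu) ** 2 / (2 * var))

fit = minimize(neg_log_score, x0=[0.0, 1.0, 1.0, 0.1],
               method="Nelder-Mead")
a, b, c, d = fit.x
```

The fitted affine correction of the ensemble mean removes the systematic bias, and the variance inflation term addresses the underdispersion typical of raw ensembles.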
Multivariate statistical and GIS-based approach to identify heavy metal sources in soils.
Facchinelli, A; Sacchi, E; Mallen, L
2001-01-01
The knowledge of the regional variability, the background values and the anthropic vs. natural origin for potentially harmful elements in soils is of critical importance to assess human impact and to fix guide values and quality standards. The present study was undertaken as a preliminary survey on soil contamination on a regional scale in Piemonte (NW Italy). The aims of the study were: (1) to determine average regional concentrations of some heavy metals (Cr, Co, Ni, Cu, Zn, Pb); (2) to find out their large-scale variability; (3) to define their natural or artificial origin; and (4) to identify possible non-point sources of contamination. Multivariate statistical approaches (Principal Component Analysis and Cluster Analysis) were adopted for data treatment, allowing the identification of three main factors controlling the heavy metal variability in cultivated soils. Geostatistics were used to construct regional distribution maps, to be compared with the geographical, geologic and land use regional database using GIS software. This approach, evidencing spatial relationships, proved very useful for the confirmation and refinement of geochemical interpretations of the statistical output. Cr, Co and Ni were associated with and controlled by parent rocks, whereas Cu together with Zn, and Pb alone, were controlled by anthropic activities. The study indicates that background values and realistic mandatory guidelines are impossible to fix without an extensive data collection and without a correct geochemical interpretation of the data.
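The PCA step of such a source-apportionment analysis can be sketched on synthetic data deliberately built to contain one lithogenic and one anthropic factor; the concentrations, noise levels, and factor structure are invented and only illustrate how co-loading elements point to a common source:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic soil geochemistry (all values invented): Cr/Co/Ni driven by a
# shared "parent rock" factor, Cu/Zn/Pb by a noisier "anthropic" factor.
n = 500
rock = rng.normal(size=n)
human = rng.normal(size=n)
data = np.column_stack(
    [rock + 0.2 * rng.normal(size=n) for _ in range(3)] +
    [human + 0.8 * rng.normal(size=n) for _ in range(3)])
elements = ["Cr", "Co", "Ni", "Cu", "Zn", "Pb"]

# Principal component analysis on standardized concentrations; elements
# loading together on one component share a common controlling factor.
Z = (data - data.mean(axis=0)) / data.std(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(Z.T))
pc1 = np.abs(eigvec[:, np.argmax(eigval)])   # loadings on the leading PC

for el, load in zip(elements, pc1):
    print(f"{el}: {load:.2f}")
```

In the synthetic setup the lithogenic triple dominates the first component, mirroring the paper's finding that Cr, Co and Ni group together under parent-rock control.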
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Busquets, Anthony M.
2000-01-01
A simulation experiment was performed to assess situation awareness (SA) and workload of pilots while monitoring simulated autoland operations in Instrument Meteorological Conditions with three advanced display concepts: two enhanced electronic flight information system (EFIS)-type display concepts and one totally synthetic, integrated pictorial display concept. Each concept incorporated sensor-derived wireframe runway and iconic depictions of sensor-detected traffic in different locations on the display media. Various scenarios, involving conflicting traffic situation assessments, main display failures, and navigation/autopilot system errors, were used to assess the pilots' SA and workload during autoland approaches with the display concepts. From the results, for each scenario, the integrated pictorial display concept provided the pilots with statistically equivalent or substantially improved SA over the other display concepts. In addition to increased SA, subjective rankings indicated that the pictorial concept offered reductions in overall pilot workload (in both mean ranking and spread) over the two enhanced EFIS-type display concepts. Out of the display concepts flown, the pilots ranked the pictorial concept as the display that was easiest to use to maintain situational awareness, to monitor an autoland approach, to interpret information from the runway and obstacle detecting sensor systems, and to make the decision to go around.
A statistical approach for segregating cognitive task stages from multivariate fMRI BOLD time series
Demanuele, Charmaine; Bähner, Florian; Plichta, Michael M.; Kirsch, Peter; Tost, Heike; Meyer-Lindenberg, Andreas; Durstewitz, Daniel
2015-01-01
Multivariate pattern analysis can reveal new information from neuroimaging data to illuminate human cognition and its disturbances. Here, we develop a methodological approach, based on multivariate statistical/machine learning and time series analysis, to discern cognitive processing stages from functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) time series. We apply this method to data recorded from a group of healthy adults whilst performing a virtual reality version of the delayed win-shift radial arm maze (RAM) task. This task has been frequently used to study working memory and decision making in rodents. Using linear classifiers and multivariate test statistics in conjunction with time series bootstraps, we show that different cognitive stages of the task, as defined by the experimenter, namely, the encoding/retrieval, choice, reward and delay stages, can be statistically discriminated from the BOLD time series in brain areas relevant for decision making and working memory. Discrimination of these task stages was significantly reduced during poor behavioral performance in dorsolateral prefrontal cortex (DLPFC), but not in the primary visual cortex (V1). Experimenter-defined dissection of time series into class labels based on task structure was confirmed by an unsupervised, bottom-up approach based on Hidden Markov Models. Furthermore, we show that different groupings of recorded time points into cognitive event classes can be used to test hypotheses about the specific cognitive role of a given brain region during task execution. We found that whilst the DLPFC strongly differentiated between task stages associated with different memory loads, but not between different visual-spatial aspects, the reverse was true for V1. Our methodology illustrates how different aspects of cognitive information processing during one and the same task can be separated and attributed to specific brain regions based on information contained in
NASA Astrophysics Data System (ADS)
Broothaerts, Nils; Verstraeten, Gert
2016-04-01
Reconstructing and quantifying human impact is an important step to understand human-environment interactions in the past. To fully understand the role of human impact in altering the environment during the Holocene, detailed reconstructions of the vegetation changes and quantitative measures of human impact on the landscape are needed. Statistical analysis of pollen data has recently been used to characterize vegetation changes and to extract semi-quantitative data on human impact. In this study, multivariate statistical analysis (cluster analysis and non-metric multidimensional scaling (NMDS)) of pollen data was used to reconstruct human-induced land use changes in two contrasting environments: central Belgium and SW Turkey. For each region, pollen data from different study sites were integrated. The data from central Belgium show the gradually increasing human impact from the Bronze Age onwards (ca. 3900 cal a BP), except for a temporary halt between 1900-1600 cal a BP, coinciding with the Migration Period in Europe. Statistical analysis of pollen data from SW Turkey provides new integrated information on changing human impact through time in the Sagalassos territory, and shows that human impact was most intense during the Hellenistic and Roman Period (ca. 2200-1750 cal a BP) and decreased and changed in nature afterwards. In addition, regional vegetation estimates using the REVEALS model were made for each study site and were compared with the outcome of the statistical analysis of the pollen data. It shows that in some cases the statistical approach can be a more easily applicable alternative to the REVEALS model. Overall, the presented examples from two contrasting environments show that cluster analysis and NMDS are useful tools to provide semi-quantitative insights into the temporal and spatial vegetation changes related to increasing human impact. Moreover, the technique can be used to compare and integrate pollen datasets from different study sites within
Risk management for moisture related effects in dry manufacturing processes: a statistical approach.
Quiroz, Jorge; Strong, John; Zhang, Lanju
2016-03-01
A risk- and science-based approach to controlling quality in pharmaceutical manufacturing includes a full understanding of how product attributes and process parameters relate to product performance through a proactive approach in formulation and process development. For dry manufacturing, where moisture content is not directly manipulated within the process, the variability in moisture of the incoming raw materials can impact both the processability and drug product quality attributes. A statistical approach is developed using individual raw material historical lots as a basis for the calculation of tolerance intervals for drug product moisture content, so that risks associated with excursions in moisture content can be mitigated. The proposed method is based on a model-independent approach that uses available data to estimate parameters of interest that describe the population of blend moisture content values and which do not require knowledge of the individual blend moisture content values. Another advantage of the proposed tolerance intervals is that they do not require the use of tabulated values for tolerance factors. This facilitates implementation in any spreadsheet program, such as Microsoft Excel. A computational example is used to demonstrate the proposed method.
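For orientation, a generic two-sided normal-theory tolerance interval can be computed directly, without tabulated tolerance factors, via Howe's approximation; note this is a standard textbook construction on invented lot data, not the paper's model-independent blend method:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical historical lot data (invented): moisture content (% w/w)
# of 40 incoming raw-material lots.
moisture = rng.normal(2.0, 0.15, size=40)

def normal_tolerance_interval(x, coverage=0.95, confidence=0.95):
    """Two-sided normal tolerance interval via Howe's approximation,
    computed from quantile functions rather than tabulated factors."""
    n = len(x)
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, n - 1)
    k = np.sqrt((n - 1) * (1 + 1 / n) * z**2 / chi2)
    m, s = x.mean(), x.std(ddof=1)
    return m - k * s, m + k * s

lo, hi = normal_tolerance_interval(moisture)
```

The tolerance factor k exceeds the naive 1.96, reflecting the extra uncertainty from estimating the mean and standard deviation from a finite number of lots.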
Comparing emerging and mature markets during times of crises: A non-extensive statistical approach
NASA Astrophysics Data System (ADS)
Namaki, A.; Koohi Lai, Z.; Jafari, G. R.; Raei, R.; Tehrani, R.
2013-07-01
One of the important issues in finance and economics for both scholars and practitioners is to describe the behavior of markets, especially during times of crises. In this paper, we analyze the behavior of some mature and emerging markets with a Tsallis entropy framework that is a non-extensive statistical approach based on non-linear dynamics. During the past decade, this technique has been successfully applied to a considerable number of complex systems such as stock markets in order to describe the non-Gaussian behavior of these systems. In this approach, there is a parameter q, which is a measure of deviation from Gaussianity, that has proved to be a good index for detecting crises. We investigate the behavior of this parameter in different time scales for the market indices. It could be seen that the specified pattern for q differs for mature markets with regard to emerging markets. The findings show the robustness of the stated approach in order to follow the market conditions over time. It is obvious that, in times of crises, q is much greater than in other times. In addition, the response of emerging markets to global events is delayed compared to that of mature markets, and tends to a Gaussian profile on increasing the scale. This approach could be very useful in application to risk and portfolio management in order to detect crises by following the parameter q in different time scales.
NASA Astrophysics Data System (ADS)
Colin, T. A.
1995-07-01
This paper reviews advances in methods for estimating fluvial transport of suspended sediment and nutrients. Research from the past four years, mostly dealing with estimating monthly and annual loads, is emphasized. However, because this topic has not appeared in previous IUGG reports, some research prior to 1990 is included. The motivation for studying sediment transport has shifted during the past few decades. In addition to its role in filling reservoirs and channels, sediment is increasingly recognized as an important part of fluvial ecosystems and estuarine wetlands. Many groups want information about sediment transport [Bollman, 1992]: Scientists trying to understand benthic biology and catchment hydrology; citizens and policy-makers concerned about environmental impacts (e.g. impacts of logging [Beschta, 1978] or snow-fences [Sturges, 1992]); government regulators considering the effectiveness of programs to protect in-stream habitat and downstream waterbodies; and resource managers seeking to restore wetlands.
NASA Astrophysics Data System (ADS)
Fernández-González, Daniel; Martín-Duarte, Ramón; Ruiz-Bustinza, Íñigo; Mochón, Javier; González-Gasca, Carmen; Verdeja, Luis Felipe
2016-08-01
Blast furnace operators expect to get sinter with homogenous and regular properties (chemical and mechanical), necessary to ensure regular blast furnace operation. Blends for sintering also include several iron by-products and other wastes that are obtained in different processes inside the steelworks. Due to their source, the availability of such materials is not always consistent, but their total production should be consumed in the sintering process, to both save money and recycle wastes. The main scope of this paper is to obtain the least expensive iron ore blend for the sintering process, which will provide suitable chemical and mechanical features for the homogeneous and regular operation of the blast furnace. The systematic use of statistical tools was employed to analyze historical data, including linear and partial correlations applied to the data and fuzzy clustering based on the Sugeno Fuzzy Inference System to establish relationships among the available variables.
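The statistical toolkit used in the paper (correlations, Sugeno fuzzy clustering) is not reproduced here, but the least-cost blending objective it ultimately serves can be sketched as a small linear program; all ore names, costs, and compositions below are invented:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative only: three ores with invented costs (per tonne), Fe and
# SiO2 contents (%); find the cheapest blend meeting chemical targets.
cost = np.array([60.0, 75.0, 50.0])
fe   = np.array([58.0, 64.0, 55.0])
sio2 = np.array([6.0, 4.0, 9.0])

# Fractions x_i >= 0 summing to 1; blend Fe >= 60 %, blend SiO2 <= 7 %.
res = linprog(c=cost,
              A_ub=np.array([-fe, sio2]),   # -Fe.x <= -60, SiO2.x <= 7
              b_ub=np.array([-60.0, 7.0]),
              A_eq=np.ones((1, 3)), b_eq=[1.0],
              bounds=[(0, 1)] * 3)
blend = res.x
print("fractions:", blend.round(3), "cost:", round(cost @ blend, 2))
```

In practice the constraint set would also encode mechanical quality indices and the requirement to consume the available in-plant by-products, which is where the paper's statistical characterization of the raw materials feeds in.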
Onisko, Agnieszka; Druzdzel, Marek J.; Austin, R. Marshall
2016-01-01
Background: Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. Aim: The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. Materials and Methods: This paper offers a comparison of two approaches to analysis of medical time series data: (1) classical statistical approach, such as the Kaplan–Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. Results: The main outcomes of our comparison are cervical cancer risk assessments produced by the three approaches. However, our analysis discusses also several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Conclusion: Our study shows that the Bayesian approach is (1) much more flexible in terms of modeling effort, and (2) it offers an individualized risk assessment, which is more cumbersome for classical statistical approaches. PMID:28163973
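The first of the classical tools compared above, the Kaplan-Meier product-limit estimator, can be written in a few lines; the follow-up times and censoring pattern below are invented toy data, not the Magee-Womens screening series:

```python
import numpy as np

# Toy right-censored follow-up times in months (invented): event = 1
# means the outcome occurred, event = 0 means the subject was censored.
time = np.array([3, 5, 5, 8, 12, 12, 15, 20, 20, 24])
event = np.array([1, 1, 0, 1, 1, 0, 1, 0, 1, 0])

def kaplan_meier(t, e):
    """Product-limit estimate of the survival curve S(t)."""
    order = np.argsort(t)
    t, e = t[order], e[order]
    times, surv = [], []
    s = 1.0
    for u in np.unique(t):
        d = int(np.sum((t == u) & (e == 1)))   # events at time u
        at_risk = int(np.sum(t >= u))          # subjects still at risk
        if d > 0:
            s *= 1.0 - d / at_risk
            times.append(u)
            surv.append(s)
    return np.array(times), np.array(surv)

ts, S = kaplan_meier(time, event)
```

Censored subjects leave the risk set without triggering a drop in S(t), which is exactly the incomplete-data handling that the dynamic Bayesian network has to replicate through its own mechanisms.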
NASA Astrophysics Data System (ADS)
Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo
2014-05-01
Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question, which of the provided kinematic source inversion solutions is most reliable and most robust, and — more generally — how accurate are fault parameterization and solution predictions? These issues are not included in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based (a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library that uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e. for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) is no longer accompanied by a decreasing misfit. Identification of this cross-over is of importance as it reveals the resolution power of the studied data set (i.e. teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
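The Bayesian sampling machinery can be illustrated with a hand-rolled random-walk Metropolis chain on a deliberately trivial one-parameter problem; the linear forward model below is an invented stand-in for the body-wave computation, and none of this uses the QUESO library itself:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy inversion (everything invented): recover one source parameter
# theta from noisy synthetic "waveforms" d = f(theta) + noise.
t = np.linspace(0.0, 5.0, 100)
theta_true, sigma = 1.3, 0.1

def forward(theta):
    return theta * t            # placeholder linear forward model

data = forward(theta_true) + sigma * rng.normal(size=t.size)

def log_post(theta):
    """Gaussian likelihood with a flat prior on (0, 3)."""
    if not 0.0 < theta < 3.0:
        return -np.inf
    r = data - forward(theta)
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis: accept uphill moves always, downhill moves
# with probability exp(delta log-posterior).
theta, lp, chain = 1.0, log_post(1.0), []
for _ in range(5000):
    prop = theta + 0.02 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
post = np.array(chain[1000:])
```

The retained samples approximate the full posterior density, which is what allows the uncertainty mapping described in the abstract rather than a single best-fit solution.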
Statistical approaches to account for false-positive errors in environmental DNA samples.
Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid
2016-05-01
Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies.
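The upward bias of a naive occupancy estimate under false positives, which motivates the modelling approaches reviewed, is easy to reproduce by simulation; all parameter values below are invented, and false positives at occupied sites are folded into the detection probability for simplicity:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated eDNA survey (invented values): psi is true occupancy, p the
# per-sample detection probability at occupied sites, fp the per-sample
# false-positive probability at unoccupied sites.
psi, p, fp = 0.4, 0.6, 0.05
n_sites, n_samples = 1000, 5

occupied = rng.random(n_sites) < psi
p_any = np.where(occupied,
                 1 - (1 - p) ** n_samples,    # >= 1 true detection
                 1 - (1 - fp) ** n_samples)   # >= 1 false positive
detected = rng.random(n_sites) < p_any

# Naive estimator ignoring false positives: the fraction of sites with at
# least one detection. Even fp = 0.05 inflates it well above psi.
naive_psi = detected.mean()
```

With five samples per site, a 5% per-sample false-positive rate gives unoccupied sites roughly a 23% chance of at least one detection, which is the mechanism behind the bias the authors document.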
The adjoint neutron transport equation and the statistical approach for its solution
NASA Astrophysics Data System (ADS)
Saracco, P.; Dulla, S.; Ravetto, P.
2016-11-01
The adjoint equation was introduced in the early days of neutron transport and its solution, the neutron importance, has been used for several applications in neutronics. The work first presents a critical review of the adjoint neutron transport equation. Afterwards, the adjoint model is constructed for a reference physical situation for which an analytical approach is viable, i.e. an infinite homogeneous scattering medium. This problem leads to an equation that is the adjoint of the slowing-down equation, which is well known in nuclear reactor physics. A general closed-form analytical solution to such an adjoint equation is obtained by a procedure that can be used also to derive the classical Placzek functions. This solution constitutes a benchmark for any statistical or numerical approach to the adjoint equation. A sampling technique to evaluate the adjoint flux for the transport equation is then proposed and physically interpreted as a transport model for pseudo-particles. This can be done by introducing appropriate kernels describing the transfer of the pseudo-particles in the phase space. This technique allows estimating the importance function by a standard Monte Carlo approach. The sampling scheme is validated by comparison with the analytical results previously obtained.
NASA Astrophysics Data System (ADS)
Stück, H. L.; Siegesmund, S.
2012-04-01
Sandstones are a popular natural stone due to their wide occurrence and availability. The different applications for these stones have led to an increase in demand. From the viewpoint of conservation and the natural stone industry, an understanding of the material behaviour of this construction material is very important. Sandstones are a highly heterogeneous material. Based on statistical analyses with a sufficiently large dataset, a systematic approach to predicting the material behaviour should be possible. Since the literature already contains a large volume of data concerning the petrographical and petrophysical properties of sandstones, a large dataset could be compiled for the statistical analyses. The aim of this study is to develop constraints on the material behaviour and especially on the weathering behaviour of sandstones. Approximately 300 samples from historical and presently mined natural sandstones in Germany and ones described worldwide were included in the statistical approach. The mineralogical composition and fabric characteristics were determined from detailed thin section analyses and descriptions in the literature. Particular attention was paid to evaluating the compositional and textural maturity, grain contacts and contact thickness, type of cement, degree of alteration and the intergranular volume. Statistical methods were used to test for normal distributions and to calculate linear regressions among the basic petrophysical properties of density, porosity, water uptake and strength. The sandstones were classified into three different pore size distribution groups and evaluated against the other petrophysical properties. Weathering behaviour data, such as hygric swelling and salt-loading tests, were also included. To identify similarities between individual sandstones or to define groups of specific sandstone types, principal component analysis, cluster analysis and factor analysis were applied. Our results show that composition and porosity
Advanced Placement Chemistry: A Feasible, Alternative Approach to Advanced Placement Chemistry Labs.
ERIC Educational Resources Information Center
Bergmeier, Brian D.
1984-01-01
Discusses two factors to consider when initiating advanced placement (AP) chemistry, namely, the quality of those teaching the program and the time necessary to teach the relevant concepts. Suggests offering laboratory sessions during evening hours as an alternative to traditional daytime arrangements for laboratory blocks. (JN)
McManamay, Ryan A
2014-01-01
Despite the ubiquitous existence of dams within riverscapes, much of our knowledge about dams and their environmental effects remains context-specific. Hydrology, more than any other environmental variable, has been studied in great detail with regard to dam regulation. While much progress has been made in generalizing the hydrologic effects of regulation by large dams, many aspects of hydrology show site-specific fidelity to dam operations, small dams (including diversions), and regional hydrologic regimes. A statistical modeling framework is presented to quantify and generalize hydrologic responses to varying degrees of dam regulation. Specifically, the objectives were to 1) compare the effects of local versus cumulative dam regulation, 2) determine the importance of different regional hydrologic regimes in influencing hydrologic responses to dams, and 3) evaluate how different regulation contexts lead to error in predicting hydrologic responses to dams. Overall, model performance was poor in quantifying the magnitude of hydrologic responses, but performance was sufficient in classifying hydrologic responses as negative or positive. Responses of some hydrologic indices to dam regulation were highly dependent upon hydrologic class membership and the purpose of the dam. The opposing coefficients between local and cumulative-dam predictors suggested that hydrologic responses to cumulative dam regulation are complex, and predicting the hydrology downstream of individual dams, as opposed to multiple dams, may be more easily accomplished using statistical approaches. Results also suggested that particular contexts, including multipurpose dams, high cumulative regulation by multiple dams, diversions, close proximity to dams, and certain hydrologic classes are all sources of increased error when predicting hydrologic responses to dams. Statistical models, such as the ones presented herein, show promise in their ability to model the effects of dam regulation at
NASA Astrophysics Data System (ADS)
Mfumu Kihumba, Antoine; Vanclooster, Marnik
2013-04-01
Drinking water in Kinshasa, the capital of the Democratic Republic of Congo, is provided by extracting groundwater from the local aquifer, particularly in peripheral areas. The exploited groundwater body is mainly unconfined and located within a continuous detrital aquifer, primarily composed of sedimentary formations. However, the aquifer is subjected to an increasing threat of anthropogenic pollution pressure. Understanding the detailed origin of this pollution pressure is important for sustainable drinking water management in Kinshasa. The present study aims to explain the observed nitrate pollution problem, nitrate being considered as a good tracer for other pollution threats. The analysis is made in terms of physical attributes that are readily available using a statistical modelling approach. For the nitrate data, use was made of a historical groundwater quality assessment study, for which the data were re-analysed. The physical attributes are related to the topography, land use, geology and hydrogeology of the region. Prior to the statistical modelling, intrinsic and specific vulnerability for nitrate pollution was assessed. This vulnerability assessment showed that the alluvium area in the northern part of the region is the most vulnerable area. This area consists of urban land use with poor sanitation. Re-analysis of the nitrate pollution data demonstrated that the spatial variability of nitrate concentrations in the groundwater body is high, and coherent with the fragmented land use of the region and the intrinsic and specific vulnerability maps. Multiple regression and regression tree analysis were used for the statistical modelling. The results demonstrated the significant impact of land use variables on the Kinshasa groundwater nitrate pollution and the need for a detailed delineation of groundwater capture zones around the monitoring stations. Key words: Groundwater, Isotopic, Kinshasa, Modelling, Pollution, Physico-chemical.
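The two statistical tools named in this abstract, multiple regression and a regression tree, can be sketched on invented data; the predictors (urban land-use fraction, water-table depth) and all coefficients below are hypothetical stand-ins, not the study's variables:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 80
urban = rng.uniform(0, 1, n)        # hypothetical fraction of urban land use
depth = rng.uniform(5, 50, n)       # hypothetical water-table depth (m)
nitrate = 10 + 40 * urban - 0.3 * depth + rng.normal(0, 2, n)

# Multiple linear regression: least squares for [intercept, urban, depth]
A = np.column_stack([np.ones(n), urban, depth])
coef, *_ = np.linalg.lstsq(A, nitrate, rcond=None)

# Minimal regression-tree step: the single best split on the urban fraction
def stump_split(x, y):
    best = (None, np.inf)
    for t in np.quantile(x, np.linspace(0.1, 0.9, 17)):
        left, right = y[x <= t], y[x > t]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if sse < best[1]:
            best = (t, sse)
    return best[0]

threshold = stump_split(urban, nitrate)
print("regression coefficients:", np.round(coef, 2))
print("tree split on urban fraction at:", round(threshold, 2))
```

A full regression tree applies this split search recursively to each resulting subset; the stump shows only the core operation.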
Abut, Fatih; Akay, Mehmet Fatih
2015-01-01
Maximal oxygen uptake (VO2max) indicates how many milliliters of oxygen the body can consume in a state of intense exercise per minute. VO2max plays an important role in both sport and medical sciences for different purposes, such as indicating the endurance capacity of athletes or serving as a metric in estimating the disease risk of a person. In general, the direct measurement of VO2max provides the most accurate assessment of aerobic power. However, despite a high level of accuracy, practical limitations associated with the direct measurement of VO2max, such as the requirement of expensive and sophisticated laboratory equipment or trained staff, have led to the development of various regression models for predicting VO2max. Consequently, many studies have been conducted in recent years to predict VO2max of various target audiences, ranging from soccer athletes, nonexpert swimmers, cross-country skiers to healthy-fit adults, teenagers, and children. Numerous prediction models have been developed using different sets of predictor variables and a variety of machine learning and statistical methods, including support vector machine, multilayer perceptron, general regression neural network, and multiple linear regression. The purpose of this study is to give a detailed overview of the data-driven modeling studies for the prediction of VO2max conducted in recent years and to compare the performance of various VO2max prediction models reported in related literature in terms of two well-known metrics, namely, multiple correlation coefficient (R) and standard error of estimate. The survey results reveal that with respect to regression methods used to develop prediction models, support vector machine, in general, shows better performance than other methods, whereas multiple linear regression exhibits the worst performance. PMID:26346869
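The two comparison metrics used in the survey, the correlation coefficient (R) and the standard error of estimate (SEE), are straightforward to compute; the measured and predicted VO2max values below are invented for illustration:

```python
import math

# Hypothetical measured VO2max values (mL/kg/min) and predictions from two models
measured = [42.0, 55.3, 38.1, 61.2, 47.5, 50.0, 44.8, 58.6]
svm_pred = [43.1, 54.0, 39.5, 60.0, 46.8, 51.2, 44.0, 57.9]
mlr_pred = [45.0, 51.0, 42.0, 56.5, 48.9, 48.0, 47.2, 54.0]

def r_and_see(y, yhat):
    n = len(y)
    my, mp = sum(y) / n, sum(yhat) / n
    cov = sum((a - my) * (b - mp) for a, b in zip(y, yhat))
    sy = math.sqrt(sum((a - my)**2 for a in y))
    sp = math.sqrt(sum((b - mp)**2 for b in yhat))
    r = cov / (sy * sp)                # correlation between measured and predicted
    # SEE: root mean squared prediction error with n-2 degrees of freedom
    see = math.sqrt(sum((a - b)**2 for a, b in zip(y, yhat)) / (n - 2))
    return r, see

r_svm, see_svm = r_and_see(measured, svm_pred)
r_mlr, see_mlr = r_and_see(measured, mlr_pred)
print(f"SVM: R={r_svm:.3f}, SEE={see_svm:.2f}")
print(f"MLR: R={r_mlr:.3f}, SEE={see_mlr:.2f}")
```

A better model shows higher R and lower SEE, which is the pattern the survey reports for support vector machines relative to multiple linear regression.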
Molecular Engineering of Vector-Based Oncolytic and Imaging Approaches for Advanced Prostate Cancer
2006-02-01
Principal Investigator: Lily Wu, M.D., Ph.D. From the report abstract: Hormone-refractory and metastatic prostate cancer are not well understood. Better animal models
NASA Astrophysics Data System (ADS)
Combes, Frédéric; Trescher, Maximilian; Piéchon, Frédéric; Fuchs, Jean-Noël
2016-10-01
We develop a theory for the analytic computation of the free energy of band insulators in the presence of a uniform and constant electric field. The two key ingredients are a perturbation-like expression of the Wannier-Stark energy spectrum of electrons and a modified statistical mechanics approach involving a local chemical potential in order to deal with the unbounded spectrum and impose the physically relevant electronic filling. At first order in the field, we recover the result of King-Smith, Vanderbilt, and Resta for the electric polarization in terms of a Zak phase—albeit at finite temperature—and, at second order, deduce a general formula for the electric susceptibility, or equivalently for the dielectric constant. Advantages of our method are the validity of the formalism both at zero and finite temperature and the easy computation of higher order derivatives of the free energy. We verify our findings on two different one-dimensional tight-binding models.
Rapp, J.B.
1991-01-01
Q-mode factor analysis was used to quantitate the distribution of the major aliphatic hydrocarbon (n-alkanes, pristane, phytane) systems in sediments from a variety of marine environments. The compositions of the pure end members of the systems were obtained from factor scores and the distribution of the systems within each sample was obtained from factor loadings. All the data, from the diverse environments sampled (estuarine (San Francisco Bay), fresh-water (San Francisco Peninsula), polar-marine (Antarctica) and geothermal-marine (Gorda Ridge) sediments), were reduced to three major systems: a terrestrial system (mostly high molecular weight aliphatics with odd-numbered-carbon predominance), a mature system (mostly low molecular weight aliphatics without predominance) and a system containing mostly high molecular weight aliphatics with even-numbered-carbon predominance. With this statistical approach, it is possible to assign the percentage contribution from various sources to the observed distribution of aliphatic hydrocarbons in each sediment sample. © 1991.
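The end-member mixing idea behind this analysis can be sketched as follows. Here the end-member compositions are assumed known and only the percentage contributions are recovered by least squares; a real Q-mode factor analysis estimates the end members themselves from the sample matrix. All compositions below are invented:

```python
import numpy as np

# Hypothetical end-member compositions (rows: terrestrial, mature, even-carbon
# systems; columns: relative abundances of four alkane groups, each row sums to 1)
end_members = np.array([
    [0.10, 0.15, 0.55, 0.20],   # terrestrial: high-MW, odd-carbon dominance
    [0.50, 0.35, 0.10, 0.05],   # mature: low-MW, no dominance
    [0.10, 0.20, 0.25, 0.45],   # high-MW, even-carbon dominance
])

# A sediment sample that is a 60/30/10 mixture of the three systems
true_mix = np.array([0.6, 0.3, 0.1])
sample = true_mix @ end_members

# Recover the contributions (the "factor loadings") by least squares
mix, *_ = np.linalg.lstsq(end_members.T, sample, rcond=None)
print("recovered contributions (%):", np.round(100 * mix, 1))
```

With noise-free data and linearly independent end members, the recovered proportions match the true mixture exactly.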
NASA Astrophysics Data System (ADS)
Vathsala, H.; Koolagudi, Shashidhar G.
2017-01-01
In this paper we discuss a data mining application for predicting peninsular Indian summer monsoon rainfall, and propose an algorithm that combines data mining and statistical techniques. We select likely predictors based on association rules that have the highest confidence levels. We then cluster the selected predictors to reduce their dimensions and use cluster membership values for classification. We derive the predictors from local conditions in southern India, including mean sea level pressure, wind speed, and maximum and minimum temperatures. The global condition variables include southern oscillation and Indian Ocean dipole conditions. The algorithm predicts rainfall in five categories: Flood, Excess, Normal, Deficit and Drought. We use closed itemset mining, cluster membership calculations and a multilayer perceptron function in the algorithm to predict monsoon rainfall in peninsular India. Using Indian Institute of Tropical Meteorology data, we found the prediction accuracy of our proposed approach to be exceptionally good.
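The association-rule step of such an algorithm ranks candidate predictors by rule confidence. A toy sketch with invented condition labels and records (not the study's data):

```python
# Hypothetical seasonal records: each set holds discretized local/global
# conditions observed in a year together with that year's rainfall category
records = [
    {"low_mslp", "strong_wind", "warm_iod", "Excess"},
    {"low_mslp", "strong_wind", "Excess"},
    {"high_mslp", "weak_wind", "cool_iod", "Deficit"},
    {"high_mslp", "weak_wind", "Deficit"},
    {"low_mslp", "weak_wind", "Normal"},
    {"high_mslp", "strong_wind", "cool_iod", "Deficit"},
]

def confidence(antecedent, consequent):
    """conf(A -> B) = support(A and B) / support(A)."""
    a = sum(1 for r in records if antecedent <= r)
    ab = sum(1 for r in records if antecedent <= r and consequent in r)
    return ab / a if a else 0.0

# Rank single-condition predictors of 'Deficit' by confidence
items = sorted({"low_mslp", "high_mslp", "strong_wind",
                "weak_wind", "warm_iod", "cool_iod"})
ranked = sorted(items, key=lambda i: confidence({i}, "Deficit"), reverse=True)
print("best predictor of Deficit:", ranked[0],
      "with confidence", confidence({ranked[0]}, "Deficit"))
```

In the full algorithm, the highest-confidence predictors would then be clustered and the cluster memberships fed to a multilayer perceptron.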
The late-type stellar density profile in the Galactic Center: A statistical approach
NASA Astrophysics Data System (ADS)
Chappell, S. N.; Ghez, A. M.; Do, T.; Martinez, G. D.; Yelda, S.; Sitarski, B. N.; Lu, J. R.; Morris, M. R.
2017-01-01
The late-type stellar population in the Galactic Center was first predicted to reside in a dynamically relaxed cusp (power law slope ranging from 3/2 to 7/4). However, other works - which rely on models to correct for projection effects - have suggested a flat distribution instead. The need for this correction is due to the lack of information regarding the line-of-sight distances. With a two decade long baseline in astrometric measurements, we are now able to measure significant projected radial accelerations, six of which are newly reported here, which directly constrain line-of-sight distances. Here we present a statistical approach to take advantage of this information and more accurately constrain the shape of the radial density profile of the late-type stellar population in the Galactic Center.
NASA Astrophysics Data System (ADS)
Durán-Lobato, Matilde; Enguix-González, Alicia; Fernández-Arévalo, Mercedes; Martín-Banderas, Lucía
2013-02-01
Lipid nanoparticles (LNPs) are a promising carrier for all administration routes due to their safety, small size, and high loading of lipophilic compounds. Among the LNP production techniques, the easy scale-up, lack of organic solvents, and short production times of the high-pressure homogenization technique (HPH) make this method stand out. In this study, a statistical analysis was applied to the production of LNP by HPH. Spherical LNPs with mean size ranging from 65 nm to 11.623 μm, negative zeta potential under -30 mV, and smooth surface were produced. Manageable equations based on commonly used parameters in the pharmaceutical field were obtained. The lipid to emulsifier ratio (RL/S) was shown to statistically explain the influence of oil phase and surfactant concentration on final nanoparticle size. In addition, the homogenization pressure was found to ultimately determine LNP size for a given RL/S, while the number of passes applied mainly determined polydispersity. α-Tocopherol was used as a model drug to illustrate release properties of LNP as a function of particle size, which was optimized by the regression models. This study is intended as a first step to optimize production conditions prior to LNP production at both laboratory and industrial scale from an eminently practical approach, based on parameters extensively used in formulation.
Kumar, Ramya; Lahann, Joerg
2016-07-06
The performance of polymer interfaces in biology is governed by a wide spectrum of interfacial properties. With the ultimate goal of identifying design parameters for stem cell culture coatings, we developed a statistical model that describes the dependence of brush properties on surface-initiated polymerization (SIP) parameters. Employing a design of experiments (DOE) approach, we identified operating boundaries within which four gel architecture regimes can be realized, including a new regime of associated brushes in thin films. Our statistical model can accurately predict the brush thickness and the degree of intermolecular association of poly[{2-(methacryloyloxy) ethyl} dimethyl-(3-sulfopropyl) ammonium hydroxide] (PMEDSAH), a previously reported synthetic substrate for feeder-free and xeno-free culture of human embryonic stem cells. DOE-based multifunctional predictions offer a powerful quantitative framework for designing polymer interfaces. For example, model predictions can be used to decrease the critical thickness at which the wettability transition occurs by simply increasing the catalyst quantity from 1 to 3 mol %.
Chemical entity recognition in patents by combining dictionary-based and statistical approaches
Akhondi, Saber A.; Pons, Ewoud; Afzal, Zubair; van Haagen, Herman; Becker, Benedikt F.H.; Hettne, Kristina M.; van Mulligen, Erik M.; Kors, Jan A.
2016-01-01
We describe the development of a chemical entity recognition system and its application in the CHEMDNER-patent track of BioCreative 2015. This community challenge includes a Chemical Entity Mention in Patents (CEMP) recognition task and a Chemical Passage Detection (CPD) classification task. We addressed both tasks by an ensemble system that combines a dictionary-based approach with a statistical one. For this purpose the performance of several lexical resources was assessed using Peregrine, our open-source indexing engine. We combined our dictionary-based results on the patent corpus with the results of tmChem, a chemical recognizer using a conditional random field classifier. To improve the performance of tmChem, we utilized three additional features, viz. part-of-speech tags, lemmas and word-vector clusters. When evaluated on the training data, our final system obtained an F-score of 85.21% for the CEMP task, and an accuracy of 91.53% for the CPD task. On the test set, the best system ranked sixth among 21 teams for CEMP with an F-score of 86.82%, and second among nine teams for CPD with an accuracy of 94.23%. The differences in performance between the best ensemble system and the statistical system separately were small. Database URL: http://biosemantics.org/chemdner-patents PMID:27141091
Drought episodes over Greece as simulated by dynamical and statistical downscaling approaches
NASA Astrophysics Data System (ADS)
Anagnostopoulou, Christina
2016-04-01
Drought over the Greek region is characterized by a strong seasonal cycle and large spatial variability. Dry spells longer than 10 consecutive days mainly characterize the duration and the intensity of Greek drought. Moreover, an increasing trend in the frequency of drought episodes has been observed, especially during the last 20 years of the 20th century. Furthermore, the most recent regional climate models (RCMs) show discrepancies with respect to observed precipitation, while they are able to reproduce the main patterns of atmospheric circulation. In this study, both a statistical and a dynamical downscaling approach are used to quantify drought episodes over Greece by simulating the Standardized Precipitation Index (SPI) for different time steps (3, 6, and 12 months). A statistical downscaling technique based on artificial neural networks (ANN) is employed for the estimation of SPI over Greece, while this drought index is also estimated using the RCM precipitation for the time period 1961-1990. Overall, it was found that the drought characteristics (intensity, duration, and spatial extent) were well reproduced by the regional climate models for long-term drought indices (SPI12), while the ANN simulations were better for short-term drought indices (SPI3).
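The SPI transforms accumulated precipitation into standard-normal units so that drought severity is comparable across stations. A minimal sketch on invented 3-month totals; this uses an empirical (rank-based) transform rather than the gamma-distribution fit of the standard SPI:

```python
from statistics import NormalDist

# Hypothetical 3-month precipitation totals (mm), one per year
totals = [210, 180, 250, 90, 300, 160, 140, 270, 110, 230, 195, 60]

def spi_empirical(values):
    """Empirical SPI: map each total to a plotting-position probability
    (Weibull formula i/(n+1)) and then through the inverse standard normal.
    The standard SPI fits a gamma distribution instead of using ranks."""
    n = len(values)
    rank = {v: r for r, v in enumerate(sorted(values), start=1)}
    nd = NormalDist()
    return [nd.inv_cdf(rank[v] / (n + 1)) for v in values]

spi = spi_empirical(totals)
for t, s in zip(totals, spi):
    label = "drought" if s <= -1.0 else "normal/wet"
    print(f"{t:4d} mm  SPI = {s:+.2f}  ({label})")
```

Values of SPI at or below -1 are conventionally read as moderate drought, which is the classification compared between the RCM and ANN simulations above.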
A risk-based approach to management of leachables utilizing statistical analysis of extractables.
Stults, Cheryl L M; Mikl, Jaromir; Whelehan, Oliver; Morrical, Bradley; Duffield, William; Nagao, Lee M
2015-04-01
To incorporate quality by design concepts into the management of leachables, an emphasis is often put on understanding the extractable profile for the materials of construction for manufacturing disposables, container-closure, or delivery systems. Component manufacturing processes may also impact the extractable profile. An approach was developed to (1) identify critical components that may be sources of leachables, (2) enable an understanding of manufacturing process factors that affect extractable profiles, (3) determine if quantitative models can be developed that predict the effect of those key factors, and (4) evaluate the practical impact of the key factors on the product. A risk evaluation for an inhalation product identified injection molding as a key process. Designed experiments were performed to evaluate the impact of molding process parameters on the extractable profile from an ABS inhaler component. Statistical analysis of the resulting GC chromatographic profiles identified processing factors that were correlated with peak levels in the extractable profiles. The combination of statistically significant molding process parameters was different for different types of extractable compounds. ANOVA models were used to obtain optimal process settings and predict extractable levels for a selected number of compounds. The proposed paradigm may be applied to evaluate the impact of material composition and processing parameters on extractable profiles and utilized to manage product leachables early in the development process and throughout the product lifecycle.
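The designed-experiment logic behind relating molding parameters to extractable levels can be illustrated with a toy 2^2 factorial; the factors, levels, and responses below are invented, not the study's data:

```python
# Hypothetical 2^2 factorial molding experiment: factors are barrel temperature
# and hold pressure (coded -1/+1); the response is an extractable peak area
runs = [
    # (temp, pressure, peak_area)
    (-1, -1, 10.2),
    (+1, -1, 14.8),
    (-1, +1, 11.1),
    (+1, +1, 16.3),
]

def main_effect(factor_index):
    """Average response at the high level minus average at the low level."""
    hi = [y for *x, y in runs if x[factor_index] == +1]
    lo = [y for *x, y in runs if x[factor_index] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

temp_effect = main_effect(0)       # change in peak area from low to high temp
pressure_effect = main_effect(1)
print(f"temperature effect: {temp_effect:+.2f}")
print(f"pressure effect:    {pressure_effect:+.2f}")
```

An ANOVA model adds significance testing on top of these effect estimates, which is how the study identified which process parameters were correlated with peak levels.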
Generalized Deam-Edwards approach to the statistical mechanics of randomly crosslinked systems
NASA Astrophysics Data System (ADS)
Xing, Xiangjun; Lu, Bing-Sui; Ye, Fangfu; Goldbart, Paul M.
2013-08-01
We address the statistical mechanics of randomly and permanently crosslinked networks. We develop a theoretical framework (vulcanization theory) which can be used to systematically analyze the correlation between the statistical properties of random networks and their histories of formation. Generalizing the original idea of Deam and Edwards, we consider an instantaneous crosslinking process, where all crosslinkers (modeled as Gaussian springs) are introduced randomly at once in an equilibrium liquid state, referred to as the preparation state. The probability that two functional sites are crosslinked by a spring decreases exponentially with their distance squared. After formally averaging over network connectivity, we obtain an effective theory with all degrees of freedom replicated 1 + n times. Two thermodynamic ensembles, the preparation ensemble and the measurement ensemble, naturally appear in this theory. The former describes the thermodynamic fluctuations in the state of preparation, while the latter describes the thermodynamic fluctuations in the state of measurement. We classify various correlation functions and discuss their physical significances. In particular, the memory correlation functions characterize how the properties of networks depend on their method of preparation, and are the hallmark properties of all randomly crosslinked materials. We clarify the essential difference between our approach and that of Deam-Edwards, and discuss the saddle-point order parameter and its physical significance. Finally, we discuss the connection between the saddle-point approximation of vulcanization theory and the classical theory of rubber elasticity, as well as the neo-classical theory of nematic elastomers.
Advanced Stirling Convertor Dynamic Test Approach and Results
NASA Technical Reports Server (NTRS)
Meer, David W.; Hill, Dennis; Ursic, Joseph
2009-01-01
The U.S. Department of Energy (DOE), Lockheed Martin (LM), and NASA Glenn Research Center (GRC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. As part of the extended operation testing of this power system, the Advanced Stirling Convertors (ASC) at NASA John H. Glenn Research Center undergo a vibration test sequence intended to simulate the vibration history of an ASC used in an ASRG for a space mission. This sequence includes testing at Workmanship and Flight Acceptance levels interspersed with periods of extended operation to simulate pre- and post-fueling. The final step in the test sequence utilizes additional testing at Flight Acceptance levels to simulate launch. To better replicate the acceleration profile seen by an ASC incorporated into an ASRG, the input spectra used in testing the convertors were modified based on dynamic testing of the ASRG Engineering Unit (ASRG-EU) at Lockheed Martin. This paper presents the vibration test plan for current and future ASC units, including the modified input spectra, and the results of recent tests using these spectra. The test results include data from several accelerometers mounted on the convertors as well as the piston position and output power variables.
Strategists and Non-Strategists in Austrian Enterprises—Statistical Approaches
NASA Astrophysics Data System (ADS)
Duller, Christine
2011-09-01
The purpose of this work is to determine with a modern statistical approach which variables can indicate whether an arbitrary enterprise uses strategic management as basic business concept. "Strategic management is an ongoing process that evaluates and controls the business and the industries in which the company is involved; assesses its competitors and sets goals and strategies to meet all existing and potential competitors; and then reassesses each strategy annually or quarterly (i.e. regularly) to determine how it has been implemented and whether it has succeeded or needs replacement by a new strategy to meet changed circumstances, new technology, new competitors, a new economic environment or a new social, financial or political environment." [12] In Austria 70% to 80% of all enterprises can be classified as family firms. In the literature the empirically untested hypothesis can be found that family firms tend to have less formalised management accounting systems than non-family enterprises. But it is unknown whether the use of strategic management accounting systems is influenced more by the fact of structure (family or non-family enterprise) or by the effect of size (number of employees). Therefore, the goal is to split up enterprises into two subgroups, namely strategists and non-strategists, and to get information on the variables of influence (size, structure, branches, etc.). Two statistical approaches are used: on the one hand, a classical cluster analysis is implemented to form the two subgroups; on the other hand, a latent class model is built for this problem. After a description of the theoretical background, first results of both strategies are compared.
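A toy version of the clustering step, splitting hypothetical enterprises into two groups by size alone; the study itself clusters on several variables and compares the result with a latent class model:

```python
# Hypothetical enterprise sizes (number of employees); all values invented
employees = [12, 18, 25, 30, 220, 310, 450, 520, 15, 40, 600, 280]

def two_means_1d(xs, iters=20):
    """Minimal 1-D k-means with k = 2, initialized at the extremes."""
    c1, c2 = min(xs), max(xs)
    for _ in range(iters):
        g1 = [x for x in xs if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in xs if abs(x - c1) > abs(x - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted([c1, c2]), g1, g2

(centroid_small, centroid_large), small_group, large_group = two_means_1d(employees)
print(f"small-firm centroid: {centroid_small:.1f} employees ({len(small_group)} firms)")
print(f"large-firm centroid: {centroid_large:.1f} employees ({len(large_group)} firms)")
```

Unlike k-means, a latent class model assigns each enterprise a membership probability rather than a hard label, which is one reason the paper compares the two approaches.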
NASA Astrophysics Data System (ADS)
Donner, Reik; Passow, Christian
2016-04-01
The appropriate statistical evaluation of recent changes in the occurrence of hydro-meteorological extreme events is of key importance for identifying trends in the behavior of climate extremes and associated impacts on ecosystems or technological infrastructures, as well as for validating the capability of models used for future climate scenarios to correctly represent such trends in the past decades. In this context, most recent studies have utilized conceptual approaches from extreme value theory based on parametric descriptions of the probability distribution functions of extremes. However, the application of such methods is faced with a few fundamental challenges: (1) The application of the most widely used approaches of generalized extreme value (GEV) or generalized Pareto (GP) distributions is based on assumptions the validity of which can often be hardly proven. (2) Due to the differentiation between extreme and normal values (peaks-over-threshold, block maxima), much information on the distribution of the variable of interest is not used at all by such methods, implying that the sample size of values effectively used for estimating the parameters of the GEV or GP distributions is largely limited for typical lengths of observational series. (3) The problem of parameter estimation is further enhanced by the variety of possible statistical models capturing different aspects of temporal changes of extremes like seasonality or possibly non-linear trends. Reliably identifying the most appropriate model is a challenging task for the lengths of typical observational series. As an alternative to approaches based on extreme value theory, there have been a few attempts to transfer quantile regression approaches to statistically describing the time-dependence of climate extremes. In this context, a value exceeding a certain instantaneous percentile of the time-dependent probability distribution function of the data under study is considered to be an extreme event. In
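The "instantaneous percentile" idea can be sketched with a moving empirical quantile on a synthetic series; a full analysis would fit a quantile regression model instead of a sliding window, and the series below is invented:

```python
from statistics import quantiles

# Synthetic yearly anomaly series with a slow upward trend plus deterministic
# pseudo-noise (values invented for illustration)
series = [round(0.05 * t + ((t * 37) % 11 - 5) * 0.3, 2) for t in range(40)]

def moving_quantile(xs, window, q):
    """Empirical time-dependent quantile: the q-quantile inside a sliding
    window. Values above this curve count as extremes relative to 'their
    own' local climate, not relative to the full record."""
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - window // 2), min(len(xs), i + window // 2 + 1)
        chunk = sorted(xs[lo:hi])
        out.append(quantiles(chunk, n=100)[int(q * 100) - 1])
    return out

q90 = moving_quantile(series, window=15, q=0.90)
extremes = [t for t, (x, thr) in enumerate(zip(series, q90)) if x > thr]
print("time steps flagged as extreme:", extremes)
```

Because the threshold moves with the distribution, a trend in the mean does not by itself inflate the count of flagged extremes, which is the property that makes quantile-based definitions attractive here.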
Augustine, Swinburne A J; Simmons, Kaneatra J; Eason, Tarsha N; Griffin, Shannon M; Curioso, Clarissa L; Wymer, Larry J; Fout, G Shay; Grimm, Ann C; Oshima, Kevin H; Dufour, Al
2015-10-01
There are numerous pathogens that can be transmitted through water. Identifying and understanding the routes and magnitude of exposure or infection to these microbial contaminants are critical to assessing and mitigating risk. Conventional approaches of studying immunological responses to exposure or infection such as Enzyme-Linked Immunosorbent Assays (ELISAs) and other monoplex antibody-based immunoassays can be very costly, laborious, and consume large quantities of patient sample. A major limitation of these approaches is that they can only be used to measure one analyte at a time. Multiplex immunoassays provide the ability to study multiple pathogens simultaneously in microliter volumes of samples. However, there are several challenges that must be addressed when developing these multiplex immunoassays such as selection of specific antigens and antibodies, cross-reactivity, calibration, protein-reagent interferences, and the need for rigorous optimization of protein concentrations. In this study, a Design of Experiments (DOE) approach was used to optimize reagent concentrations for coupling selected antigens to Luminex™ xMAP microspheres for use in an indirect capture, multiplex immunoassay to detect human exposure or infection from pathogens that are potentially transmitted through water. Results from Helicobacter pylori, Campylobacter jejuni, Escherichia coli O157:H7, and Salmonella typhimurium singleplexes were used to determine the mean concentrations that would be applied to the multiplex assay. Cut-offs to differentiate between exposed and non-exposed individuals were determined using finite mixture modeling (FMM). The statistical approaches developed facilitated the detection of Immunoglobulin G (IgG) antibodies to H. pylori, C. jejuni, Toxoplasma gondii, hepatitis A virus, rotavirus and noroviruses (VA387 and Norwalk strains) in fifty-four diagnostically characterized plasma samples. Of the characterized samples, the detection rate was 87.5% for H
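The finite-mixture cut-off idea can be sketched with a two-component Gaussian mixture fitted by EM on synthetic data; the fluorescence values, component parameters, and sample sizes are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical log-transformed fluorescence intensities: a non-exposed (low)
# population and an exposed (high) population
mfi = np.concatenate([rng.normal(2.0, 0.3, 150), rng.normal(4.0, 0.4, 50)])

def normpdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

# EM for a two-component Gaussian mixture
w = np.array([0.5, 0.5]); m = np.array([1.5, 4.5]); s = np.array([0.5, 0.5])
for _ in range(200):
    p0 = w[0] * normpdf(mfi, m[0], s[0])
    p1 = w[1] * normpdf(mfi, m[1], s[1])
    r1 = p1 / (p0 + p1)           # posterior membership in 'exposed' component
    r0 = 1.0 - r1
    w = np.array([r0.mean(), r1.mean()])
    m = np.array([(r0 * mfi).sum() / r0.sum(), (r1 * mfi).sum() / r1.sum()])
    s = np.array([np.sqrt((r0 * (mfi - m[0])**2).sum() / r0.sum()),
                  np.sqrt((r1 * (mfi - m[1])**2).sum() / r1.sum())])

# Cut-off: the point between the component means with 50/50 posterior membership
grid = np.linspace(m[0], m[1], 2001)
num = w[1] * normpdf(grid, m[1], s[1])
post1 = num / (w[0] * normpdf(grid, m[0], s[0]) + num)
cutoff = grid[np.argmin(np.abs(post1 - 0.5))]
print(f"component means: {m[0]:.2f}, {m[1]:.2f}; cut-off: {cutoff:.2f}")
```

Samples above the cut-off would be classified as exposed; the mixture approach derives this threshold from the data themselves rather than from a fixed fold-over-background rule.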
NASA Astrophysics Data System (ADS)
Brauchler, R.; Cheng, J.; Dietrich, P.; Everett, M.; Johnson, B.; Sauter, M.
2005-12-01
Knowledge about the spatial variations in hydraulic properties plays an important role in controlling solute movement in saturated flow systems. Traditional hydrogeological approaches appear to have difficulties providing high resolution parameter estimates. Thus, we have decided to develop an approach coupling the two existing hydraulic tomographic approaches: a) inversion of the drawdown as a function of time (amplitude inversion) and b) inversion of travel times of the pressure disturbance. The advantages of hydraulic travel time tomography are its high structural resolution and computational efficiency. However, travel times are primarily controlled by the aquifer diffusivity, making it difficult to determine hydraulic conductivity and storage separately. Amplitude inversion, on the other hand, is able to determine hydraulic conductivity and storage separately, but its heavy computational burden is often a shortcoming, especially for larger data sets. Our coupled inversion approach was developed and tested using synthetic data sets. The data base of the inversion comprises simulated slug tests, in which the positions of the sources (injection ports), isolated with packers, are varied between the tests. The first step was the inversion of several characteristic travel times (e.g. early, intermediate and late travel times) in order to determine the diffusivity distribution. Secondly, the resulting diffusivity distributions were classified into homogeneous groups in order to differentiate between hydrogeological units characterized by a significant diffusivity contrast. The classification was performed using multivariate statistics. With a numerical flow model and an automatic parameter estimator, the amplitude inversion was performed in a final step. The classified diffusivity distribution is an excellent starting model for the amplitude inversion and greatly reduces the calculation time. The final amplitude inversion overcomes
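The travel-time side of such a coupled inversion rests on a relation between peak arrival time and diffusivity. A common small-time approximation, assumed here for illustration with invented observations, is t_peak ≈ r²/(6D):

```python
# Hypothetical cross-well slug-test observations: source-receiver distance
# r (m) and arrival time of the peak pressure response t_peak (s)
observations = [
    (4.0, 1.2),
    (6.0, 2.9),
    (8.0, 5.0),
]

# Invert the assumed relation t_peak ≈ r^2 / (6 D) for the diffusivity D
diffusivities = [r * r / (6.0 * t) for r, t in observations]
for (r, t), D in zip(observations, diffusivities):
    print(f"r = {r:.1f} m, t_peak = {t:.1f} s  ->  D ≈ {D:.2f} m^2/s")
```

Consistent diffusivity estimates along many crossing ray paths are what the tomographic inversion then assembles into a spatial diffusivity image; amplitude inversion is needed afterwards to split D into conductivity and storage.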
Advanced numerical methods and software approaches for semiconductor device simulation
CAREY,GRAHAM F.; PARDHANANI,A.L.; BOVA,STEVEN W.
2000-03-23
In this article the authors concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG), entropy variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review, and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. Numerical examples from recent research tests with some of the methods are included. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area, and the authors emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, they briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as that in the SANDIA National Laboratory framework SIERRA.
Recent advances in bioprinting techniques: approaches, applications and future prospects.
Li, Jipeng; Chen, Mingjiao; Fan, Xianqun; Zhou, Huifang
2016-09-20
Bioprinting technology shows potential in tissue engineering for the fabrication of scaffolds, cells, tissues and organs reproducibly and with high accuracy. Bioprinting technologies are mainly divided into three categories, inkjet-based bioprinting, pressure-assisted bioprinting and laser-assisted bioprinting, based on their underlying printing principles. These various printing technologies have their advantages and limitations. Bioprinting utilizes biomaterials, cells or cell factors as a "bioink" to fabricate prospective tissue structures. Biomaterial parameters such as biocompatibility, cell viability and the cellular microenvironment strongly influence the printed product. Various printing technologies have been investigated, and great progress has been made in printing various types of tissue, including vasculature, heart, bone, cartilage, skin and liver. This review introduces basic principles and key aspects of some frequently used printing technologies. We focus on recent advances in three-dimensional printing applications, current challenges and future directions.
NASA Astrophysics Data System (ADS)
Zakaria, Chahnez; Curé, Olivier; Salzano, Gabriella; Smaïli, Kamel
In Computer Supported Cooperative Work (CSCW), it is crucial for project leaders to detect conflicting situations as early as possible. Generally, this task is performed manually by studying a set of documents exchanged between team members. In this paper, we propose a full-fledged automatic solution that identifies documents, subjects and actors involved in relational conflicts. Our approach detects conflicts in emails, probably the most popular type of document in CSCW, but the methods used can handle other text-based documents. These methods rely on the combination of statistical and ontological operations. The proposed solution is decomposed into several steps: (i) we enrich a simple negative emotion ontology with terms occurring in the corpus of emails, (ii) we categorize each conflicting email according to the concepts of this ontology and (iii) we identify emails, subjects and team members involved in conflicting emails using possibilistic description logic and a set of proposed measures. Each of these steps is evaluated and validated on concrete examples. Moreover, this approach's framework is generic and can easily be adapted to domains other than conflicts, e.g. security issues, and extended with operations making use of our proposed set of measures.
Feron, Gilles; Ayed, Charfedinne; Qannari, El Mostafa; Courcoux, Philippe; Laboure, Hélène; Guichard, Elisabeth
2014-01-01
For human beings, the mouth is the first organ to perceive food and the different signalling events associated with food breakdown. These events are very complex and, as such, their description necessitates combining different data sets. This study proposed an integrated approach to understand the relative contribution of the main food oral processing events involved in aroma release during cheese consumption. In vivo aroma release was monitored in forty-eight subjects who were asked to eat four different model cheeses varying in fat content and firmness and flavoured with ethyl propanoate and nonan-2-one. A multiblock partial least squares regression was performed to explain aroma release from the different physiological data sets (masticatory behaviour, bolus rheology, saliva composition and flux, mouth coating and bolus moistening). This statistical approach showed that aroma release was mostly explained by masticatory behaviour whatever the cheese and the aroma, with a specific influence of mean amplitude on aroma release after swallowing. Aroma release from the firmer cheeses was explained mainly by bolus rheology. The persistence of hydrophobic compounds in the breath was mainly explained by bolus spreadability, in close relation with bolus moistening. Resting saliva contributed little to the analysis, whereas the composition of stimulated saliva was negatively correlated with aroma release, mostly for soft cheeses, when the correlation was significant.
A U-Statistic-based random Forest approach for genetic association study.
Li, Ming; Peng, Ruo-Sin; Wei, Changshuai; Lu, Qing
2012-06-01
Variations in complex traits are influenced by multiple genetic variants, environmental risk factors, and their interactions. Though substantial progress has been made in identifying single genetic variants associated with complex traits, detecting gene-gene and gene-environment interactions remains a great challenge. When a large number of genetic variants and environmental risk factors are involved, searching for interactions is limited to pair-wise interactions due to the exponentially increased feature space and computational intensity. Alternatively, recursive partitioning approaches, such as random forests, have gained popularity in high-dimensional genetic association studies. In this article, we propose a U-Statistic-based random forest approach, referred to as Forest U-Test, for genetic association studies with quantitative traits. Through simulation studies, we showed that the Forest U-Test outperformed existing methods. The proposed method was also applied to study Cannabis Dependence (CD), using three independent datasets from the Study of Addiction: Genetics and Environment. A significant joint association was detected with an empirical p-value less than 0.001. The finding was also replicated in two independent datasets with p-values of 5.93e-19 and 4.70e-17, respectively.
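The core of any U-statistic is an average of a symmetric kernel over all unordered pairs of observations. The sketch below shows a generic order-2 U-statistic with a toy similarity kernel; the kernel actually used in Forest U-Test is not specified in the abstract, so both the kernel and the data here are purely illustrative:

```python
from itertools import combinations

def u_statistic(samples, kernel):
    """Order-2 U-statistic: the average of a symmetric kernel
    over all unordered pairs of observations."""
    pairs = list(combinations(samples, 2))
    return sum(kernel(a, b) for a, b in pairs) / len(pairs)

# Toy kernel: (negative) distance between two quantitative trait values.
def h(a, b):
    return -abs(a - b)

# pairs (1,2), (1,4), (2,4) give kernel values -1, -3, -2; mean is -2.0
u = u_statistic([1.0, 2.0, 4.0], h)
```

In the genetic setting, the kernel would compare phenotypic similarity of pairs of individuals within terminal nodes of the forest, but that machinery is beyond this sketch.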
Advanced Lung Cancer Screening: An Individualized Molecular Nanotechnology Approach
2016-03-01
Only fragments of this report abstract survive extraction. The recoverable content: an improved panel of genes hypermethylated in lung cancer, with extraordinarily high specificity and sensitivity, was combined with improved methods of Methylation-on-Beads (MOB) processing of circulating DNA. Recoverable figure captions: Figure 2, overview of the Methylation-on-Beads (MOB) process (circulating DNA capture, magnetic decantation, and removal of supernatant); Figure 3, β-Actin Ct values for MOB-processed versus phenol-chloroform-extracted and traditionally processed samples.
Synthetic Lethality as a Targeted Approach to Advanced Prostate Cancer
2014-05-01
Only fragments of this report abstract survive extraction. The recoverable content describes the synthetic chemistry: oxidation of the secondary alcohol furnishes ketone 6; coupling of the Molander-type trifluoroborate 7 (the "staurosporine" component) provided KAM1; polar groups will be added to improve solubility and metabolic stability (Fig. 6), using the synthetic approaches noted in Scheme 2; the drug is given in 100% DMSO and precipitates whenever any aqueous solvent is added, including ethanol, and appeared to precipitate when given i.p.
NASA Technical Reports Server (NTRS)
Yeh, Leehwa
1993-01-01
The phase-space-picture approach to quantum non-equilibrium statistical mechanics via the characteristic function of infinite-mode squeezed coherent states is introduced. We use quantum Brownian motion as an example to show how this approach provides an interesting geometrical interpretation of quantum non-equilibrium phenomena.
Advancing Partnerships Towards an Integrated Approach to Oil Spill Response
NASA Astrophysics Data System (ADS)
Green, D. S.; Stough, T.; Gallegos, S. C.; Leifer, I.; Murray, J. J.; Streett, D.
2015-12-01
Oil spills can cause enormous ecological and economic devastation, necessitating application of the best science and technology available. Remote sensing plays a growing, critical role in the detection and monitoring of oil spills, as well as in validating remote sensing oil spill products. The FOSTERRS (Federal Oil Science Team for Emergency Response Remote Sensing) interagency working group seeks to ensure that during an oil spill, remote sensing assets (satellite/aircraft/instruments) and analysis techniques are quickly, effectively, appropriately, and seamlessly available to oil spill responders. Yet significant challenges remain: spilled oils span a vast range of chemical properties, spills may occur anywhere from the Tropics to the Arctic, and algorithms and scientific understanding must advance to keep pace with technology. Thus, FOSTERRS promotes enabling scientific discovery to ensure robust utilization of available technology, as well as identifying technologies moving up the TRL (Technology Readiness Level) scale. A recent FOSTERRS-facilitated support activity was the deployment of AVIRIS NG (Airborne Visual Infrared Imaging Spectrometer - Next Generation) during the Santa Barbara oil spill to validate the potential of airborne hyperspectral imaging to map beach tar coverage in real time, supported by surface validation data. Many developing airborne technologies have the potential to transition to space-based platforms, providing global readiness.
Advanced Modular Power Approach to Affordable, Supportable Space Systems
NASA Technical Reports Server (NTRS)
Oeftering, Richard C.; Kimnach, Greg L.; Fincannon, James; Mckissock, Barbara I.; Loyselle, Patricia L.; Wong, Edmond
2013-01-01
Recent studies of missions to the Moon, Mars and Near Earth Asteroids (NEA) indicate that these missions often involve several distinct, separately launched vehicles that must ultimately be integrated together in-flight and operate as one unit. Therefore, it is important to see these vehicles as elements of a larger segmented spacecraft rather than separate spacecraft flying in formation. The evolution of large multi-vehicle exploration architectures creates the need (and opportunity) to establish a global power architecture that is common across all vehicles. The Advanced Exploration Systems (AES) Modular Power System (AMPS) project managed by NASA Glenn Research Center (GRC) is aimed at establishing the modular power system architecture that will enable power systems to be built from a common set of modular building blocks. The project is developing, demonstrating and evaluating key modular power technologies that are expected to minimize non-recurring development costs and reduce recurring integration costs, as well as mission operational and support costs. Further, modular power is expected to enhance mission flexibility, vehicle reliability, scalability and overall mission supportability. The AMPS project not only supports multi-vehicle architectures but should enable multi-mission capability as well. The AMPS technology development includes near-term demonstrations with developmental prototype vehicles and field demonstrations. These operational demonstrations not only serve as a means of evaluating modular technology but also provide feedback to developers to assure progress toward a truly flexible and operationally supportable modular power architecture.
Advances in a distributed approach for ocean model data interoperability
Signell, Richard P.; Snowden, Derrick P.
2014-01-01
An infrastructure for earth science data is emerging across the globe based on common data models and web services. As we evolve from custom file formats and web sites to standards-based web services and tools, data is becoming easier to distribute, find and retrieve, leaving more time for science. We describe recent advances that make it easier for ocean model providers to share their data, and for users to search, access, analyze and visualize ocean data using MATLAB® and Python®. These include a technique for modelers to create aggregated, Climate and Forecast (CF) metadata convention datasets from collections of non-standard Network Common Data Form (NetCDF) output files, the capability to remotely access data from CF-1.6-compliant NetCDF files using the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), a metadata standard for unstructured grid model output (UGRID), and tools that utilize both CF and UGRID standards to allow interoperable data search, browse and access. We use examples from the U.S. Integrated Ocean Observing System (IOOS®) Coastal and Ocean Modeling Testbed, a project in which modelers using both structured and unstructured grid model output needed to share their results, to compare their results with other models, and to compare models with observed data. The same techniques used here for ocean modeling output can be applied to atmospheric and climate model output, remote sensing data, digital terrain and bathymetric data.
Additive Biomanufacturing: An Advanced Approach for Periodontal Tissue Regeneration.
Carter, Sarah-Sophia D; Costa, Pedro F; Vaquette, Cedryck; Ivanovski, Saso; Hutmacher, Dietmar W; Malda, Jos
2017-01-01
Periodontitis is defined as a chronic inflammatory condition, characterized by destruction of the periodontium, composed of hard (i.e. alveolar bone and cementum) and soft tissues (i.e. gingiva and periodontal ligament) surrounding and supporting the teeth. In severe cases, reduced periodontal support can lead to tooth loss, which requires tissue augmentation or procedures that initiate a repair, yet ideally a regenerative response. However, mimicking the three-dimensional complexity and functional integration of the different tissue components via scaffold- and/or matrix-based guided tissue engineering represents a great challenge. Additive biomanufacturing, a manufacturing method in which objects are designed and fabricated in a layer-by-layer manner, has allowed a paradigm shift in the current manufacturing of medical devices and implants. This shift from design-to-manufacture to manufacture-to-design, seen from a translational research point of view, provides the biomedical engineering and periodontology communities a technology with the potential to achieve tissue regeneration instead of repair. In this review, the focus is put on additively biomanufactured scaffolds for periodontal applications. Besides a general overview of the concept of additive biomanufacturing within this field, different developed scaffold designs are described. To conclude, future directions regarding advanced biomaterials and additive biomanufacturing technologies for applications in regenerative periodontology are highlighted.
Engaging Patients with Advance Directives using an Information Visualization Approach
Woollen, Janet; Bakken, Suzanne
2016-01-01
Despite the benefits of advance directives (AD) to both patients and care providers, they are often not completed due to lack of patient awareness. The purpose of this paper is to advocate for the creation and use of an innovative information visualization (infovisual) as a health communication tool aimed at improving AD dissemination and engagement. The infovisual would promote AD awareness by engaging patients to learn about their options and would inspire contemplation and conversation regarding patients' end-of-life (EOL) journey. An infovisual may be able to convey insights that are often expressed in words but are much more powerfully communicated by example. Furthermore, an infovisual could facilitate vivid understanding of options and inspire the beginning of often-difficult conversations between care providers, patients and loved ones. It may also save clinicians' time, as care providers may be able to spend less time explaining details of EOL care options. Use of an infovisual could assist in ensuring a well-planned EOL. PMID:26273950
Locally advanced rectal cancer: the importance of a multidisciplinary approach.
Berardi, Rossana; Maccaroni, Elena; Onofri, Azzurra; Morgese, Francesca; Torniai, Mariangela; Tiberi, Michela; Ferrini, Consuelo; Cascinu, Stefano
2014-12-14
Rectal cancer accounts for a substantial share of colorectal cancer cases, with a mortality of 4-10/100000 per year. The development of locoregional recurrences and the occurrence of distant metastases both influence the prognosis of these patients. In the last two decades, new multimodality strategies have improved the prognosis of locally advanced rectal cancer, with a significant reduction in local relapse and an increase in overall survival. Radical surgery remains the principal curative treatment, and the introduction of total mesorectal excision has significantly reduced local recurrence rates. Neoadjuvant treatment, delivered before surgery, has also achieved improved local control and an increased sphincter preservation rate in low-lying tumors, with acceptable acute and late toxicity. This review describes the multidisciplinary management of rectal cancer, focusing on the effectiveness of neoadjuvant chemoradiotherapy and of post-operative adjuvant chemotherapy both in standard combined-modality treatment programs and in ongoing research to improve these regimens.
Advances in Assays and Analytical Approaches for Botulinum Toxin Detection
Grate, Jay W.; Ozanich, Richard M.; Warner, Marvin G.; Bruckner-Lea, Cindy J.; Marks, James D.
2010-08-04
Methods to detect botulinum toxin, the most poisonous substance known, are reviewed. Current assays are being developed with two main objectives in mind: 1) to obtain sufficiently low detection limits to replace the mouse bioassay with an in vitro assay, and 2) to develop rapid assays for screening purposes that are as sensitive as possible while requiring an hour or less to process the sample and obtain the result. This review emphasizes the diverse analytical approaches and devices that have been developed over the last decade, while also briefly reviewing representative older immunoassays to provide background and context.
Feyissa, Daniel D.; Aher, Yogesh D.; Engidawork, Ephrem; Höger, Harald; Lubec, Gert; Korz, Volker
2017-01-01
Animal models for anxiety, depressive-like and cognitive diseases or aging often involve testing of subjects in behavioral test batteries. The large number of test variables, with different mean variations and within- and between-test correlations, often constitutes a significant problem in determining the essential variables for assessing behavioral patterns and their variation in individual animals, as well as the appropriate statistical treatment. Therefore, we applied a multivariate approach (principal component analysis) to analyse the behavioral data of 162 male adult Sprague-Dawley rats that underwent a behavioral test battery including commonly used tests for spatial learning and memory (holeboard) and different behavioral patterns (open field, elevated plus maze, forced swim test) as well as for motor abilities (Rota rod). The high-dimensional behavioral results were reduced to fewer components associated with spatial cognition, general activity, anxiety- and depression-like behavior, and motor ability. The loading scores of individual rats on these different components allow an assessment of the distribution of individual features in a population of animals. The reduced number of components can also be used for statistical calculations, such as appropriate sample sizes for valid discriminations between experimental groups, which otherwise must be done on each variable. Because the animals were intact, untreated and experimentally naïve, the results reflect trait patterns of behavior and thus individuality. The distribution of animals with high or low levels of anxiety, depressive-like behavior, general activity and cognitive features in a local population provides information on the probability of their appearance in experimental samples and thus may help to avoid biases. However, such an analysis initially requires a large cohort of animals in order to gain a valid assessment. PMID:28261069
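The dimension-reduction step described above can be sketched with a minimal principal component analysis on a synthetic "test battery": standardize each variable, eigendecompose the correlation matrix, and project each animal onto the components to obtain its loading scores. The data and variable roles below are invented for illustration only:

```python
import numpy as np

def pca(data):
    """PCA of an (animals x variables) matrix: standardize each variable,
    eigendecompose the correlation matrix, and return eigenvalues and
    eigenvectors sorted by explained variance, plus per-animal scores."""
    z = (data - data.mean(axis=0)) / data.std(axis=0)
    corr = np.cov(z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]   # largest variance first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    scores = z @ eigvecs                # loading scores per animal
    return eigvals, eigvecs, scores

# Toy battery: two correlated "anxiety" measures plus one independent one.
rng = np.random.default_rng(0)
a = rng.normal(size=200)
data = np.column_stack([a,
                        a + 0.1 * rng.normal(size=200),
                        rng.normal(size=200)])
eigvals, eigvecs, scores = pca(data)
```

Here the two correlated measures collapse onto a single dominant component (eigenvalue near 2), mirroring how the battery's many variables reduce to a few interpretable trait components.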
Advances in the flux-coordinate independent approach
NASA Astrophysics Data System (ADS)
Stegmeir, Andreas; Maj, Omar; Coster, David; Lackner, Karl; Held, Markus; Wiesenberger, Matthias
2017-04-01
The flux-coordinate independent approach (FCI) offers a promising solution to deal with a separatrix and X-point(s) in diverted tokamaks. Whereas the discretisation of perpendicular operators (with respect to the magnetic field) is straightforward, the major complexity lies in the discretisation of parallel operators, for which field line tracing and interpolation are employed. A discrete version of the parallel diffusion operator was proposed in Stegmeir et al. (2016), which maintains the self-adjointness property on the discrete level and exhibits only very low numerical perpendicular diffusion/pollution. However, in situations where the field line map is strongly distorted this scheme revealed its limits. Moreover, the appearance of small-scale corrugations deteriorated the convergence order with respect to spatial resolution (Held et al., 2016). In this paper we present an extension to the scheme in which the parallel gradient is reformulated via a combination of integration and interpolation. It is shown that the resulting scheme finally combines many good numerical properties: it is self-adjoint on the discrete level, it has very low numerical perpendicular diffusion, it can cope with strongly distorted maps and it exhibits optimal convergence. Another subtle issue in the FCI approach is the treatment of boundary conditions, especially where magnetic field lines intersect material plates. We present a solution based on ghost points, whose values can be set in a flexible way according to a Taylor expansion around the boundary.
Sadyś, Magdalena; Skjøth, Carsten Ambelas; Kennedy, Roy
2016-04-01
High concentration levels of Ganoderma spp. spores were observed in Worcester, UK, during 2006-2010. These basidiospores are known to cause sensitization due to their allergen content and small dimensions, which enable them to penetrate the lower part of the human respiratory tract. Establishing a link between symptoms of sensitization to Ganoderma spp. and other basidiospores is challenging due to the lack of information regarding spore concentration in the air. Hence, aerobiological monitoring should be conducted and, if possible, extended with the construction of forecast models. The daily mean concentration of allergenic Ganoderma spp. spores in the atmosphere of Worcester was measured using a 7-day volumetric spore sampler over five consecutive years. The relationships between the presence of spores in the air and weather parameters were examined. Forecast models were constructed for Ganoderma spp. spores using advanced statistical techniques, i.e. multivariate regression trees and artificial neural networks. Dew point temperature, along with maximum temperature, was the most important factor influencing the presence of spores in the air of Worcester. Based on these two major factors and several others of lesser importance, thresholds for certain levels of fungal spore concentration could be designated: low (0-49 s m-3), moderate (50-99 s m-3), high (100-149 s m-3) and very high (150 s m-3 and above). Despite some deviation in the results obtained by the artificial neural networks, the authors achieved an accurate forecasting model (the correlation between observed and predicted values varied from r_s = 0.57 to r_s = 0.68).
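The concentration thresholds quoted above translate directly into a classification rule. The function name below is our own, but the category boundaries are the ones given in the abstract:

```python
def spore_risk_level(concentration):
    """Map a daily mean Ganoderma spore concentration (spores per m^3)
    to the risk categories defined in the study:
    0-49 low, 50-99 moderate, 100-149 high, 150+ very high."""
    if concentration < 50:
        return "low"
    if concentration < 100:
        return "moderate"
    if concentration < 150:
        return "high"
    return "very high"
```

Such a rule would let a forecast model's numeric prediction be reported to allergy sufferers as a simple risk category.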
Using a Statistical Approach to Anticipate Leaf Wetness Duration Under Climate Change in France
NASA Astrophysics Data System (ADS)
Huard, F.; Imig, A. F.; Perrin, P.
2014-12-01
Leaf wetness plays a major role in the development of fungal plant diseases. Leaf wetness duration (LWD) above a threshold value is determinant for infection and can be seen as a good indicator of the impact of climate on infection occurrence and risk. As LWD is not widely measured, several methods, based on physical and empirical approaches, have been developed to estimate it from weather data. Many statistical LWD models exist, but the lack of a measurement standard requires reassessment. A new empirical LWD model, called MEDHI (Modèle d'Estimation de la Durée d'Humectation à l'Inra), was developed for the French configuration of wetness sensors (angle: 90°, height: 50 cm). This deployment differs from what is usually recommended by manufacturers or authors in other countries (angle from 10 to 60°, height from 10 to 150 cm…). MEDHI is a decision support system based on hourly climatic conditions at time steps n and n-1, taking into account relative humidity, rainfall and previously simulated LWD. Air temperature, relative humidity, wind speed, rain and LWD data from several sensors in 2 configurations were measured during 6 months in Toulouse and Avignon (South West and South East France) to calibrate MEDHI. A comparison of the empirical models NHRH (RH threshold), DPD (dew point depression), CART (classification and regression tree analysis depending on RH, wind speed and dew point depression) and MEDHI, using meteorological and LWD measurements obtained during 5 months in Toulouse, showed that this new model was better adapted to French conditions. In the context of climate change, MEDHI was used to map the evolution of leaf wetness duration in France from 1950 to 2100 with the French regional climate model ALADIN under different Representative Concentration Pathways (RCPs), using a quantile-mapping (QM) statistical downscaling method. Results give information on the spatial distribution of infection risks.
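The actual MEDHI decision rules are not reproduced in the abstract, so the sketch below is a hypothetical hourly wetness rule in the same spirit: wetness at hour n is decided from rainfall, relative humidity, and the simulated wetness state at hour n-1, with a lower persistence threshold once the leaf is already wet. All thresholds and the rule structure are assumptions for illustration, not MEDHI itself:

```python
def leaf_wet(rh_now, rain_now, was_wet, rh_threshold=90.0):
    """Hypothetical hourly wetness rule in the spirit of MEDHI: the leaf
    is flagged wet if it rains, if relative humidity exceeds a threshold,
    or if humidity stays moderately high while the leaf was already wet
    (persistence from time step n-1)."""
    if rain_now > 0.0:
        return True
    if rh_now >= rh_threshold:
        return True
    if was_wet and rh_now >= rh_threshold - 5.0:
        return True
    return False

def lwd_hours(hourly):
    """Sum leaf wetness duration (in hours) over a sequence of
    (relative humidity %, rainfall mm) hourly records."""
    wet, total = False, 0
    for rh, rain in hourly:
        wet = leaf_wet(rh, rain, wet)
        total += wet
    return total
```

Accumulated LWD from such a rule could then be compared against the infection threshold for a given pathogen.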
NASA Astrophysics Data System (ADS)
Driouech, F.; Déqué, M.; Sánchez-Gómez, E.
2009-09-01
range covered by these RCMs for all the climate indices considered. In order to validate, in the case of Moroccan winter precipitation, a statistical downscaling approach that uses large scale fields to construct local scenarios of future climate change, the link between north Atlantic weather regimes and Moroccan local precipitation has been investigated, in terms of precipitation average, and the frequencies of occurrence of wet and intense precipitation days. The robustness of the statistical approach considered is evaluated using the outputs of ARPEGE-Climate and also those of the 10 ENSEMBLES-RCMs.
NASA Astrophysics Data System (ADS)
Kirchoff, Michelle
2013-10-01
For several decades there has been a debate about whether heavily cratered surfaces in our solar system are in “saturation equilibrium” [e.g., 1-3; a state where crater density reaches a (quasi-)equilibrium]. Saturation equilibrium is critical to understand because otherwise the crater size-frequency distribution (SFD) shape and/or impact flux may be misinterpreted. This work explores whether spatial statistics (quantitative measures of objects' distributions in space) could be a complementary approach to crater SFDs [e.g., 1-3] in determining whether a heavily cratered surface is saturated. The use of spatial statistics was introduced by Lissauer et al. [4] and Squyres et al. [5]. They proposed that a crater distribution would become more spatially uniform (more evenly spaced than expected for a random distribution) as it reached saturation equilibrium. Squyres et al. [5] combined a numerical simulation of a steeply-sloped SFD (cumulative slope = -2.7) with observations of heavily cratered terrains on Rhea and Callisto to empirically show this hypothesis could be valid for these cases. However, it is still uncertain whether populations with other slopes will also become more spatially uniform as they reach saturation equilibrium. My work continues the approach of Squyres et al. [5] by combining a new numerical simulation with new observations of cratered terrains throughout the solar system. The simulation is developed in IDL and varies the input SFD slope, the importance of very small craters in erasing craters (“sandblasting”), and the effectiveness of ejecta in erasing craters. I report preliminary results of this modeling and observations. MRK acknowledges the support of NASA PGG grant NNX12AO51G. References: [1] Woronow, A. (1977) JGR 82, 2447-56. [2] Hartmann, W. K. & Gaskell, R. W. (1997) MAPS 32, 109-21. [3] Marchi S., et al. (2012) EPSL 325-6, 27-38. [4] Lissauer, J. J., et al. (1988) JGR 93, 13776-804. [5] Squyres, S. W., et al. (1997) Icarus 125, 67-82.
Waves and Wine: Advanced approaches for characterizing and exploiting micro-terroir
NASA Astrophysics Data System (ADS)
Hubbard, S. S.; Grote, K. R.; Freese, P.; Peterson, J. E.; Rubin, Y.
2012-12-01
uses a combination of advanced characterization techniques (including airborne imagery, microclimate, and surface geophysical data) with statistical approaches to identify vineyard zones that have fairly uniform soil, vegetation, and micrometeorological parameters. The obtained information is used in simple water-balance models that can be used to design block-specific irrigation parameters. This effort has illustrated how straightforward numerical techniques and commercially available characterization approaches can be used to optimize block layout and to guide precision irrigation strategies, leading to optimized and uniform vegetation and winegrape characteristics within vineyard blocks. Recognition and incorporation of information on small-scale variability into vineyard development and management practices could lead to winegrapes that better reflect the microterroir of the area. Advanced approaches, such as those described here, are expected to become increasingly important as available land and water resources continue to decrease, as spatially extensive datasets become less costly to collect and interpret, and as the public demand for high-quality wine produced in an environmentally friendly manner continues to increase.
Predicting Energy Performance of a Net-Zero Energy Building: A Statistical Approach.
Kneifel, Joshua; Webb, David
2016-09-01
Performance-based building requirements have become more prevalent because they give freedom in building design while still maintaining or exceeding the energy performance required by prescriptive requirements. In order to determine whether building designs reach target energy efficiency improvements, it is necessary to estimate the energy performance of a building using predictive models and different weather conditions. Physics-based whole-building energy simulation modeling is the most common approach. However, these physics-based models include underlying assumptions and require significant amounts of information in order to specify the input parameter values. An alternative approach to test the performance of a building is to develop a statistically derived predictive regression model, using post-occupancy data, that can accurately predict energy consumption and production based on a few common weather-based factors, thus requiring less information than simulation models. A regression model based on measured data should be able to predict the energy performance of a building for a given day as long as the weather conditions are similar to those during the data collection time frame. This article uses data from the National Institute of Standards and Technology (NIST) Net-Zero Energy Residential Test Facility (NZERTF) to develop and validate a regression model that predicts the energy performance of the NZERTF using two weather variables aggregated to the daily level, applies the model to estimate the energy performance of hypothetical NZERTFs located in different cities in the Mixed-Humid climate zone, and compares these estimates to the results from existing EnergyPlus whole-building energy simulations. The regression model exhibits agreement with EnergyPlus predictive trends in energy production and net consumption, but differs greatly in energy consumption. The model can be used as a framework for alternative and more complex models based on the
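A daily-level regression of the kind described, with two weather predictors, reduces to ordinary least squares on three coefficients. A stdlib sketch with synthetic data follows; the predictor names and coefficients are invented for illustration and are not the NZERTF model:

```python
def ols(X, y):
    """Ordinary least squares with intercept: build the normal
    equations (A^T A) b = A^T y and solve them by Gaussian
    elimination with partial pivoting."""
    A = [[1.0, *row] for row in X]          # prepend intercept column
    k = len(A[0])
    M = [[sum(a[i] * a[j] for a in A) for j in range(k)] for i in range(k)]
    v = [sum(a[i] * yi for a, yi in zip(A, y)) for i in range(k)]
    for c in range(k):                       # forward elimination
        p = max(range(c, k), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        v[c], v[p] = v[p], v[c]
        for r in range(c + 1, k):
            f = M[r][c] / M[c][c]
            for j in range(c, k):
                M[r][j] -= f * M[c][j]
            v[r] -= f * v[c]
    b = [0.0] * k                            # back substitution
    for r in range(k - 1, -1, -1):
        b[r] = (v[r] - sum(M[r][j] * b[j] for j in range(r + 1, k))) / M[r][r]
    return b

# hypothetical daily (mean temperature, solar irradiance) predictors
days = [(0, 0), (1, 0), (0, 1), (2, 3), (5, 1), (3, 4)]
# synthetic daily net energy generated from invented coefficients
net = [2 + 0.5 * t - 1.5 * s for t, s in days]
b = ols(days, net)                           # recovers [2.0, 0.5, -1.5]
```

Because the synthetic response is an exact linear function of the two predictors, the fit recovers the generating coefficients; with measured data the residuals would quantify how much daily weather alone explains.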
Advances in Landslide Hazard Forecasting: Evaluation of Global and Regional Modeling Approach
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia B.; Adler, Robert; Hone, Yang; Kumar, Sujay; Peters-Lidard, Christa; Lerner-Lam, Arthur
2010-01-01
A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that exhibit a high potential for landslide activity by combining a calculation of landslide susceptibility with satellite-derived rainfall estimates. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale landslide forecasting efforts, it requires several modifications before it can be fully realized as an operational tool. The evaluation finds that landslide forecasting may be more feasible at a regional scale. This study draws upon a prior work's recommendations to develop a new approach for considering landslide susceptibility and forecasting at the regional scale. This case study uses a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America: Guatemala, Honduras, El Salvador and Nicaragua. A regional susceptibility map is calculated from satellite and surface datasets using a statistical methodology. The susceptibility map is tested with a regional rainfall intensity-duration triggering relationship, and the results are compared to the global algorithm framework for the Hurricane Mitch event. The statistical results suggest that this regional investigation provides one plausible way to approach some of the data and resolution issues identified in the global assessment, providing more realistic landslide forecasts for this case study. Evaluation of landslide hazards for this extreme event helps to identify several potential improvements of the algorithm framework, but also highlights several remaining challenges for algorithm assessment, transferability and performance accuracy. Evaluation challenges include representation errors from comparing susceptibility maps of different spatial resolutions, biases in event-based landslide inventory data, and limited non-landslide event data for more comprehensive evaluation. Additional factors that may improve
Four-level atom dynamics and emission statistics using a quantum jump approach
NASA Astrophysics Data System (ADS)
Sandhya, S. N.
2007-01-01
Four-level atom dynamics is studied in a ladder system in the nine-parameter space of driving field strengths, detunings and decay constants, {Ω1,Ω2,Ω3,Δ1,Δ2,Δ3,Γ2,Γ3,Γ4}. One can selectively excite or induce two-level behavior between particular levels of one's choice by appropriately tuning the driving field strengths at three-photon resonance. The dynamics may be classified into two main regions of interest: (i) small Ω2 coupling the ∣2⟩-∣3⟩ transition, and (ii) large Ω2. In case (i) one sees two-level behavior involving adjacent levels and, in a particular region of the parameter space, intermittent shelving of the electron in one of the two subsystems. In case (ii) the levels involved are the ground state and the uppermost level. Emission statistics is studied using the delay function approach in both cases. In case (i), the behavior of the second-order correlation function g2(t) is similar to that of two-level emission for low Ω1 coupling the ∣1⟩-∣2⟩ transition, and the correlation increases with Ω1 for smaller time delays. In case (ii), when Ω3 coupling the ∣3⟩-∣4⟩ transition is additionally kept low, g2(t) shows a super-Poissonian distribution, which may be attributed to three-photon processes.
Seetha, D; Velraj, G
2015-01-01
Characterization of ancient materials recovers evidence of ancient peoples' lifestyles. In this study, archaeological pottery shards recently excavated at Kodumanal, Erode District, Tamil Nadu, South India, were investigated. The experimental results reveal the elemental and mineral composition of the pottery shards. FT-IR indicates the mineralogy and shows that the firing temperature of the samples was below 800 °C in an oxidizing/reducing atmosphere; XRD was used as a complementary technique for the mineralogy. A thorough scientific study combining SEM-EDS with a statistical approach to establish the provenance of the selected pot shards had not previously been performed. EDS and XRF results revealed that the investigated samples contain the elements O, Si, Al, Fe, Mn, Mg, Ca, Ti, K and Na in different proportions. To establish the provenance (same or different origin) of the pottery samples, the Al/Si concentration ratio as well as hierarchical cluster analysis (HCA) was used, and the results are correlated.
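The Al/Si-ratio grouping combined with hierarchical clustering can be sketched minimally: for a single variable, single-linkage agglomerative clustering reduces to splitting sorted values at large gaps. The ratios below are invented, and a real HCA would use several elemental variables and a full linkage dendrogram:

```python
def single_linkage(values, cut):
    """Single-linkage agglomerative clustering of 1-D values:
    for sorted 1-D data, merging while the nearest-neighbour gap
    is <= `cut` is equivalent to splitting at gaps larger than `cut`."""
    order = sorted(values)
    clusters, current = [], [order[0]]
    for v in order[1:]:
        if v - current[-1] <= cut:
            current.append(v)      # close enough: same cluster
        else:
            clusters.append(current)
            current = [v]          # large gap: start a new cluster
    clusters.append(current)
    return clusters

# hypothetical Al/Si concentration ratios for eight shards
ratios = [0.21, 0.22, 0.23, 0.35, 0.36, 0.37, 0.22, 0.34]
groups = single_linkage(ratios, cut=0.05)   # two provenance groups
```

Here the eight shards separate into two tight clusters of four, which would be read as two candidate provenance groups.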
Optimization of Antioxidant Potential of Penicillium granulatum Bainier by Statistical Approaches
Chandra, Priyanka; Arora, Daljit Singh
2012-01-01
A three-step optimization strategy, including the classical one-factor-at-a-time method and different statistical approaches (Plackett-Burman design and response surface methodology), was applied to optimize the antioxidant potential of Penicillium granulatum. Antioxidant activity was assayed by different procedures and compared with total phenolic content. First, different carbon and nitrogen sources were screened by classical methods, which revealed sucrose and NaNO3 to be the most suitable. In the second step, a Plackett-Burman design also identified sucrose and NaNO3 as the most significant. In the third step, response surface analysis showed 4.5% sucrose, 0.1% NaNO3, and an incubation temperature of 25°C to be optimal. Under these conditions, the antioxidant potential assayed through different procedures was 78.2%, 70.1%, and 78.9% scavenging effect for the DPPH radical, ferrous ion, and nitric oxide ion, respectively. The reducing power showed an absorbance of 1.6, with 68.5% activity in the FRAP assay. PMID:23724323
Gunny, Ahmad Anas Nagoor; Arbain, Dachyar; Sithamparam, Logachanthirika
2013-09-15
The production cost of an enzyme is largely determined by the type of strain and the raw material used to propagate it. Hence, selection of the strain and raw materials is crucial in enzyme production. For glucose oxidase (GOx), previous studies showed that Aspergillus terreus UniMAP AA-1 offers a better alternative to existing sources. Thus, a lower production cost can be anticipated by growing the strain in a cheaper complex medium such as molasses. In this work, sugar cane molasses supplemented with urea and carbonate salt, and a locally isolated strain, Aspergillus terreus UniMAP AA-1, were used to produce crude GOx enzyme on a small scale. A statistical optimization approach, Response Surface Methodology (RSM), was used to optimize the media components for highest GOx activity. The highest GOx activity was achieved using a combination of molasses, carbonate salt and urea at concentrations of 32.51, 4.58 and 0.93% (w/v), respectively. This study provides alternative optimized media conditions for GOx production using locally available raw materials.
Ciaccio, Mark F.; Finkle, Justin D.; Xue, Albert Y.; Bagheri, Neda
2014-01-01
An organism's ability to maintain a desired physiological response relies extensively on how cellular and molecular signaling networks interpret and react to environmental cues. The capacity to quantitatively predict how networks respond to a changing environment by modifying signaling regulation and phenotypic responses will help inform and predict the impact of a changing global environment on organisms and ecosystems. Many computational strategies have been developed to resolve cue–signal–response networks. However, selecting a strategy that answers a specific biological question requires knowledge both of the type of data being collected and of the strengths and weaknesses of different computational regimes. We broadly explore several computational approaches, and we evaluate their accuracy in predicting a given response. Specifically, we describe how statistical algorithms can be used in the context of integrative and comparative biology to elucidate the genomic, proteomic, and/or cellular networks responsible for robust physiological response. As a case study, we apply this strategy to a dataset of quantitative levels of protein abundance from the mussel, Mytilus galloprovincialis, to uncover the temperature-dependent signaling network. PMID:24813462
Stellacci, A M; Castrignanò, A; Troccoli, A; Basso, B; Buttafuoco, G
2016-03-01
Hyperspectral data can provide predictions of physical and chemical vegetation properties, but data handling, analysis, and interpretation still limit their use. In this study, different methods for selecting variables were compared for the analysis of on-the-ground hyperspectral signatures of wheat grown under a wide range of nitrogen supplies. Spectral signatures were recorded at the end of the stem elongation, booting, and heading stages in 100 georeferenced locations, using a 512-channel portable spectroradiometer operating in the 325-1075-nm range. The following procedures were compared: (i) a heuristic combined approach including the lambda-lambda R(2) (LL R(2)) model, principal component analysis (PCA), and stepwise discriminant analysis (SDA); (ii) variable importance for projection (VIP) statistics derived from partial least squares (PLS) regression (PLS-VIP); and (iii) multiple linear regression (MLR) analysis through maximum R-square improvement (MAXR) and stepwise algorithms. The discriminating capability of the selected wavelengths was evaluated by canonical discriminant analysis. Leaf-nitrogen concentration was quantified on samples collected at the same locations and dates and used as the response variable in the regression methods. The different methods resulted in differences in the number and position of the selected wavebands. Bands extracted through the regression methods were mostly related to the response variable, as shown by the importance of the visible region for PLS and stepwise. Band selection techniques can be extremely useful not only to improve the power of predictive models but also for data interpretation or sensor design.
Numerical study of chiral plasma instability within the classical statistical field theory approach
NASA Astrophysics Data System (ADS)
Buividovich, P. V.; Ulybyshev, M. V.
2016-07-01
We report on a numerical study of the real-time dynamics of electromagnetically interacting chirally imbalanced lattice Dirac fermions within the classical statistical field theory approach. Namely, we perform exact simulations of the real-time quantum evolution of fermionic fields coupled to classical electromagnetic fields, which are in turn coupled to the vacuum expectation value of the fermionic electric current. We use the Wilson-Dirac Hamiltonian for fermions and a noncompact action for the gauge field. In general, we observe that the backreaction of fermions on the electromagnetic field prevents the system from acquiring chirality imbalance. In the case of chirality pumping in parallel electric and magnetic fields, the electric field is screened by the produced on-shell fermions and the accumulation of chirality is hence stopped. In the case of evolution with initially present chirality imbalance, axial charge tends to transform to helicity of the electromagnetic field. By performing simulations on large lattices we show that in most cases this decay process is accompanied by the inverse cascade phenomenon, which transfers energy from short-wavelength to long-wavelength electromagnetic fields. In some simulations, however, we observe a very clear signature of inverse cascade for the helical magnetic fields that is not accompanied by the axial charge decay. This suggests that the relation between the inverse cascade and axial charge decay is not as straightforward as predicted by the simplest form of anomalous Maxwell equations.
Jung, Sungkyu; Qiao, Xingye
2014-09-01
Set classification problems arise when classification tasks are based on sets of observations as opposed to individual observations. In set classification, a classification rule is trained with N sets of observations, where each set is labeled with class information, and the prediction of a class label is performed also with a set of observations. Data sets for set classification appear, for example, in diagnostics of disease based on multiple cell nucleus images from a single tissue. Relevant statistical models for set classification are introduced, which motivate a set classification framework based on context-free feature extraction. By understanding a set of observations as an empirical distribution, we employ a data-driven method to choose those features which contain information on location and major variation. In particular, the method of principal component analysis is used to extract the features of major variation. Multidimensional scaling is used to represent features as vector-valued points on which conventional classifiers can be applied. The proposed set classification approaches achieve better classification results than competing methods in a number of simulated data examples. The benefits of our method are demonstrated in an analysis of histopathology images of cell nuclei related to liver cancer.
Heggeseth, Brianna; Harley, Kim; Warner, Marcella; Jewell, Nicholas; Eskenazi, Brenda
2015-01-01
It has been hypothesized that environmental exposures at key development periods such as in utero play a role in childhood growth and obesity. To investigate whether in utero exposure to endocrine-disrupting chemicals, dichlorodiphenyltrichloroethane (DDT) and its metabolite, dichlorodiphenyldichloroethane (DDE), is associated with childhood physical growth, we took a novel statistical approach to analyze data from the CHAMACOS cohort study. To model heterogeneity in the growth patterns, we used a finite mixture model in combination with a data transformation to characterize body mass index (BMI) with four groups and estimated the association between exposure and group membership. In boys, higher maternal concentrations of DDT and DDE during pregnancy are associated with a BMI growth pattern that is stable until about age five followed by increased growth through age nine. In contrast, higher maternal DDT exposure during pregnancy is associated with a flat, relatively stable growth pattern in girls. This study suggests that in utero exposure to DDT and DDE may be associated with childhood BMI growth patterns, not just BMI level, and both the magnitude of exposure and sex may impact the relationship.
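The finite-mixture idea above can be sketched with a minimal EM fit for a two-component 1-D Gaussian mixture (stdlib only). This is an illustration of the technique, not the study's model: the CHAMACOS analysis used four BMI-trajectory groups with a data transformation and covariates, and the component means, SDs and sample sizes below are invented.

```python
import math
import random

def em_two_gaussians(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture: alternate
    posterior responsibilities (E-step) and weighted parameter
    updates (M-step)."""
    data = sorted(data)
    half = len(data) // 2
    # crude initialisation: split the sorted sample in half
    mu = [sum(data[:half]) / half, sum(data[half:]) / (len(data) - half)]
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of component 0 for each point
        resp = []
        for x in data:
            p = [w[k] / (sd[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2)
                 for k in (0, 1)]
            resp.append(p[0] / (p[0] + p[1]))
        # M-step: weighted means, SDs and mixing proportions
        for k, r in ((0, resp), (1, [1 - t for t in resp])):
            n_k = sum(r)
            mu[k] = sum(ri * x for ri, x in zip(r, data)) / n_k
            sd[k] = max(math.sqrt(sum(ri * (x - mu[k]) ** 2
                                      for ri, x in zip(r, data)) / n_k), 1e-6)
            w[k] = n_k / len(data)
    return mu, sd, w

random.seed(1)
sample = ([random.gauss(15.5, 1.0) for _ in range(200)]     # stable group
          + [random.gauss(22.0, 1.5) for _ in range(200)])  # rising group
mu, sd, w = em_two_gaussians(sample)
```

With well-separated components the EM iterations recover the two group means and roughly equal mixing weights; group membership probabilities (the responsibilities) are what would then be regressed on exposure.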
NASA Astrophysics Data System (ADS)
Melchiorre, Massimiliano; Vergés, Jaume; Fernàndez, Manel; Coltorti, Massimo; Torne, Montserrat; Casciello, Emilio
2017-04-01
The geological evolution of the westernmost Mediterranean region is characterised by widespread volcanic activity, with subduction (orogenic) or intraplate (anorogenic) geochemical imprints. Major, trace elements and isotopic ratios of 283 orogenic and 310 anorogenic volcanic samples from the western and central Mediterranean areas were merged in a single database that was processed using a statistical approach. Factor analysis, performed using the Principal Component Analysis (PCA) method, reduced the original 36 geochemical parameters that were expressed as oxides, elements or isotopic ratios to seven factors that account for 84% of the variance. Combining these factors in binary diagrams clearly separates the anorogenic and orogenic fields. Anorogenic samples usually fall into a narrow compositional range, while orogenic rocks are characterised by greater variability and by alignment along different trends. These different trends are a result of large heterogeneities of the lithospheric mantle beneath the Mediterranean area because of extensive recycling of geochemically different lithologies, at least since Palaeozoic times. The results support the requirement for different mantle reservoirs in the origin of the Mediterranean volcanism. We find that the double subduction polarity model, recently proposed for the westernmost Mediterranean area, is compatible with the volcanic petrology of the last 30 My.
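Factor extraction by PCA, as used above, amounts to finding the leading eigenvectors of the covariance matrix of the geochemical variables. A minimal stdlib sketch using power iteration for the first component follows; the two-variable data are invented, not drawn from the 593-sample database:

```python
def first_pc(data, iters=100):
    """First principal component (leading eigenvector of the sample
    covariance matrix) via power iteration; stdlib only."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[a] * r[b] for r in centered) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        # repeatedly apply the covariance matrix and renormalise
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# invented two-variable example: the second variable is a linear
# function of the first, so the first PC lies along (1, 2)/sqrt(5)
samples = [[float(t), 2.0 * t + 1.0] for t in range(10)]
pc1 = first_pc(samples)
```

Real factor analysis would additionally rotate and retain several components; the sketch only shows where the "factors" come from.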
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations.
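In standard maximum-entropy form (symbols follow the abstract: error function E over model parameters m, expectation constraint, sensitivity factor β fixed by that constraint), the maximization and its canonical solution read:

```latex
\begin{aligned}
&\text{maximize } S[p] \;=\; -\int p(\mathbf{m})\,\ln p(\mathbf{m})\,d\mathbf{m}
\quad\text{subject to}\quad
\int p(\mathbf{m})\,E(\mathbf{m})\,d\mathbf{m} \;=\; \bar{E},\\[4pt]
&\Rightarrow\quad
p(\mathbf{m}\mid d) \;=\; \frac{e^{-\beta E(\mathbf{m})}}
{\displaystyle\int e^{-\beta E(\mathbf{m}')}\,d\mathbf{m}'},
\qquad
p(m_i \mid d) \;=\; \int p(\mathbf{m}\mid d)\,\prod_{j\neq i} dm_j ,
\end{aligned}
```

where the last expression is the marginal distribution for a single parameter, obtained by integrating over the others as described in the abstract.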
Imai, Atsuko; Kohda, Masakazu; Nakaya, Akihiro; Sakata, Yasushi; Murayama, Kei; Ohtake, Akira; Lathrop, Mark; Okazaki, Yasushi; Ott, Jurg
2016-11-01
In the search for sequence variants underlying disease, commonly applied filtering steps usually result in a number of candidate variants that cannot further be narrowed down. In autosomal recessive families, disease usually occurs only in one generation so that genetic linkage analysis is unlikely to help. Because homozygous recessive mutations tend to be inherited together with flanking homozygous variants, we developed a statistical method to detect pathogenic variants in autosomal recessive families: We look for differences in patterns of homozygosity around candidate variants between patients and control individuals and expect that such differences are greater for pathogenic variants than random candidate variants. In six autosomal recessive mitochondrial disease families, in which pathogenic homozygous variants have already been identified, our approach succeeded in prioritizing pathogenic mutations. Our method is applicable to single patients from recessive families with at least a few dozen control individuals from the same population; it is easy to use and is highly effective for detecting causative mutations in autosomal recessive families.
Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach
NASA Astrophysics Data System (ADS)
Shrivastava, Manish; Zhao, Chun; Easter, Richard C.; Qian, Yun; Zelenyuk, Alla; Fast, Jerome D.; Liu, Ying; Zhang, Qi; Guenther, Alex
2016-06-01
We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to seven selected model parameters using a modified volatility basis-set (VBS) approach: four involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semivolatile and intermediate volatility organics (SIVOCs), and NOx; two involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the model parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether or not SOA that starts as semivolatile is rapidly transformed to nonvolatile SOA by particle-phase processes such as oligomerization and/or accretion, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into two subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to nonvolatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. However
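A first-order (main-effect) variance contribution of the kind reported above can be sketched with the conditional-binning estimator S_i = Var(E[y|x_i]) / Var(y). The toy response below, with one deliberately dominant parameter, is an invented stand-in for the SOA model, not the study's generalized-linear-model analysis:

```python
import random

def main_effect_share(xs, ys, bins=10):
    """Estimate the first-order sensitivity index
    S_i = Var(E[y | x_i]) / Var(y) by binning the parameter range
    and averaging the response within each bin."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / bins or 1.0
    groups = [[] for _ in range(bins)]
    for x, y in zip(xs, ys):
        groups[min(int((x - lo) / width), bins - 1)].append(y)
    mean_y = sum(ys) / len(ys)
    var_y = sum((y - mean_y) ** 2 for y in ys) / len(ys)
    # variance of the bin-conditional means, weighted by bin size
    cond = sum(len(g) * (sum(g) / len(g) - mean_y) ** 2
               for g in groups if g) / len(ys)
    return cond / var_y

random.seed(0)
x1 = [random.random() for _ in range(4000)]
x2 = [random.random() for _ in range(4000)]
y = [5 * a + b for a, b in zip(x1, x2)]   # x1 dominates the response
s1 = main_effect_share(x1, y)             # close to 25/26
s2 = main_effect_share(x2, y)             # close to 1/26
```

For this additive toy model the shares approach the analytic values 25/26 and 1/26, mirroring how the study attributes most SOA variance to a single dominant parameter.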
NASA Astrophysics Data System (ADS)
von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin
2016-04-01
Handling high-dimensional data sets, such as those that occur e.g. in turbulent flows or in certain types of multiscale behaviour in the geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modelling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References [1] I
2012-01-01
Background Because of the large volume of data and the intrinsic variation of data intensity observed in microarray experiments, different statistical methods have been used to systematically extract biological information and to quantify the associated uncertainty. The simplest method to identify differentially expressed genes is to evaluate the ratio of average intensities in two different conditions and consider all genes that differ by more than an arbitrary cut-off value to be differentially expressed. This filtering approach is not a statistical test, and there is no associated value that can indicate the level of confidence in the designation of genes as differentially expressed or not differentially expressed. At the same time, the fold change by itself provides valuable information, and it is important to find unambiguous ways of using this information in expression data treatment. Results A new method of finding differentially expressed genes, called the distributional fold change (DFC) test, is introduced. The method is based on an analysis of the intensity distribution of all microarray probe sets mapped to a three-dimensional feature space composed of average expression level, average difference of gene expression and total variance. The proposed method allows one to rank each feature based on the signal-to-noise ratio and to ascertain for each feature the confidence level and power for being differentially expressed. The performance of the new method was evaluated using the total and partial area under receiver operating curves, tested on 11 data sets from the Gene Omnibus Database with independently verified differentially expressed genes, and compared with the t-test and shrinkage t-test. Overall the DFC test performed the best: on average it had higher sensitivity and partial AUC, and its advantage was most prominent in the low range of differentially expressed features, typical for formalin-fixed paraffin-embedded sample sets. Conclusions The
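The contrast drawn above between a bare fold-change cut-off and a statistic that carries a noise measure can be illustrated directly; the probe intensities below are invented, and the signal-to-noise form is the simple Golub-style statistic, not the DFC test itself:

```python
import math
import statistics

def fold_change(a, b):
    """log2 ratio of mean intensities in two conditions."""
    return math.log2(statistics.mean(a) / statistics.mean(b))

def snr(a, b):
    """Signal-to-noise ratio: difference of means over the sum of
    the two sample standard deviations (Golub-style statistic)."""
    return ((statistics.mean(a) - statistics.mean(b))
            / (statistics.stdev(a) + statistics.stdev(b)))

control = [2.0, 2.1, 1.9]      # hypothetical probe intensities
clean_up = [8.0, 8.2, 7.9]     # consistent ~4-fold increase
noisy_up = [8.0, 1.0, 15.0]    # same mean, huge spread
```

Both genes clear a 2-fold cut-off (log2 ratio ~2), so a pure fold-change filter treats them identically, yet the noisy gene's signal-to-noise ratio is an order of magnitude lower; this is exactly the missing confidence information the abstract describes.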
Effects of curved approach paths and advanced displays on pilot scan patterns
NASA Technical Reports Server (NTRS)
Harris, R. L., Sr.; Mixon, R. W.
1981-01-01
The effect on pilot scan behavior of both advanced cockpit and advanced maneuvers was assessed. A series of straight-in and curved landing approaches were performed in the Terminal Configured Vehicle (TCV) simulator. Two comparisons of pilot scan behavior were made: (1) pilot scan behavior for straight-in approaches compared with scan behavior previously obtained in a conventionally equipped simulator, and (2) pilot scan behavior for straight-in approaches compared with scan behavior for curved approaches. The results indicate very similar scanning patterns during the straight-in approaches in the conventional and advanced cockpits. However, for the curved approaches pilot attention shifted to the electronic horizontal situation display (moving map), and a new eye scan path appeared between the map and the airspeed indicator. The very high dwell percentage and dwell times on the electronic displays in the TCV simulator during the final portions of the approaches suggest that the electronic attitude direction indicator was well designed for these landing approaches.
Advances in hemorrhagic stroke therapy: conventional and novel approaches.
Lapchak, Paul A; Araujo, Dalia M
2007-09-01
escalating morbidity and mortality rate associated with brain bleeding is slow, perseverance and applied translational drug development will eventually be productive. The urgent need for such therapy becomes more evident in light of concerns related to uncontrolled high blood pressure in the general population, increased use of blood thinners (e.g., warfarin) by the elderly, and the use of thrombolytics by acute ischemic stroke patients. The future of drug development for hemorrhage may require a multifaceted approach, such as combining drugs with diverse mechanisms of action. Because of the substantial benefit of factor VIIa in reducing hemorrhage volume, it should be considered as a prime drug candidate for inclusion in combination therapy as an off-label use if the FAST trial proves that the risk of thromboembolic events is not increased with drug administration. Other promising drugs that may be considered in combination include uncompetitive NMDA receptor antagonists (such as memantine), antioxidants, metalloprotease inhibitors, statins and erythropoietin analogs, all of which have been shown to reduce hemorrhage and behavioral deficits in one or more animal models.
NASA Astrophysics Data System (ADS)
Salman, Ahmad; Lapidot, Itshak; Pomerantz, Ami; Tsror, Leah; Shufan, Elad; Moreh, Raymond; Mordechai, Shaul; Huleihel, Mahmoud
2012-01-01
The early diagnosis of phytopathogens is of a great importance; it could save large economical losses due to crops damaged by fungal diseases, and prevent unnecessary soil fumigation or the use of fungicides and bactericides and thus prevent considerable environmental pollution. In this study, 18 isolates of three different fungi genera were investigated; six isolates of Colletotrichum coccodes, six isolates of Verticillium dahliae and six isolates of Fusarium oxysporum. Our main goal was to differentiate these fungi samples on the level of isolates, based on their infrared absorption spectra obtained using the Fourier transform infrared-attenuated total reflection (FTIR-ATR) sampling technique. Advanced statistical and mathematical methods: principal component analysis (PCA), linear discriminant analysis (LDA), and k-means were applied to the spectra after manipulation. Our results showed significant spectral differences between the various fungi genera examined. The use of k-means enabled classification between the genera with a 94.5% accuracy, whereas the use of PCA [3 principal components (PCs)] and LDA has achieved a 99.7% success rate. However, on the level of isolates, the best differentiation results were obtained using PCA (9 PCs) and LDA for the lower wavenumber region (800-1775 cm-1), with identification success rates of 87%, 85.5%, and 94.5% for Colletotrichum, Fusarium, and Verticillium strains, respectively.
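The PCA-then-LDA pipeline used for genus classification is a standard pattern and can be sketched compactly. The data below are synthetic stand-ins (the "spectra", band positions, offsets and group sizes are invented, not the paper's FTIR-ATR measurements); the point is the structure: dimensionality reduction to a few PCs followed by a linear discriminant, evaluated by cross-validation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Surrogate "spectra": 60 samples x 100 wavenumbers, three genera
# distinguished by offsets in genus-specific bands.
X = rng.normal(size=(60, 100))
y = np.repeat([0, 1, 2], 20)
for g in range(3):
    X[y == g, g * 10:(g * 10) + 10] += 2.0  # genus-specific band shift

# PCA to 3 components, then LDA, mirroring the PCA(3 PCs)+LDA step.
clf = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print("mean CV accuracy:", scores.mean().round(3))
```

Reducing to a handful of PCs before LDA avoids the singular within-class covariance that plain LDA would face with more wavenumbers than samples.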
A statistical mechanical approach for the computation of the climatic response to general forcings
NASA Astrophysics Data System (ADS)
Lucarini, V.; Sarno, S.
2011-01-01
The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space averaged over the invariant measure of the unperturbed state. We choose as test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value, as it is a spatially extended one-dimensional model and presents the basic ingredients, such as dissipation, advection and the presence of an external forcing, of the actual atmosphere. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions as well as the integral constraints can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters allow one to define such properties as an explicit function of the unperturbed forcing parameter alone for a
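The Lorenz 96 test bed itself is simple to reproduce. The sketch below integrates the standard model dx_i/dt = (x_{i+1} - x_{i-2})x_{i-1} - x_i + F with a fourth-order Runge-Kutta scheme and shows, empirically, how a climatological observable (the average energy mentioned in the abstract) responds to a change in the forcing F; the forcing values, step size and run length are illustrative choices, and the full Ruelle response formalism is of course not attempted here.

```python
import numpy as np

def lorenz96_rhs(x, forcing):
    """dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, cyclic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def mean_energy(forcing, n=40, dt=0.01, steps=20000, seed=0):
    """Time-averaged energy 0.5*sum(x_i^2) after discarding a transient."""
    rng = np.random.default_rng(seed)
    x = forcing + 0.01 * rng.standard_normal(n)  # perturbed fixed point
    energies = []
    for step in range(steps):
        # Classical fourth-order Runge-Kutta step.
        k1 = lorenz96_rhs(x, forcing)
        k2 = lorenz96_rhs(x + 0.5 * dt * k1, forcing)
        k3 = lorenz96_rhs(x + 0.5 * dt * k2, forcing)
        k4 = lorenz96_rhs(x + dt * k3, forcing)
        x = x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        if step > steps // 2:
            energies.append(0.5 * np.dot(x, x))
    return np.mean(energies)

e8, e10 = mean_energy(8.0), mean_energy(10.0)
print(e8, e10)  # average energy grows with the forcing
```

Comparing such time averages for perturbed and unperturbed forcings is the brute-force counterpart of the susceptibilities the paper derives analytically.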
2011-01-01
Background Data requirements by governments, donors and the international community to measure health and development achievements have increased in the last decade. Datasets produced in surveys conducted in several countries and years are often combined to analyse time trends and geographical patterns of demographic and health-related indicators. However, since not all datasets have the same structure, variable definitions and codes, they have to be harmonised prior to submitting them to the statistical analyses. Manually searching, renaming and recoding variables is an extremely tedious and error-prone task, especially when the number of datasets and variables is large. This article presents an automated approach to harmonise variable names across several datasets, which optimises the search for variables, minimises manual inputs and reduces the risk of error. Results Three consecutive algorithms are applied iteratively to search for each variable of interest for the analyses in all datasets. The first search (A) captures particular cases that could not be solved in an automated way in the search iterations; the second search (B) is run if search A produced no hits and identifies variables the labels of which contain certain key terms defined by the user. If this search produces no hits, a third one (C) is run to retrieve variables which have been identified in other surveys, as an illustration. For each variable of interest, the outputs of these engines can be (O1) a single best matching variable is found, (O2) more than one matching variable is found or (O3) no matching variables are found. Output O2 is solved by user judgement. Examples using four variables are presented showing that the searches have a 100% sensitivity and specificity after a second iteration. Conclusion Efficient and tested automated algorithms should be used to support the harmonisation process needed to analyse multiple datasets. This is especially relevant when the numbers of datasets
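The cascading search idea translates directly into code. The sketch below is a minimal interpretation, not the authors' implementation: it covers an exact-name search (standing in for search A) and the key-term label search (B), omits search C (reuse of matches from other surveys), and all dataset names, variable names and labels are invented.

```python
def harmonise(target, datasets, key_terms):
    """Cascading search for one variable of interest across datasets.

    datasets: mapping dataset_name -> {variable_name: label}.
    Returns dataset_name -> list of candidate variable names.
    """
    hits = {}
    for name, variables in datasets.items():
        # Search A (here: exact variable-name match).
        found = [v for v in variables if v == target]
        if not found:
            # Search B: all key terms appear in the variable label.
            found = [v for v, label in variables.items()
                     if all(t in label.lower() for t in key_terms)]
        hits[name] = found  # [] = outcome O3; >1 item = O2, needs review
    return hits

datasets = {
    "survey_1999": {"v101": "age of respondent", "v102": "sex"},
    "survey_2004": {"age_years": "age of respondent in years"},
}
print(harmonise("age_years", datasets, ["age", "respondent"]))
```

The three outcomes O1-O3 fall out of the length of each candidate list: exactly one hit can be renamed automatically, several hits go to user judgement, and none leaves the gap for the next iteration.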
Molecular and statistical approaches to the detection and correction of errors in genotype databases
Brzustowicz, L.M.; Xie, X.; Merette, C.; Townsend, L.; Gilliam, T.C.; Ott, J.
1993-11-01
Errors in genotyping data have been shown to have a significant effect on the estimation of recombination fractions in high-resolution genetic maps. Previous estimates of errors in existing databases have been limited to the analysis of relatively few markers and have suggested rates in the range 0.5%-1.5%. The present study capitalizes on the fact that within the Centre d'Etude du Polymorphisme Humain (CEPH) collection of reference families, 21 individuals are members of more than one family, with separate DNA samples provided by CEPH for each appearance of these individuals. By comparing the genotypes of these individuals in each of the families in which they occur, an estimated error rate of 1.4% was calculated for all loci in the version 4.0 CEPH database. Removing those individuals who were clearly identified by CEPH as appearing in more than one family resulted in a 3.0% error rate for the remaining samples, suggesting that some error checking of the identified repeated individuals may occur prior to data submission. An error rate of 3.0% for version 4.0 data was also obtained for four chromosome 5 markers that were retyped through the entire CEPH collection. The effects of these errors on a multipoint map were significant, with a total sex-averaged length of 36.09 cM with the errors, and 19.47 cM with the errors corrected. Several statistical approaches to detect and allow for errors during linkage analysis are presented. One method, which identified families containing possible errors on the basis of the impact on the maximum lod score, showed particular promise, especially when combined with the limited retyping of the identified families. The impact of the demonstrated error rate in an established genotype database on high-resolution mapping is significant, raising the question of the overall value of incorporating such existing data into new genetic maps. 15 refs., 8 tabs.
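The duplicated-individuals idea behind the CEPH error estimate can be sketched in a few lines. This is a toy illustration, not the study's procedure: the genotype strings are invented, and the halving rule (a mismatch implies at least one of the two calls is wrong, so the per-call error rate is roughly half the discordance rate when errors are rare and independent) is a simplifying assumption.

```python
def discordance_rate(records_a, records_b):
    """Estimate a per-genotype error rate from individuals typed twice.

    records_a/records_b: genotype calls for the same individuals from
    two independent samples (e.g. two CEPH family appearances).
    Returns (discordance rate, rough per-call error rate).
    """
    compared = mismatches = 0
    for a, b in zip(records_a, records_b):
        if a is None or b is None:
            continue  # skip missing genotypes
        compared += 1
        mismatches += (a != b)
    discordance = mismatches / compared
    return discordance, discordance / 2

first = ["1/2", "1/1", "2/2", "1/2", None, "2/2"]
second = ["1/2", "1/2", "2/2", "1/2", "1/1", "2/2"]
print(discordance_rate(first, second))
```

Here one mismatch in five comparable pairs gives a 20% discordance, i.e. roughly a 10% per-call error rate under the independence assumption; the study applies the same logic across all loci for the 21 duplicated individuals.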
NASA Astrophysics Data System (ADS)
Zhou, Ying; Cheng, Shuiyuan; Chen, Dongsheng; Lang, Jianlei; Zhao, Beibei; Wei, Wei
2014-09-01
This paper, which addresses the primary gaseous air pollutants (i.e., SO2, NOx, VOCs and CO), is the third paper in the series of papers published in Atmospheric Environment to develop new emission estimation models by the regression method. A group of regression models for various industrial and non-industrial sectors were proposed based on an emission investigation case study of the Handan region in northern China. The main data requirements of the regression models for industrial sectors were coal consumption, oil consumption, gaseous fuel consumption and annual industrial output. The data requirements for non-industrial sector emission estimations were the population, the number of resident households, the vehicle population, the area of construction sites, the forestland area, and the orchard area. The models were then applied to the Tangshan region in northern China. The results showed that the developed regression models had relatively satisfactory performance. The modeling errors at the regional level for SO2, NOx, VOCs and CO were -16.5%, -10.6%, -11.8% and -22.6%, respectively. The corresponding modeling errors at the county level were 39.9%, 33.9%, 46.3% and 46.9%, respectively. The models were also applied to other regions in northern China. The results revealed that the new models could produce emission inventories with generally lower error than found in previous emission inventory studies. The developed models had the advantage of using only publicly available statistical information to develop a high-accuracy, high-resolution emission inventory, without requiring the detailed data investigation demanded by the conventional “bottom-up” emission inventory development approach.
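The sector regression models described here amount to fitting emissions against a few publicly available statistics. The sketch below is a generic illustration with entirely invented numbers (the paper's coefficients and data are not reproduced): an ordinary least-squares fit of one sector's SO2 emissions on coal, oil and gaseous-fuel consumption plus annual output, with the per-county relative error as the evaluation metric the abstract quotes.

```python
import numpy as np

# Hypothetical training table for one industrial sector (all values
# invented): columns are coal, oil and gaseous-fuel consumption and
# annual industrial output; the target is investigated SO2 emissions.
X = np.array([
    [120.0, 30.0, 5.0, 200.0],
    [ 80.0, 10.0, 2.0, 150.0],
    [200.0, 55.0, 9.0, 340.0],
    [ 60.0, 20.0, 1.0,  90.0],
    [150.0, 40.0, 7.0, 260.0],
    [100.0, 25.0, 4.0, 180.0],
])
y = np.array([95.0, 55.0, 160.0, 45.0, 118.0, 78.0])

# Ordinary least-squares regression with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

predicted = A @ coef
rel_error = (predicted - y) / y * 100.0  # modelling error in percent
print(coef.round(3), rel_error.round(1))
```

In practice one such model would be fitted per sector and pollutant on the investigated region (Handan) and then evaluated on an independent region (Tangshan), which is where the quoted regional and county-level errors come from.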
Yang, Jinzhong; Woodward, Wendy A.; Reed, Valerie K.; Strom, Eric A.; Perkins, George H.; Tereffe, Welela; Buchholz, Thomas A.; Zhang, Lifei; Balter, Peter; Court, Laurence E.; Li, X. Allen; Dong, Lei
2014-05-01
Purpose: To develop a new approach for interobserver variability analysis. Methods and Materials: Eight radiation oncologists specializing in breast cancer radiation therapy delineated a patient's left breast “from scratch” and from a template that was generated using deformable image registration. Three of the radiation oncologists had previously received training in the Radiation Therapy Oncology Group (RTOG) consensus contouring atlas for breast cancer. The simultaneous truth and performance level estimation algorithm was applied to the 8 contours delineated “from scratch” to produce a group consensus contour. Individual Jaccard scores were fitted to a beta distribution model. We also applied this analysis to 2 additional patients, who were contoured by 9 breast radiation oncologists from 8 institutions. Results: The beta distribution model had a mean of 86.2%, standard deviation (SD) of ±5.9%, a skewness of −0.7, and excess kurtosis of 0.55, exemplifying broad interobserver variability. The 3 RTOG-trained physicians had higher agreement scores than average, indicating that their contours were close to the group consensus contour. One physician had high sensitivity but lower specificity than the others, which implies that this physician tended to contour a structure larger than those of the others. Two other physicians had low sensitivity but specificity similar to the others, which implies that they tended to contour a structure smaller than the others. With this information, they could adjust their contouring practice to be more consistent with others if desired. When contouring from the template, the beta distribution model had a mean of 92.3%, SD of ±3.4%, skewness of −0.79, and excess kurtosis of 0.83, which indicated a much better consistency among individual contours. Similar results were obtained for the analysis of 2 additional patients. Conclusions: The proposed statistical approach was able to measure interobserver variability quantitatively and to
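Two of the abstract's building blocks are easy to make concrete: the Jaccard score between an observer contour and the consensus, and fitting a beta distribution to the resulting agreement scores. The sketch below uses invented circular "contours" with per-observer jitter (the grid, radii and observer count are illustrative, and the consensus here is given rather than STAPLE-derived), and a method-of-moments beta fit rather than the paper's fitting procedure.

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard score (intersection over union) of two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union

def beta_moments(scores):
    """Method-of-moments fit of a beta distribution to agreement scores."""
    m, v = np.mean(scores), np.var(scores)
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common  # (alpha, beta)

# Toy "contours": a consensus disc plus per-observer radius jitter.
yy, xx = np.mgrid[:100, :100]
consensus = (xx - 50) ** 2 + (yy - 50) ** 2 < 30 ** 2
rng = np.random.default_rng(2)
scores = []
for _ in range(8):  # eight simulated observers
    r = 30 + rng.normal(0, 2)
    observer = (xx - 50) ** 2 + (yy - 50) ** 2 < r ** 2
    scores.append(jaccard(observer, consensus))

alpha, beta = beta_moments(scores)
print(np.round(scores, 3), round(alpha, 2), round(beta, 2))
```

The fitted beta parameters summarize the score distribution, and its mean, SD, skewness and kurtosis are then comparable across conditions, which is how the abstract contrasts "from scratch" against template-based contouring.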
NASA Astrophysics Data System (ADS)
Otero, Noelia; Butler, Tim; Sillmann, Jana
2015-04-01
Air pollution has become a serious problem in many industrialized and densely populated urban areas due to its negative effects on human health and its damage to agricultural crops and ecosystems. The concentration of air pollutants is the result of several factors, including emission sources, lifetime and spatial distribution of the pollutants, atmospheric properties and interactions, wind speed and direction, and topographic features. Episodes of air pollution are often associated with stationary or slowly migrating anticyclonic (high-pressure) systems that reduce advection, diffusion, and deposition of atmospheric pollutants. Certain weather conditions facilitate the concentration of pollutants, such as the incidence of light winds, which contributes to an increase in stagnation episodes affecting air quality. Therefore, the atmospheric circulation plays an important role in air quality conditions, which are affected by both synoptic- and local-scale processes. This study assesses the influence of the large-scale circulation along with meteorological conditions on tropospheric ozone in Europe. The frequency of weather types (WTs) is examined under a novel approach, which is based on an automated version of the Lamb Weather Types catalog (Jenkinson and Collison, 1977). Here, we present an implementation of such a classification point-by-point over the European domain. Moreover, the analysis uses a new grid-averaged climatology (1°x1°) of daily surface ozone concentrations from observations of individual sites that matches the resolution of global models (Schnell et al., 2014). Daily frequency of WTs and meteorological conditions are combined in a multiple regression approach for investigating the influence on ozone concentrations. Different subsets of predictors are examined within multiple linear regression models (MLRs) for each grid cell in order to identify the best regression model. Several statistical metrics are applied for estimating the robustness of the
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.
2013-04-27
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating (1) the number of samples required to achieve a specified confidence in characterization and clearance decisions, and (2) the confidence in making characterization and clearance decisions for a specified number of samples, for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are (1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and (2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
A Constructivist Approach in a Blended E-Learning Environment for Statistics
ERIC Educational Resources Information Center
Poelmans, Stephan; Wessa, Patrick
2015-01-01
In this study, we report on the students' evaluation of a self-constructed constructivist e-learning environment for statistics, the compendium platform (CP). The system was built to endorse deeper learning with the incorporation of statistical reproducibility and peer review practices. The deployment of the CP, with interactive workshops and…
A group-theoretic approach to constructions of non-relativistic spin-statistics
NASA Astrophysics Data System (ADS)
Harrison, J. M.; Robbins, J. M.
2000-11-01
We give a group-theoretical generalization of Berry and Robbins' treatment of identical particles with spin. The original construction, which leads to the correct spin-statistics relation, is seen to arise from particular irreducible representations—the totally symmetric representations—of the group SU(4). Here we calculate the exchange signs and corresponding statistics for all irreducible representations of SU(4).
Scheffer, Hester J. Melenhorst, Marleen C. A. M.; Vogel, Jantien A.; Tilborg, Aukje A. J. M. van; Nielsen, Karin Kazemier, Geert; Meijerink, Martijn R.
2015-06-15
Irreversible electroporation (IRE) is a novel image-guided ablation technique that is increasingly used to treat locally advanced pancreatic carcinoma (LAPC). We describe a 67-year-old male patient with a 5 cm stage III pancreatic tumor who was referred for IRE. Because the ventral approach for electrode placement was considered dangerous due to vicinity of the tumor to collateral vessels and duodenum, the dorsal approach was chosen. Under CT-guidance, six electrodes were advanced in the tumor, approaching paravertebrally alongside the aorta and inferior vena cava. Ablation was performed without complications. This case describes that when ventral electrode placement for pancreatic IRE is impaired, the dorsal approach could be considered alternatively.
NASA Astrophysics Data System (ADS)
Stein, Thorwald; Hogan, Robin; Hanley, Kirsty; Clark, Peter; Halliwell, Carol; Lean, Humphrey; Nicol, John; Plant, Robert
2016-04-01
National weather services increasingly use convection-permitting simulations to assist in their operational forecasts. The skill in forecasting rainfall from convection is much improved in such simulations compared to global models that rely on parameterisation schemes, but it is less obvious if and how increased model resolution or more advanced mixing and microphysics schemes improve the physical representation of convective storms. Here, we present a novel statistical approach using high-resolution radar data to evaluate the morphology, dynamics, and evolution of convective storms over southern England. In the DYMECS project (Dynamical and Microphysical Evolution of Convective Storms) we have used an innovative track-and-scan approach to target individual storms with the Chilbolton radar, which measures cloud and precipitation at scales less than 300 m out to 100 km. These radar observations provide three-dimensional storm volumes and estimates of updraft core strength and sizes at adequate scales to test high-resolution models. For two days of interest, we have run the Met Office forecast model at its operational configuration (1.5 km grid length) and at grid lengths of 500 m, 200 m, and 100 m. Radar reflectivity and Doppler winds were simulated from the model cloud and wind output for a like-with-like comparison against the radar observations. Our results show that although the 1.5 km simulation produces domain-averaged rainfall similar to the other simulations, the majority of rainfall is produced from storms that are a factor of 1.5-2 larger than observed as well as longer lived, while the updrafts of these storms are an order of magnitude greater than estimated from observations. We generally find improvements as model resolution increases, although our results depend strongly on the mixing-length parameter in the model turbulence scheme. Our findings highlight the promising role of high-resolution radar data and observational strategies targeting individual storms
2015-09-30
active sonar. Toward this goal, fundamental advances in the understanding of fish behavior, especially in aggregations, will be made under conditions...relevant to the echo statistics problem. OBJECTIVES To develop new models of behavior of fish aggregations, including the fission/fusion process...and to describe the echo statistics associated with the random fish behavior using existing formulations of echo statistics. APPROACH
NASA Astrophysics Data System (ADS)
Mazzitello, Karina I.; Candia, Julián
2012-12-01
In every country, public and private agencies allocate extensive funding to collect large-scale statistical data, which in turn are studied and analyzed in order to determine local, regional, national, and international policies regarding all aspects relevant to the welfare of society. One important aspect of that process is the visualization of statistical data with embedded geographical information, which most often relies on archaic methods such as maps colored according to graded scales. In this work, we apply nonstandard visualization techniques based on physical principles. We illustrate the method with recent statistics on homicide rates in Brazil and their correlation to other publicly available data. This physics-based approach provides a novel tool that can be used by interdisciplinary teams investigating statistics and model projections in a variety of fields such as economics and gross domestic product research, public health and epidemiology, sociodemographics, political science, business and marketing, and many others.
A System Approach to Advanced Practice Clinician Standardization and High Reliability.
Okuno-Jones, Susan; Siehoff, Alice; Law, Jennifer; Juarez, Patricia
Advanced practice clinicians (APCs) are an integral part of the health care team. Opportunities exist within Advocate Health Care to standardize and optimize APC practice across the system. To enhance the role and talents of APCs, we share an approach to role definition and practice optimization together with a structured approach to orientation and evaluation. Although still in the early stages of development, the definition and standardization of accountabilities within a framework that supports system change are transforming the practice of APCs.
ERIC Educational Resources Information Center
Remsburg, Alysa J.; Harris, Michelle A.; Batzli, Janet M.
2014-01-01
How can science instructors prepare students for the statistics needed in authentic inquiry labs? We designed and assessed four instructional modules with the goals of increasing student confidence, appreciation, and performance in both experimental design and data analysis. Using extensions from a just-in-time teaching approach, we introduced…
This paper presents a new method based on a statistical approach of estimating the uncertainty in simulating the transport and dispersion of atmospheric pollutants. The application of the method has been demonstrated by using observations and modeling results from a tracer experi...
Using statistical equivalence testing logic and mixed model theory, an approach has been developed that extends the work of Stork et al. (JABES, 2008) to define sufficient similarity in dose-response for chemical mixtures containing the same chemicals with different ratios ...
ERIC Educational Resources Information Center
Schoenborn, Charlotte A.
This report is based on data from the 1988 National Health Interview Survey on Alcohol (NHIS-Alcohol), part of the ongoing National Health Interview Survey conducted by the National Center for Health Statistics. Interviews for the NHIS are conducted in person by staff of the United States Bureau of the Census. Information is collected on each…
ERIC Educational Resources Information Center
Keefe, Francis J.; And Others
1992-01-01
Reviews and highlights recent research advances and future research directions concerned with behavioral and cognitive-behavioral approaches to chronic pain. Reviews assessment research on studies of social context of pain, relationship of chronic pain to depression, cognitive variables affecting pain, and comprehensive assessment measures.…
Evaluating New Approaches to Teaching of Sight-Reading Skills to Advanced Pianists
ERIC Educational Resources Information Center
Zhukov, Katie
2014-01-01
This paper evaluates three teaching approaches to improving sight-reading skills against a control in a large-scale study of advanced pianists. One hundred pianists in four equal groups participated in newly developed training programmes (accompanying, rhythm, musical style and control), with pre- and post-sight-reading tests analysed using…
Teaching Statistics Using Classic Psychology Research: An Activities-Based Approach
ERIC Educational Resources Information Center
Holmes, Karen Y.; Dodd, Brett A.
2012-01-01
In this article, we discuss a collection of active learning activities derived from classic psychology studies that illustrate the appropriate use of descriptive and inferential statistics. (Contains 2 tables.)
Zhang, Guozhu; Truong, Lisa; Tanguay, Robert L.
2017-01-01
Zebrafish have become an important alternative model for characterizing chemical bioactivity, partly due to the efficiency at which systematic, high-dimensional data can be generated. However, these new data present analytical challenges associated with scale and diversity. We developed a novel, robust statistical approach to characterize chemical-elicited effects in behavioral data from high-throughput screening (HTS) of all 1,060 Toxicity Forecaster (ToxCast™) chemicals across 5 concentrations at 120 hours post-fertilization (hpf). Taking advantage of the immense scale of data for a global view, we show that this new approach reduces bias introduced by extreme values yet allows for diverse response patterns that confound the application of traditional statistics. We have also shown that, as a summary measure of response for local tests of chemical-associated behavioral effects, it achieves a significant reduction in coefficient of variation compared to many traditional statistical modeling methods. This effective increase in signal-to-noise ratio augments statistical power and is observed across experimental periods (light/dark conditions) that display varied distributional response patterns. Finally, we integrated results with data from concomitant developmental endpoint measurements to show that appropriate statistical handling of HTS behavioral data can add important biological context that informs mechanistic hypotheses. PMID:28099482
Advanced subsonic transport approach noise: The relative contribution of airframe noise
NASA Technical Reports Server (NTRS)
Willshire, William L., Jr.; Garber, Donald P.
1992-01-01
With current engine technology, airframe noise is a contributing source for large commercial aircraft on approach, but not the major contributor. With the promise of much quieter jet engines with the planned new generation of high-bypass turbofan engines, airframe noise has become a topic of interest in the advanced subsonic transport research program. The objective of this paper is to assess the contribution of airframe noise relative to the other aircraft noise sources on approach. The assessment is made for a current-technology large commercial transport aircraft and for an envisioned advanced-technology aircraft. NASA's Aircraft Noise Prediction Program (ANOPP) is used to make total aircraft noise predictions for these two aircraft types. Predicted noise levels and areas of noise contours are used to determine the relative importance of the contributing approach noise sources. The actual set-up decks used to make the ANOPP runs for the two aircraft types are included in appendixes.
Ball, Robert; Horne, Dale; Izurieta, Hector; Sutherland, Andrea; Walderhaug, Mark; Hsu, Henry
2011-05-01
The public health community faces increasing demands for improving vaccine safety while simultaneously increasing the number of vaccines available to prevent infectious diseases. The passage of the US Food and Drug Administration (FDA) Amendment Act of 2007 formalized the concept of life-cycle management of the risks and benefits of vaccines, from early clinical development through many years of use in large numbers of people. Harnessing scientific and technologic advances is necessary to improve vaccine-safety evaluation. The Office of Biostatistics and Epidemiology in the Center for Biologics Evaluation and Research is working to improve the FDA's ability to monitor vaccine safety by improving statistical, epidemiologic, and risk-assessment methods, gaining access to new sources of data, and exploring the use of genomics data. In this article we describe the current approaches, new resources, and future directions that the FDA is taking to improve the evaluation of vaccine safety.
Xiang, G.; Ferson, S.; Ginzburg, L.; Longpré, L.; Mayorga, E.; Kosheleva, O.
2013-01-01
To preserve privacy, the original data points (with exact values) are replaced by boxes containing each (inaccessible) data point. This privacy-motivated uncertainty leads to uncertainty in the statistical characteristics computed based on this data. In a previous paper, we described how to minimize this uncertainty under the assumption that we use the same standard statistical estimates for the desired characteristics. In this paper, we show that we can further decrease the resulting uncertainty if we allow fuzzy-motivated weighted estimates, and we explain how to optimally select the corresponding weights. PMID:25187183
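The core of the interval-statistics problem described here is that once each data point is replaced by a box, every statistic becomes a range. The sketch below illustrates the idea on the sample mean with invented intervals; it also shows the paper's extra degree of freedom, analyst-chosen weights, in a deliberately simplified form (the optimal weight selection the paper develops is not reproduced).

```python
def mean_bounds(boxes):
    """Range of the sample mean when each data point is only known to
    lie inside a privacy-protecting interval [lo, hi]."""
    lows = [lo for lo, _ in boxes]
    highs = [hi for _, hi in boxes]
    return sum(lows) / len(boxes), sum(highs) / len(boxes)

def weighted_mean_bounds(boxes, weights):
    """Same, but with analyst-chosen (normalised) weights: the degree
    of freedom that weighted estimates exploit to reduce uncertainty."""
    total = sum(weights)
    lo = sum(w * l for w, (l, _) in zip(weights, boxes)) / total
    hi = sum(w * h for w, (_, h) in zip(weights, boxes)) / total
    return lo, hi

boxes = [(1.0, 2.0), (1.5, 2.5), (0.5, 3.5)]
print(mean_bounds(boxes))
# Down-weighting the widest (least informative) box narrows the range:
print(weighted_mean_bounds(boxes, [1.0, 1.0, 0.5]))
```

Down-weighting wide boxes trades a little statistical efficiency for a tighter uncertainty interval, which is the intuition behind the fuzzy-motivated weighted estimates the abstract mentions.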
NASA Astrophysics Data System (ADS)
Khalili, Malika; Van Nguyen, Van Thanh
2016-11-01
Global Climate Models (GCMs) have been extensively used in many climate change impact studies. However, the coarse resolution of GCM outputs is inadequate for assessing the potential effects of climate change at the local scale. Downscaling techniques have thus been proposed to resolve this problem by either dynamical or statistical approaches. Statistical downscaling (SD) methods are widely favored because they are simple to implement and use. However, many of them ignore the observed spatial dependence between different locations, which significantly affects the impact study results. An improved multi-site SD approach is thus presented in this paper to downscale daily precipitation at many sites concurrently. This approach combines multiple regression models for rainfall occurrences and amounts with the Singular Value Decomposition technique, which models the stochastic components of these regression models to preserve accurately the space-time statistical properties of daily precipitation. The method also adequately describes the intermittency of the precipitation process. The proposed approach has been assessed using 10 rain gauges located in the southwest region of Quebec and southeast region of Ontario in Canada, and climate predictors from the National Centers for Environmental Prediction/National Center for Atmospheric Research re-analysis data set. The results indicate the ability of the proposed approach to reproduce accurately multiple observed statistical properties of the precipitation occurrences and amounts, the at-site temporal persistence, the spatial dependence between sites, and the temporal variability and spatial intermittency of the precipitation processes.
NASA Astrophysics Data System (ADS)
Kruscha, Alexandra; Lindner, Benjamin
2016-08-01
We consider a homogeneous population of stochastic neurons that are driven by weak common noise (stimulus). To capture and analyze the joint firing events within the population, we introduce the partial synchronous output of the population. This is a time series defined by the events in which at least a fixed fraction γ ∈ [0,1] of the population fires simultaneously within a small time interval. For this partial synchronous output we develop two analytical approaches to the correlation statistics. In the Gaussian approach we represent the synchronous output as a nonlinear transformation of the summed population activity and approximate the latter by a Gaussian process. In the combinatorial approach the synchronous output is represented by products of box-filtered spike trains of the single neurons. In both approaches we use linear-response theory to derive approximations for statistical measures that hold true for weak common noise. In particular, we calculate the mean value and power spectrum of the synchronous output and the cross-spectrum between synchronous output and common noise. We apply our results to the leaky integrate-and-fire neuron model and compare them to numerical simulations. The combinatorial approach is shown to provide a more accurate description of the statistics for small populations, whereas the Gaussian approximation yields compact formulas that work well for a sufficiently large population size. In particular, in the Gaussian approximation all statistical measures reveal a symmetry in the synchrony threshold γ around the mean value of the population activity. Our results may contribute to a better understanding of the role of coincidence detection in neural signal processing.
A multimodal wave spectrum-based approach for statistical downscaling of local wave climate
Hegermiller, Christie; Antolinez, Jose A A; Rueda, Ana C; Camus, Paula; Perez, Jorge; Erikson, Li; Barnard, Patrick; Mendez, Fernando J
2017-01-01
Characterization of wave climate by bulk wave parameters is insufficient for many coastal studies, including those focused on assessing coastal hazards and long-term wave climate influences on coastal evolution. This issue is particularly relevant for studies using statistical downscaling of atmospheric fields to local wave conditions, which are often multimodal in large ocean basins (e.g. the Pacific). Swell may be generated in vastly different wave generation regions, yielding complex wave spectra that are inadequately represented by a single set of bulk wave parameters. Furthermore, the relationship between atmospheric systems and local wave conditions is complicated by variations in arrival time of wave groups from different parts of the basin. Here, we address these two challenges by improving upon the spatiotemporal definition of the atmospheric predictor used in statistical downscaling of local wave climate. The improved methodology separates the local wave spectrum into “wave families,” defined by spectral peaks and discrete generation regions, and relates atmospheric conditions in distant regions of the ocean basin to local wave conditions by incorporating travel times computed from effective energy flux across the ocean basin. When applied to locations with multimodal wave spectra, including Southern California and Trujillo, Peru, the new methodology improves the ability of the statistical model to project significant wave height, peak period, and direction for each wave family, retaining more information from the full wave spectrum. This work is the base of statistical downscaling by weather types, which has recently been applied to coastal flooding and morphodynamic applications.
Piloting a Blended Approach to Teaching Statistics in a College of Education: Lessons Learned
ERIC Educational Resources Information Center
Xu, Yonghong Jade; Meyer, Katrina A.; Morgan, Dianne
2008-01-01
This study investigated the performance of graduate students enrolled in introductory statistics courses. The course in Fall 2005 was delivered in a traditional face-to-face manner and the same course in Fall 2006 was blended by using an online commercial tutoring system (ALEKS) and making attendance of several face-to-face classes optional. There…
The broad topic of biomarker research has an often-overlooked component: the documentation and interpretation of the surrounding chemical environment and other meta-data, especially from visualization, analytical, and statistical perspectives (Pleil et al. 2014; Sobus et al. 2011...
Integrating Real-Life Data Analysis in Teaching Descriptive Statistics: A Constructivist Approach
ERIC Educational Resources Information Center
Libman, Zipora
2010-01-01
This article looks at a process of integrating real-life data investigation in a course on descriptive statistics. Referring to constructivist perspectives, this article suggests a look at the potential of inculcating alternative teaching methods that encourage students to take a more active role in their own learning and participate in the…
A unifying approach for food webs, phylogeny, social networks, and statistics
Chiu, Grace S.; Westveld, Anton H.
2011-01-01
A food web consists of nodes, each consisting of one or more species. The role of each node as predator or prey determines the trophic relations that weave the web. Much effort in trophic food web research is given to understand the connectivity structure, or the nature and degree of dependence among nodes. Social network analysis (SNA) techniques—quantitative methods commonly used in the social sciences to understand network relational structure—have been used for this purpose, although postanalysis effort or biological theory is still required to determine what natural factors contribute to the feeding behavior. Thus, a conventional SNA alone provides limited insight into trophic structure. Here we show that by using novel statistical modeling methodologies to express network links as the random response of within- and internode characteristics (predictors), we gain a much deeper understanding of food web structure and its contributing factors through a unified statistical SNA. We do so for eight empirical food webs: Phylogeny is shown to have nontrivial influence on trophic relations in many webs, and for each web trophic clustering based on feeding activity and on feeding preference can differ substantially. These and other conclusions about network features are purely empirical, based entirely on observed network attributes while accounting for biological information built directly into the model. Thus, statistical SNA techniques, through statistical inference for feeding activity and preference, provide an alternative perspective of trophic clustering to yield comprehensive insight into food web structure. PMID:21896716
ERIC Educational Resources Information Center
Mackenzie, Helen; Tolley, Harry; Croft, Tony; Grove, Michael; Lawson, Duncan
2016-01-01
This article explores the perspectives of three senior managers in higher education institutions in England regarding their mathematics and statistics support provision. It does so by means of a qualitative case study that draws upon the writing of Ronald Barnett about the identity of an "ecological" university, along with metaphors…
ERIC Educational Resources Information Center
König, Johannes
2015-01-01
The study aims at developing and exploring a novel video-based assessment that captures classroom management expertise (CME) of teachers and for which statistical results are provided. CME measurement is conceptualized by using four video clips that refer to typical classroom management situations in which teachers are heavily challenged…
Code of Federal Regulations, 2010 CFR
2010-01-01
...-Ratings-Based and Advanced Measurement Approaches F Appendix F to Part 208 Banks and Banking FEDERAL... Guidelines for Banks: Internal-Ratings-Based and Advanced Measurement Approaches Part I General Provisions... Equity Exposures Section 51 Introduction and Exposure Measurement Section 52 Simple Risk Weight...
Code of Federal Regulations, 2010 CFR
2010-01-01
...-Ratings-Based and Advanced Measurement Approaches C Appendix C to Part 567 Banks and Banking OFFICE OF... Capital Requirements—Internal-Ratings-Based and Advanced Measurement Approaches Part I General Provisions... Equity Exposures Section 51 Introduction and Exposure Measurement Section 52 Simple Risk Weight...
Bruni, Aline Thaís; Velho, Jesus Antonio; Ferreira, Arthur Serra Lopes; Tasso, Maria Júlia; Ferrari, Raíssa Santos; Yoshida, Ricardo Luís; Dias, Marcos Salvador; Leite, Vitor Barbanti Pereira
2014-08-01
This study uses statistical techniques to evaluate reports on suicide scenes; it utilizes 80 reports from different locations in Brazil, randomly collected from both federal and state jurisdictions. We aimed to assess a heterogeneous group of cases in order to obtain an overall perspective of the problem. We evaluated variables regarding the characteristics of the crime scene, such as the detected traces (blood, instruments and clothes), and we addressed the methodology employed by the experts. A qualitative approach using basic statistics revealed wide variation in how the issue was addressed in the documents. We then examined a quantitative approach involving an empirical equation and used multivariate procedures to validate the proposed quantitative methodology. The methodology successfully identified the main differences in the information presented in the reports, showing that there is no standardized method of analyzing evidence.
Rodrigo, C.; Rodrigo, M.; Dunne, K.; Morgan, L.
1998-07-01
Typical wetland creations rely on sizable surface water inputs, provided by stream diversion or large surface run-on, to enhance the success of establishing the wetland hydrology. However, not all landscapes provide sizable hydrological inputs from these sources. This paper presents a case history and the statistical approach adopted to model groundwater for a wetland created in a landscape position that does not allow for the use of surface water inputs.
Harrison, Jay M; Breeze, Matthew L; Berman, Kristina H; Harrigan, George G
2013-03-01
Bayesian approaches to evaluation of crop composition data allow simpler interpretations than traditional statistical significance tests. An important advantage of Bayesian approaches is that they allow formal incorporation of previously generated data through prior distributions in the analysis steps. This manuscript describes key steps to ensure meaningful and transparent selection and application of informative prior distributions. These include (i) review of previous data in the scientific literature to form the prior distributions, (ii) proper statistical model specification and documentation, (iii) graphical analyses to evaluate the fit of the statistical model to new study data, and (iv) sensitivity analyses to evaluate the robustness of results to the choice of prior distribution. The validity of the prior distribution for any crop component is critical to acceptance of Bayesian approaches to compositional analyses and would be essential for studies conducted in a regulatory setting. Selection and validation of prior distributions for three soybean isoflavones (daidzein, genistein, and glycitein) and two oligosaccharides (raffinose and stachyose) are illustrated in a comparative assessment of data obtained on GM and non-GM soybean seed harvested from replicated field sites at multiple locations in the US during the 2009 growing season.
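The prior-incorporation step described above can be sketched with the simplest conjugate case: a normal prior (formed, say, from literature values for an isoflavone mean) combined with new study data of known sampling variance. The numbers below are hypothetical, and the manuscript's actual workflow is richer (model checking, graphical diagnostics), but the mechanics of step (i) look like this:

```python
def posterior_normal(prior_mean, prior_var, data_mean, data_var, n):
    """Conjugate normal-normal update: combine an informative prior with
    the mean of n new observations of known sampling variance data_var."""
    precision = 1.0 / prior_var + n / data_var   # precisions add
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + n * data_mean / data_var)
    return post_mean, post_var

# hypothetical isoflavone example: literature prior vs. a new 16-sample study
post_mean, post_var = posterior_normal(prior_mean=2.0, prior_var=0.25,
                                       data_mean=2.4, data_var=0.16, n=16)
```

Re-running with a nearly flat prior (very large `prior_var`) pulls the posterior mean toward the new data alone, which is exactly the kind of sensitivity check to the choice of prior recommended in step (iv).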
Prediction of fragmentation of kidney stones: A statistical approach from NCCT images
Moorthy, Krishna; Krishnan, Meenakshy
2016-01-01
Introduction: We sought to develop a system to predict the fragmentation of stones using non-contrast computed tomography (NCCT) image analysis of patients with renal stone disease. Methods: The features corresponding to the first-order statistical (FOS) method were extracted from the region of interest in the NCCT scan image of patients undergoing extracorporeal shockwave lithotripsy (ESWL) treatment, and breakability was predicted using a neural network. Results: When the mean was considered as the feature, the results indicated that the prediction model had a sensitivity of 80.7% for true positive (TP) cases. The accuracy in correctly identifying TP and true negative (TN) cases was 90%. TN cases were identified with a specificity of 98.4%. Conclusions: Application of statistical methods and training of the neural network system will enable accurate prediction of fragmentation and of the outcome of ESWL treatment. PMID:28255414
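First-order statistical features of the kind used here are computed from the ROI's gray-level distribution alone, ignoring spatial arrangement. A minimal sketch (the pixel values are invented, and the study's actual feature set and neural-network step are not reproduced):

```python
import math

def first_order_stats(roi):
    """First-order statistical (FOS) features computed from the gray-level
    distribution of a region of interest, with no spatial information."""
    n = len(roi)
    mean = sum(roi) / n
    var = sum((x - mean) ** 2 for x in roi) / n   # population variance
    sd = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in roi) / (n * sd ** 3) if sd else 0.0
    counts = {}                                    # gray-level histogram
    for x in roi:
        counts[x] = counts.get(x, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "variance": var, "skewness": skew, "entropy": entropy}

# made-up gray levels from a hypothetical stone region of interest
feats = first_order_stats([10, 10, 12, 14, 14, 14, 16, 30])
```

Features like these form the input vector that a classifier (here, a neural network) would then map to a fragmentation outcome.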
Statistical shape analysis using 3D Poisson equation--A quantitatively validated approach.
Gao, Yi; Bouix, Sylvain
2016-05-01
Statistical shape analysis has been an important area of research with applications in biology, anatomy, neuroscience, agriculture, paleontology, etc. Unfortunately, the proposed methods are rarely quantitatively evaluated, and as shown in recent studies, when they are evaluated, significant discrepancies exist in their outputs. In this work, we concentrate on the problem of finding the consistent location of deformation between two populations of shapes. We propose a new shape analysis algorithm along with a framework to perform a quantitative evaluation of its performance. Specifically, the algorithm constructs a Signed Poisson Map (SPoM) by solving two Poisson equations on volumetric shapes of arbitrary topology, and statistical analysis is then carried out on the SPoMs. The method is quantitatively evaluated on synthetic shapes and applied to real shape data sets of brain structures.
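The core numerical ingredient, solving a Poisson equation on a discretized domain, can be sketched in 2-D with a Jacobi iteration on the unit square. This toy reproduces only the solver step, not the signed maps or the volumetric shapes of the paper:

```python
def poisson_unit_square(n=15, iters=3000):
    """Jacobi solve of -laplacian(u) = 1 on the unit square with u = 0 on the
    boundary, using the standard 5-point stencil on an n x n interior grid."""
    h = 1.0 / (n + 1)
    u = [[0.0] * (n + 2) for _ in range(n + 2)]   # includes boundary ring
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                # average of neighbors plus the h^2 * rhs source term
                new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                    + u[i][j - 1] + u[i][j + 1] + h * h)
        u = new
    return u

u = poisson_unit_square()
center = u[8][8]   # grid point at (0.5, 0.5) for n = 15, h = 1/16
```

For this classic test problem the continuum solution at the center is about 0.0737, so the converged discrete value provides a quick correctness check on the solver.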
NASA Astrophysics Data System (ADS)
Liu, Wenjia; Schmittmann, Beate; Zia, R. K. P.
2012-02-01
Network studies have played a central role in understanding many systems in nature - e.g., physical, biological, and social. So far, much of the focus has been on the statistics of networks in isolation. Yet many networks in the world are coupled to each other. Recently, we considered this issue in the context of two interacting social networks. In particular, we studied networks with two different preferred degrees, modeling, say, introverts vs. extroverts, with a variety of "rules for engagement." As a first step towards an analytically accessible theory, we restrict our attention to an "extreme scenario": the introverts prefer zero contacts while the extroverts like to befriend everyone in the society. In this "maximally frustrated" system, the degree distributions, as well as the statistics of cross-links (between the two groups), can depend sensitively on how a node (individual) creates/breaks its connections. The simulation results can be reasonably well understood in terms of an approximate theory.
Statistical Approaches for Analyzing Mutational Spectra: Some Recommendations for Categorical Data
Piegorsch, W. W.; Bailer, A. J.
1994-01-01
In studies examining the patterns or spectra of mutational damage, the primary variables of interest are typically expressed as discrete counts within defined categories of damage. Various statistical methods can be applied to test for heterogeneity among the observed spectra of different classes, treatment groups and/or doses of a mutagen. These are described and compared via computer simulations to determine which are most appropriate for practical use in the evaluation of spectral data. Our results suggest that selected, simple modifications of the usual Pearson X² statistic for contingency tables provide stable false positive error rates near the usual α = 0.05 level and also acceptable sensitivity to detect differences among spectra. Extensions to the problem of identifying individual differences within and among mutant spectra are noted. PMID:8138174
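For readers unfamiliar with the baseline statistic being modified here, the usual Pearson X² for comparing mutant spectra across groups is just the contingency-table statistic. A minimal illustration with made-up counts over four damage categories:

```python
def pearson_chi2(table):
    """Pearson X^2 statistic for an r x c contingency table of counts,
    with expected counts from the usual independence (homogeneity) model."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    grand = sum(row_tot)
    x2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / grand   # expected count
            x2 += (obs - exp) ** 2 / exp
    df = (len(table) - 1) * (len(table[0]) - 1)
    return x2, df

# two hypothetical mutant spectra over four damage categories
x2, df = pearson_chi2([[20, 15, 30, 35],
                       [25, 10, 40, 25]])
```

The modifications recommended in the abstract adjust how this statistic is referred to its null distribution when category counts are small; the raw X² above is compared to a χ² distribution with df degrees of freedom only in large samples.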
Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W; Tremlett, Helen
2016-09-21
In longitudinal studies, if the time-dependent covariates are affected by past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models are frequently used to deal with such confounding. To avoid some of the problems of fitting a marginal structural Cox model, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as the marginal structural Cox model in addressing time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995-2008).
Changes in Wave Climate from a Multi-model Global Statistical projection approach.
NASA Astrophysics Data System (ADS)
Camus, Paula; Menendez, Melisa; Perez, Jorge; Losada, Inigo
2016-04-01
Despite their outstanding relevance to coastal impacts of climate change (i.e., inundation, global beach erosion), ensemble products of global wave climate projections under the new Representative Concentration Pathways (RCPs) described by the IPCC are rather limited. This work presents a global study of changes in wave climate under several scenarios using a new statistical method. The method is based on the statistical relationship between meteorological conditions over the geographical area of wave generation (predictor) and the resulting wave characteristics at a particular location (predictand). The atmospheric input variables are sea level pressure anomalies and gradients over the spatial and temporal scales characterized by ESTELA maps (Perez et al. 2014). ESTELA characterizes the area of wave influence of any particular ocean location worldwide, including contour lines of wave energy and isochrones of travel time in that area. Principal component analysis is then applied to the sea level pressure information over the ESTELA region to define a multi-regression statistical model based on several data mining techniques. Once the multi-regression technique is defined and validated against historical information from atmospheric reanalysis (predictor) and wave hindcast (predictand), the method is applied using more than 35 Global Climate Models from CMIP5 to estimate changes in several sea-state parameters (e.g., significant wave height, peak period) at seasonal and annual scales during the last decades of the 21st century. The uncertainty of the estimated wave climate changes in the ensemble is also provided and discussed.
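The two-step structure described, principal components of the pressure predictor followed by a multiple regression onto a local wave parameter, can be sketched on synthetic data. The ESTELA weighting, travel-time isochrones, and CMIP5 ensemble are all omitted, and the field sizes and coefficients below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic sea-level-pressure predictor: 200 days x 12 grid points, driven
# by two latent circulation modes plus weak noise
modes = rng.normal(size=(200, 2))
loadings = rng.normal(size=(2, 12))
slp = modes @ loadings + 0.1 * rng.normal(size=(200, 12))

# local predictand (say, significant wave height) responds to the same modes
hs = 2.0 + 0.8 * modes[:, 0] - 0.5 * modes[:, 1] + 0.05 * rng.normal(size=200)

# step 1: principal components of the centered predictor field (via SVD)
anom = slp - slp.mean(axis=0)
_, _, vt = np.linalg.svd(anom, full_matrices=False)
pcs = anom @ vt[:2].T                       # leading two PCs

# step 2: multiple linear regression of the predictand on the PCs
X = np.column_stack([np.ones(len(hs)), pcs])
coef, *_ = np.linalg.lstsq(X, hs, rcond=None)
r2 = 1 - np.sum((hs - X @ coef) ** 2) / np.sum((hs - hs.mean()) ** 2)
```

Because the leading PCs span the latent circulation modes here, the regression explains nearly all of the predictand variance; in real applications the skill is of course far lower and must be validated against an independent hindcast period.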
Tasaki, Hal
2016-04-29
Based on quantum statistical mechanics and microscopic quantum dynamics, we prove Planck's and Kelvin's principles for macroscopic systems in a general and realistic setting. We consider a hybrid quantum system that consists of the thermodynamic system, which is initially in thermal equilibrium, and the "apparatus" which operates on the former, and assume that the whole system evolves autonomously. This provides a satisfactory derivation of the second law for macroscopic systems.
Okamura, H; Punt, A E; Semba, Y; Ichinokawa, M
2013-04-01
This paper proposes a new and flexible statistical method for marginal increment analysis that directly accounts for periodicity in circular data using a circular-linear regression model with random effects. The method is applied to vertebral marginal increment data for Alaska skate Bathyraja parmifera. The best fit model selected using the AIC indicates that growth bands are formed annually. Simulation, where the underlying characteristics of the data are known, shows that the method performs satisfactorily when uncertainty is not extremely high.
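The periodic component of such a marginal increment model can be illustrated with its simplest fixed-effects analogue: regressing increment width on sine and cosine of time-of-year, which recovers an annual amplitude and phase. The random effects and AIC-based model selection of the paper are omitted, and the data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic marginal-increment data: capture times t (in years) and a
# seasonal signal with amplitude 0.6 and phase 1.0 rad (invented values)
t = rng.uniform(0.0, 3.0, 120)
y = 1.5 + 0.6 * np.cos(2 * np.pi * t - 1.0) + 0.1 * rng.normal(size=120)

# least-squares fit of  y = a + b*cos(2*pi*t) + c*sin(2*pi*t)
X = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * t),
                     np.sin(2 * np.pi * t)])
(a, b, c), *_ = np.linalg.lstsq(X, y, rcond=None)
amp_hat = np.hypot(b, c)          # recovered seasonal amplitude
phase_hat = np.arctan2(c, b)      # recovered phase (radians)
```

A clearly nonzero fitted amplitude is the regression analogue of concluding that growth bands are laid down with an annual period.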
HotPatch: A Statistical Approach to Finding Biologically Relevant Features on Protein Surfaces
Pettit, Frank K.; Bare, Emiko; Tsai, Albert; Bowie, James U.
2007-01-01
We describe a fully automated algorithm for finding functional sites on protein structures. Our method finds surface patches of unusual physicochemical properties on protein structures, and estimates the patches’ probability of overlapping functional sites. Other methods for predicting the locations of specific types of functional sites exist, but in previous analyses, it has been difficult to compare methods when they are applied to different types of sites. Thus, we introduce a new statistical framework that enables rigorous comparisons of the usefulness of different physicochemical properties for predicting virtually any kind of functional site. The program’s statistical models were trained for 11 individual properties (electrostatics, concavity, hydrophobicity, etc.) and for 15 neural network combination properties, all optimized and tested on 15 diverse protein functions. To simulate what to expect if the program were run on proteins of unknown function, as might arise from structural genomics, we tested it on 618 proteins of diverse mixed functions. In the higher-scoring top half of all predictions, a functional residue could typically be found within the first 1.7 residues chosen at random. The program may or may not use partial information about the protein’s function type as an input, depending on which statistical model the user chooses to employ. If function type is used as an additional constraint, prediction accuracy usually increases, and is particularly good for enzymes, DNA-interacting sites, and oligomeric interfaces. The program can be accessed online at http://hotpatch.mbi.ucla.edu. PMID:17451744
Boyacioglu, Hayal; Boyacioglu, Hülya
2009-05-01
This paper examines the water supply profile of Turkey. To that end, the questionnaire survey conducted by the Turkish Statistical Institute in 2004, which investigated the annual amount of water abstracted to drinking water networks by type of resource in 81 provinces, was evaluated. In the questionnaire, sources were grouped under five categories: spring, (artificial) lake, river, reservoir, and well. Because of the complex, multivariate character of the data sets, the statistical method of factor analysis was performed to replace a large collection of variables with a smaller number of factors. Results revealed that water supply systems in the country were mainly governed by groundwater sources (well and/or spring). However, in the northeastern part of the country, rivers were allocated for drinking water supply. On the other hand, reservoir-dependent cities were densely located in the Marmara, Central Anatolia, and Southeast Anatolia Regions. This study showed that statistics-based classification methods assist decision makers in extracting information from multidimensional complex data sets representing environmental conditions.
DEVELOPMENT OF AN ADVANCED APPROACH FOR NEXT-GENERATION INTEGRATED RESERVOIR CHARACTERIZATION
Scott R. Reeves
2005-04-01
Accurate, high-resolution, three-dimensional (3D) reservoir characterization can provide substantial benefits for effective oilfield management. By doing so, the predictive reliability of reservoir flow models, which are routinely used as the basis for investment decisions involving hundreds of millions of dollars and designed to recover millions of barrels of oil, can be significantly improved. Even a small improvement in incremental recovery for high-value assets can result in important contributions to bottom-line profitability. Today's standard practice for developing a 3D reservoir description is to use seismic inversion techniques. These techniques make use of geostatistics and other stochastic methods to solve the inverse problem, i.e., to iteratively construct a likely geologic model and then upscale and compare its acoustic response to that actually observed in the field. This method has several inherent flaws: (1) the resulting models are highly non-unique; multiple equiprobable realizations are produced, meaning (2) the results define a distribution of possible outcomes; the best they can do is quantify the uncertainty inherent in the modeling process; (3) each realization must be run through a flow simulator and history matched to assess its appropriateness; and therefore (4) the method is labor intensive and requires significant time to complete a field study, and thus is applied to only a small percentage of oil and gas producing assets. A new approach to achieve this objective was first examined in a Department of Energy (DOE) study performed by Advanced Resources International (ARI) in 2000/2001. The goal of that study was to evaluate whether robust relationships between data at vastly different scales of measurement could be established using virtual intelligence (VI) methods. The proposed workflow required that three specific relationships be established through use of artificial neural networks (ANNs): core-to-log, log
Recent advances in approach to treatment of genetic disorders: clinicians perspective.
Gupta, Neerja; Kabra, Madhulika
2007-05-01
There is no cure for most genetic disorders. The only option in most situations is prevention through counseling and prenatal diagnosis. However, over the past decade, with the completion of the Human Genome Project and other advances, there is a better understanding of pathogenesis, improved diagnostic strategies, and various treatment avenues opening up for these disorders. The aim of this article is to make pediatricians aware of the approaches to treatment of common genetic disorders and recently available therapeutic interventions.
Advances in Proteomics Data Analysis and Display Using an Accurate Mass and Time Tag Approach
Zimmer, Jennifer S.D.; Monroe, Matthew E.; Qian, Wei-Jun; Smith, Richard D.
2007-01-01
Proteomics has recently demonstrated utility in understanding cellular processes on the molecular level as a component of systems biology approaches and for identifying potential biomarkers of various disease states. The large amount of data generated by utilizing high efficiency (e.g., chromatographic) separations coupled to high mass accuracy mass spectrometry for high-throughput proteomics analyses presents challenges related to data processing, analysis, and display. This review focuses on recent advances in nanoLC-FTICR-MS-based proteomics approaches and the accompanying data processing tools that have been developed to display and interpret the large volumes of data being produced. PMID:16429408
Uniting Statistical and Individual-Based Approaches for Animal Movement Modelling
Latombe, Guillaume; Parrott, Lael; Basille, Mathieu; Fortin, Daniel
2014-01-01
The dynamic nature of their internal states and of the environment directly shapes animals' spatial behaviours and gives rise to emergent properties at broader scales in natural systems. However, integrating these dynamic features into habitat selection studies remains challenging, because field access to internal states is practically impossible and current statistical models cannot produce dynamic outputs. To address these issues, we developed a robust method which combines statistical and individual-based modelling (IBM). Using a statistical technique for forward modelling of the IBM has the advantage of being faster to parameterize than a pure inverse modelling technique and allows for robust selection of parameters. Using GPS locations from caribou monitored in Québec, caribou movements were modelled based on generative mechanisms accounting for dynamic variables at a low level of emergence. These variables were accessed by replicating real individuals' movements in parallel sub-models, and movement parameters were then empirically parameterized using Step Selection Functions. The final IBM was validated using both k-fold cross-validation and emergent-pattern validation, and was tested for two scenarios with varying hardwood encroachment. Our results highlighted a functional response in habitat selection, which suggests that our method captured the complexity of the natural system and adequately provided projections of future possible states of the system in response to different management plans. This is especially relevant for testing the long-term impact of scenarios corresponding to environmental configurations that have yet to be observed in real systems. PMID:24979047
A statistical approach to develop a detailed soot growth model using PAH characteristics
Raj, Abhijeet; Celnik, Matthew; Shirley, Raphael; Sander, Markus; Patterson, Robert; West, Richard; Kraft, Markus
2009-04-15
A detailed PAH growth model is developed, which is solved using a kinetic Monte Carlo algorithm. The model describes the structure and growth of planar PAH molecules, and is referred to as the kinetic Monte Carlo-aromatic site (KMC-ARS) model. A detailed PAH growth mechanism based on reactions at radical sites available in the literature, and additional reactions obtained from quantum chemistry calculations, are used to model the PAH growth processes. New rates for the reactions involved in the cyclodehydrogenation process for the formation of 6-member rings on PAHs are calculated in this work based on density functional theory simulations. The KMC-ARS model is validated by comparing experimentally observed ensembles of PAHs with the computed ensembles for a C₂H₂ and a C₆H₆ flame at different heights above the burner. The motivation for this model is the development of a detailed soot particle population balance model which describes the evolution of an ensemble of soot particles based on their PAH structure. However, at present incorporating such a detailed model into a population balance is computationally unfeasible. Therefore, a simpler model, referred to as the site-counting model, has been developed, which replaces the structural information of the PAH molecules by their functional groups augmented with statistical closure expressions. This closure is obtained from the KMC-ARS model, which is used to develop correlations and statistics in different flame environments that describe such PAH structural information. These correlations and statistics are implemented in the site-counting model, and results from the site-counting model and the KMC-ARS model are in good agreement. Additionally, the effect of steric hindrance in large PAH structures is investigated and correlations for sites unavailable for reaction are presented.
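The kinetic Monte Carlo machinery underlying a model like KMC-ARS can be illustrated with the smallest possible example: a Gillespie-style loop for first-order decay. The actual model tracks aromatic sites and a full reaction mechanism; the rate constant and population below are arbitrary:

```python
import math
import random

def kmc_decay(n0=1000, k=0.5, seed=3):
    """Minimal kinetic Monte Carlo (Gillespie-style) loop for the
    first-order reaction A -> B with rate constant k per molecule."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    event_times = []
    while n > 0:
        total_rate = k * n
        # exponentially distributed waiting time to the next event
        t += -math.log(1.0 - rng.random()) / total_rate
        n -= 1                      # one A molecule reacts
        event_times.append(t)
    return event_times

times = kmc_decay()
t_half = times[499]   # time at which half of the 1000 molecules have reacted
```

In a site-based model the single reaction channel is replaced by one channel per reactive site type, and the event choice is made proportionally to each channel's rate, but the waiting-time logic is the same.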
Observations of non-Rayleigh statistics in the approach to photon localization.
Stoytchev, M; Genack, A Z
1999-02-15
We measure the distribution of intensity of microwave radiation transmitted through absorbing random waveguides of lengths L up to the localization length ξ. For large intensity values the distribution is given by a negative stretched exponential to the 1/2 power, in agreement with predictions by Nieuwenhuizen and van Rossum [Phys. Rev. Lett. 74, 2674 (1995)] for diffusing waves in nonabsorbing samples, as opposed to the negative exponential given by Rayleigh statistics. The intensity distribution is well described by a transform, derived by Kogan and Kaveh [Phys. Rev. B 52, R3813 (1995)], of the measured distribution of total transmission.
A new approach to Monte Carlo simulations in statistical physics: Wang-Landau sampling
NASA Astrophysics Data System (ADS)
Landau, D. P.; Tsai, Shan-Ho; Exler, M.
2004-10-01
We describe a Monte Carlo algorithm for doing simulations in classical statistical physics in a different way. Instead of sampling the probability distribution at a fixed temperature, a random walk is performed in energy space to extract an estimate for the density of states. The probability can be computed at any temperature by weighting the density of states by the appropriate Boltzmann factor. Thermodynamic properties can be determined from suitable derivatives of the partition function and, unlike "standard" methods, the free energy and entropy can also be computed directly. To demonstrate the simplicity and power of the algorithm, we apply it to models exhibiting first-order or second-order phase transitions.
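The random walk in energy space can be sketched for a toy system where the exact density of states is known: here n non-interacting two-state spins, with energy E equal to the number of "up" spins and g(E) = C(n, E). This is a minimal illustration under our own conventions, not the authors' implementation.

```python
import math
import random

def wang_landau(n_spins=8, flatness=0.8, f_final=1e-4, seed=1):
    """Wang-Landau sampling for n_spins non-interacting spins.
    Energy E = number of 'up' spins; exact density of states is C(n, E).
    Returns the estimated ln g(E) (up to an additive constant)."""
    rng = random.Random(seed)
    spins = [0] * n_spins
    E = 0
    ln_g = [0.0] * (n_spins + 1)
    hist = [0] * (n_spins + 1)
    ln_f = 1.0                      # initial modification factor ln f
    while ln_f > f_final:
        for _ in range(10000):
            i = rng.randrange(n_spins)
            E_new = E + (1 if spins[i] == 0 else -1)
            # accept with probability min(1, g(E)/g(E_new))
            if math.log(rng.random() + 1e-300) < ln_g[E] - ln_g[E_new]:
                spins[i] ^= 1
                E = E_new
            ln_g[E] += ln_f         # update density of states at current E
            hist[E] += 1
        # when the histogram is flat, reset it and refine ln f
        if min(hist) > flatness * (sum(hist) / len(hist)):
            hist = [0] * (n_spins + 1)
            ln_f /= 2.0
    return ln_g

ln_g = wang_landau()
# normalize so ln g(0) = 0 and compare with the exact ln C(8, E)
rel = [ln_g[E] - ln_g[0] for E in range(9)]
exact = [math.log(math.comb(8, E)) for E in range(9)]
```

Once ln g(E) is known, the partition function at any temperature follows by weighting g(E) with the Boltzmann factor, which is exactly the temperature-independence advantage the abstract describes.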
Firing statistics and correlations in spiking neurons: a level-crossing approach.
Badel, Laurent
2011-10-01
We present a time-dependent level-crossing theory for linear dynamical systems perturbed by colored Gaussian noise. We apply these results to approximate the firing statistics of conductance-based integrate-and-fire neurons receiving excitatory and inhibitory Poissonian inputs. Analytical expressions are obtained for three key quantities characterizing the neuronal response to time-varying inputs: the mean firing rate, the linear response to sinusoidally modulated inputs, and the pairwise spike correlation for neurons receiving correlated inputs. The theory yields tractable results that are shown to accurately match numerical simulations and provides useful tools for the analysis of interconnected neuronal populations.
FUN.STAT Quantile Approach to Two Sample Statistical Data Analysis.
1983-04-01
Was "Snodgrass" actually written by Mark Twain? Let X and Y respectively denote the proportion of three-letter words in (eight) Twain essays and (ten) … concerning equality of populations) of the rank sum equal to 110, or equivalently of the statistic T = (1/m) Σ_{j=1}^{m} R_j. Note E[T] = 0.5. For the Mark Twain data … test procedure would decide that Twain wrote the Snodgrass papers. IV. Pseudo-correlations. The following table lists for the Mark Twain data the
Robust statistical approaches for local planar surface fitting in 3D laser scanning data
NASA Astrophysics Data System (ADS)
Nurunnabi, Abdul; Belton, David; West, Geoff
2014-10-01
This paper proposes robust methods for local planar surface fitting in 3D laser scanning data. Searching through the literature revealed that many authors frequently used Least Squares (LS) and Principal Component Analysis (PCA) for point cloud processing without any treatment of outliers. It is known that LS and PCA are sensitive to outliers and can give inconsistent and misleading estimates. RANdom SAmple Consensus (RANSAC) is one of the most well-known robust methods used for model fitting when noise and/or outliers are present. We concentrate on the recently introduced Deterministic Minimum Covariance Determinant estimator and robust PCA, and propose two variants of statistically robust algorithms for fitting planar surfaces to 3D laser scanning point cloud data. The performance of the proposed robust methods is demonstrated by qualitative and quantitative analysis through several synthetic and mobile laser scanning 3D data sets for different applications. Using simulated data, and comparisons with LS, PCA, RANSAC, variants of RANSAC and other robust statistical methods, we demonstrate that the new algorithms are significantly more efficient, faster, and produce more accurate fits and robust local statistics (e.g. surface normals), necessary for many point cloud processing tasks. Consider one example data set consisting of 100 points with 20% outliers representing a plane. The proposed methods, called DetRD-PCA and DetRPCA, produce bias angles (the angle between the planes fitted with and without outliers) of 0.20° and 0.24° respectively, whereas LS, PCA and RANSAC produce worse bias angles of 52.49°, 39.55° and 0.79° respectively. In terms of speed, DetRD-PCA takes 0.033 s on average for fitting a plane, which is approximately 6.5, 25.4 and 25.8 times faster than RANSAC and two other robust statistical methods, respectively. The estimated robust surface normals and curvatures from the new methods have been used for plane fitting, sharp feature
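The outlier sensitivity of plain PCA fitting, and the bias-angle metric used above, can be reproduced with a small synthetic example. This is an illustration of the problem the paper addresses, not the DetRD-PCA/DetRPCA algorithms themselves; the data are made up.

```python
import numpy as np

def pca_plane_normal(points):
    """Fit a plane to Nx3 points by PCA: the normal is the direction of
    smallest variance, i.e. the last right singular vector of the centered data."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def bias_angle(n1, n2):
    """Angle in degrees between two plane normals, sign-invariant."""
    c = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

rng = np.random.default_rng(0)
# 80 inliers on the z = 0 plane with small noise, 20 gross outliers above it
inliers = np.column_stack([rng.uniform(-1, 1, 80),
                           rng.uniform(-1, 1, 80),
                           rng.normal(0, 0.01, 80)])
outliers = np.column_stack([rng.uniform(-1, 1, 20),
                            rng.uniform(-1, 1, 20),
                            rng.uniform(1.5, 2.5, 20)])
clean_normal = pca_plane_normal(inliers)
contaminated_normal = pca_plane_normal(np.vstack([inliers, outliers]))
angle = bias_angle(clean_normal, contaminated_normal)
```

With 20% outliers the variance added along z exceeds the in-plane variance, so the PCA normal swings far away from the true one, giving a large bias angle of the kind the paper reports for non-robust fits.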
Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization
Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona
2016-01-01
Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually the robustness of such approaches is not addressed or investigated. The goal of this paper is to show how to robustify the floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach would allow a building-independent estimation and a lower computing power at the mobile side. Four robustified algorithms are to be presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We will show that robustification can indeed increase the performance of the RSS-based floor detection algorithms. PMID:27258279
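A weighted centroid estimate and one simple way to robustify it can be sketched as follows. The trimming rule (drop the weakest-RSS access points before averaging) is our own simplification for illustration, not the paper's exact robust weighted centroid algorithm, and the AP positions and RSS values are made up.

```python
import numpy as np

def weighted_centroid(ap_positions, rss_dbm):
    """Classical weighted centroid: weights proportional to linear received power."""
    w = 10.0 ** (np.asarray(rss_dbm) / 10.0)
    return (w[:, None] * ap_positions).sum(axis=0) / w.sum()

def robust_weighted_centroid(ap_positions, rss_dbm, keep=0.75):
    """Simple robustification: discard the APs with the weakest RSS before
    taking the weighted centroid, so corrupted or far-away measurements
    cannot drag the estimate."""
    rss = np.asarray(rss_dbm)
    order = np.argsort(rss)[::-1]          # strongest first
    k = max(1, int(keep * len(rss)))
    sel = order[:k]
    return weighted_centroid(ap_positions[sel], rss[sel])

# hypothetical 2D layout: four APs, mobile station near AP 0
aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rss = np.array([-40.0, -60.0, -60.0, -90.0])
est = robust_weighted_centroid(aps, rss)
```

The same weighting idea extends to the vertical coordinate, which is what turns an RSS-based position estimate into a floor estimate.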
Multivariate statistical approach to estimate mixing proportions for unknown end members
Valder, Joshua F.; Long, Andrew J.; Davis, Arden D.; Kenner, Scott J.
2012-01-01
A multivariate statistical method is presented, which includes principal components analysis (PCA) and an end-member mixing model to estimate unknown end-member hydrochemical compositions and the relative mixing proportions of those end members in mixed waters. PCA, together with the Hotelling T2 statistic and a conceptual model of groundwater flow and mixing, was used in selecting samples that best approximate end members, which then were used as initial values in optimization of the end-member mixing model. This method was tested on controlled datasets (i.e., true values of estimates were known a priori) and found effective in estimating these end members and mixing proportions. The controlled datasets included synthetically generated hydrochemical data, synthetically generated mixing proportions, and laboratory analyses of sample mixtures, which were used in an evaluation of the effectiveness of this method for potential use in actual hydrological settings. For three different scenarios tested, correlation coefficients (R2) for linear regression between the estimated and known values ranged from 0.968 to 0.993 for mixing proportions and from 0.839 to 0.998 for end-member compositions. The method also was applied to field data from a study of end-member mixing in groundwater as a field example and partial method validation.
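The final mixing step can be sketched as a sum-to-one least-squares problem once end-member compositions are in hand (a minimal illustration with made-up compositions; the paper's PCA/Hotelling T2 selection of end members and the nonnegativity handling are omitted):

```python
import numpy as np

def mixing_proportions(end_members, sample, penalty=1e6):
    """Estimate mixing proportions p with sum(p) = 1 by least squares:
    minimize ||E p - x||^2 + penalty * (sum(p) - 1)^2.
    end_members: (n_species, n_members) matrix, columns = end members."""
    E = np.asarray(end_members, dtype=float)
    x = np.asarray(sample, dtype=float)
    # append a heavily weighted row enforcing sum(p) = 1
    A = np.vstack([E, np.full((1, E.shape[1]), np.sqrt(penalty))])
    b = np.concatenate([x, [np.sqrt(penalty)]])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# two hypothetical end members described by three chemical species each
E = np.array([[100.0, 10.0],
              [ 20.0, 80.0],
              [  5.0, 50.0]])
true_p = np.array([0.3, 0.7])
x = E @ true_p            # a perfectly mixed sample
p_hat = mixing_proportions(E, x)
```

With noise-free data the recovered proportions match the true ones exactly; with real hydrochemical data the residual of the fit indicates how well the assumed end members explain the mixture.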
Drug-excipient compatibility testing using a high-throughput approach and statistical design.
Wyttenbach, Nicole; Birringer, Christian; Alsenz, Jochem; Kuentz, Martin
2005-01-01
The aim of our research was to develop a miniaturized high-throughput drug-excipient compatibility test. Experiments were planned and evaluated using statistical experimental design. Binary mixtures of a drug, acetylsalicylic acid or fluoxetine hydrochloride, and of excipients commonly used in solid dosage forms were prepared at a ratio of approximately 1:100 in 96-well microtiter plates. Samples were exposed to different temperatures (40 °C/50 °C) and humidities (10%/75%) for different times (1 week/4 weeks), and chemical drug degradation was analyzed using fast-gradient high-pressure liquid chromatography (HPLC). Categorical statistical design was applied to identify the effects and interactions of time, temperature, humidity, and excipient on drug degradation. Acetylsalicylic acid was least stable in the presence of magnesium stearate, dibasic calcium phosphate, or sodium starch glycolate. Fluoxetine hydrochloride exhibited a marked degradation only with lactose. Factor-interaction plots revealed that the relative humidity had the strongest effect on the drug-excipient blends tested. In conclusion, the developed technique enables fast drug-excipient compatibility testing and identification of interactions. Since only 0.1 mg of drug is needed per data point, fast rational preselection of the pharmaceutical additives can be performed early in solid dosage form development.
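Main-effect estimation for a two-level factorial design of this kind can be sketched in a few lines. The design and degradation numbers below are synthetic (chosen so that humidity dominates, echoing the paper's qualitative finding), not the study's data.

```python
import numpy as np

def main_effects(design, response):
    """Main effects for a two-level full factorial design coded as -1/+1:
    effect of a factor = mean(response at +1) - mean(response at -1)."""
    design = np.asarray(design, dtype=float)
    y = np.asarray(response, dtype=float)
    return np.array([y[design[:, j] > 0].mean() - y[design[:, j] < 0].mean()
                     for j in range(design.shape[1])])

# hypothetical 2^3 design: temperature, humidity, time (low = -1, high = +1)
X = np.array([[s1, s2, s3] for s1 in (-1, 1) for s2 in (-1, 1) for s3 in (-1, 1)])
# synthetic % degradation with humidity as the dominant factor
y = 1.0 + 0.5 * X[:, 0] + 3.0 * X[:, 1] + 0.2 * X[:, 2]
effects = main_effects(X, y)   # [1.0, 6.0, 0.4]: twice each coefficient
```

Interaction effects are obtained the same way using products of design columns, which is what the factor-interaction plots in the paper visualize.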
F.R. Carrillo-Pedroza; A. Davalos Sanchez; M. Soria-Aguilar; E.T. Pecina Trevino
2009-07-15
The removal of pyritic sulfur from a Mexican sub-bituminous coal in nitric, sulfuric, and hydrochloric acid solutions was investigated. The effect of the type and concentration of acid, in the presence of hydrogen peroxide and ozone as oxidants, in a temperature range of 20-60 °C, was studied. The relevant factors in pyrite dissolution were determined by means of the statistical analysis of variance and optimized by the response surface method. Kinetic models were also evaluated, showing that the dissolution of pyritic sulfur follows the shrinking core model, with diffusion through the solid product of the reaction as the controlling stage. The results of statistical analysis indicate that the use of ozone as an oxidant improves the pyrite dissolution because, at 0.25 M HNO3 or H2SO4 at 20 °C and 0.33 g/h O3, the obtained dissolution is similar to that of 1 M H2O2 and 1 M HNO3 or H2SO4 at 40 °C. 42 refs., 9 figs., 3 tabs.
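The diffusion-controlled shrinking core model relates conversion X and time t through 1 - 3(1-X)^(2/3) + 2(1-X) = kt, so the rate constant follows from a straight-line fit through the origin. A minimal sketch with synthetic kinetics (our illustration, not the study's data):

```python
import numpy as np

def shrinking_core_diffusion(X):
    """Conversion function for the shrinking core model with diffusion through
    the product layer as the controlling stage: g(X) = 1 - 3(1-X)^(2/3) + 2(1-X)."""
    X = np.asarray(X, dtype=float)
    return 1.0 - 3.0 * (1.0 - X) ** (2.0 / 3.0) + 2.0 * (1.0 - X)

def fit_rate_constant(t, X):
    """Least-squares slope through the origin of g(X) versus t."""
    t = np.asarray(t, dtype=float)
    g = shrinking_core_diffusion(X)
    return float((t @ g) / (t @ t))

# synthetic kinetics: choose k, generate conversions, then recover k
k_true = 0.01
t = np.array([10.0, 20.0, 40.0, 60.0, 90.0])
g_target = k_true * t
X = []
for gt in g_target:
    # invert the monotone g(X) by bisection for each time point
    lo, hi = 0.0, 0.999999
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if shrinking_core_diffusion(mid) < gt:
            lo = mid
        else:
            hi = mid
    X.append(0.5 * (lo + hi))
k_hat = fit_rate_constant(t, X)
```

Linearity of g(X) against t (and not of competing conversion functions) is the diagnostic that identifies product-layer diffusion as the rate-controlling step.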
A novel approach for statistical downscaling of future precipitation over the Indo-Gangetic Basin
NASA Astrophysics Data System (ADS)
Chaudhuri, Chiranjib; Srivastava, Rajesh
2017-04-01
We propose a novel statistical downscaling method using Global Circulation Model (GCM) rainfall and satellite based precipitation estimate Tropical Rainfall Measurement Mission (TRMM; 3B43v7) to generate a high-resolution rainfall (0.25° × 0.25°) estimate over the Indo-Gangetic Basin (IGB) for 9 GCM and 4 Special Report on Emissions Scenarios (SRES) combinations. These precipitation values, along with the precipitation dataset from the APHRODITE's Water Resources project are then seasonally segregated (winter, pre-monsoon, monsoon and post-monsoon) and combined into a Bayesian framework to generate probability distribution of future precipitation change at regional scale. We considered present time as 2001-2010, and 3 non-overlapping time slices 2011-2040, 2041-2070, and 2071-2100 as future. The precipitation trends are heterogeneous in space and seasons, but there is an overall consistency in trends for different future time slices. The shapes of the final probability density functions given by the kernel density estimators show varying characteristics. Compared to traditional transfer function based statistical downscaling methods our framework allows downscaling to basin level gridded rainfall rather than station specific precipitation. It also allows an integrated estimate of uncertainties arising from different sources which is an essential diagnostic when datasets from various sources are considered. Furthermore, the Bayesian framework allows the analysis of means and precisions of precipitation, even when they reveal characteristics, such as multi-modality and long tails.
Process simulation and statistical approaches for validating waste form qualification models
Kuhn, W.L.; Toland, M.R.; Pulsipher, B.A.
1989-05-01
This report describes recent progress toward one of the principal objectives of the Nuclear Waste Treatment Program (NWTP) at the Pacific Northwest Laboratory (PNL): to establish relationships between vitrification process control and glass product quality. During testing of a vitrification system, it is important to show that departures affecting the product quality can be sufficiently detected through process measurements to prevent an unacceptable canister from being produced. Meeting this goal is a practical definition of a successful sampling, data analysis, and process control strategy. A simulation model has been developed and preliminarily tested by applying it to approximate operation of the West Valley Demonstration Project (WVDP) vitrification system at West Valley, New York. Multivariate statistical techniques have been identified and described that can be applied to analyze large sets of process measurements. Information on components, tanks, and time is then combined to create a single statistic through which all of the information can be used at once to determine whether the process has shifted away from a normal condition.
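Combining many correlated process measurements into a single shift-detection statistic is classically done with Hotelling's T2; a minimal sketch (our illustration of the general technique, not the report's specific procedure, with made-up numbers):

```python
import numpy as np

def hotelling_t2(sample, mean, cov):
    """Hotelling T^2 statistic for one multivariate observation against an
    in-control mean vector and covariance matrix: d' C^-1 d."""
    d = np.asarray(sample, dtype=float) - np.asarray(mean, dtype=float)
    return float(d @ np.linalg.solve(cov, d))

# in-control process: two strongly correlated measurements
mean = np.array([10.0, 5.0])
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
t2_in = hotelling_t2([10.5, 5.4], mean, cov)    # deviations consistent with correlation
t2_out = hotelling_t2([10.5, 4.6], mean, cov)   # same size deviations, wrong joint direction
```

The second observation is within range on each variable separately yet yields a much larger T2, which is exactly why a single multivariate statistic detects shifts that univariate control limits miss.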
NASA Astrophysics Data System (ADS)
Chandrasekaran, A.; Ravisankar, R.; Harikrishnan, N.; Satapathy, K. K.; Prasad, M. V. R.; Kanagasabapathy, K. V.
2015-02-01
Anthropogenic activities increase the accumulation of heavy metals in the soil environment. Soil pollution significantly reduces environmental quality and affects the human health. In the present study soil samples were collected at different locations of Yelagiri Hills, Tamilnadu, India for heavy metal analysis. The samples were analyzed for twelve selected heavy metals (Mg, Al, K, Ca, Ti, Fe, V, Cr, Mn, Co, Ni and Zn) using energy dispersive X-ray fluorescence (EDXRF) spectroscopy. Heavy metals concentration in soil were investigated using enrichment factor (EF), geo-accumulation index (Igeo), contamination factor (CF) and pollution load index (PLI) to determine metal accumulation, distribution and its pollution status. Heavy metal toxicity risk was assessed using soil quality guidelines (SQGs) given by target and intervention values of Dutch soil standards. The concentration of Ni, Co, Zn, Cr, Mn, Fe, Ti, K, Al, Mg were mainly controlled by natural sources. Multivariate statistical methods such as correlation matrix, principal component analysis and cluster analysis were applied for the identification of heavy metal sources (anthropogenic/natural origin). Geo-statistical methods such as kirging identified hot spots of metal contamination in road areas influenced mainly by presence of natural rocks.
Xu, Selene Yue; Nelson, Sandahl; Kerr, Jacqueline; Godbole, Suneeta; Patterson, Ruth; Merchant, Gina; Abramson, Ian; Staudenmayer, John; Natarajan, Loki
2016-07-10
Physical inactivity is a recognized risk factor for many chronic diseases. Accelerometers are increasingly used as an objective means to measure daily physical activity. One challenge in using these devices is missing data due to device nonwear. We used a well-characterized cohort of 333 overweight postmenopausal breast cancer survivors to examine missing data patterns of accelerometer outputs over the day. Based on these observed missingness patterns, we created pseudo-simulated datasets with realistic missing data patterns. We developed statistical methods to design imputation and variance weighting algorithms to account for missing data effects when fitting regression models. Bias and precision of each method were evaluated and compared. Our results indicated that not accounting for missing data in the analysis yielded unstable estimates in the regression analysis. Incorporating variance weights and/or subject-level imputation improved precision by >50%, compared to ignoring missing data. We recommend that these simple, easy-to-implement statistical tools be used to improve analysis of accelerometer data.
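The variance-weighting idea can be sketched as weighted least squares, where subjects whose summaries are noisier (e.g. because of more nonwear time) receive smaller weights. This is a generic WLS illustration on synthetic data, not the authors' algorithm or cohort.

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """WLS estimate: solve (X' W X) beta = X' W y, with per-subject weights w
    (e.g. inverse of the variance induced by missing wear time)."""
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

rng = np.random.default_rng(42)
n = 500
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])
# half the subjects have much noisier activity summaries
sigma = rng.choice([0.5, 3.0], size=n)
y = 2.0 + 0.7 * x + rng.normal(0, sigma)
beta_ols = weighted_least_squares(X, y, np.ones(n))      # unweighted fit
beta_wls = weighted_least_squares(X, y, 1.0 / sigma**2)  # inverse-variance weights
```

Both fits are unbiased here, but the inverse-variance weights shrink the standard errors by down-weighting the noisy subjects, which mirrors the precision gains reported above.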
Kulesz, Paulina A.; Tian, Siva; Juranek, Jenifer; Fletcher, Jack M.; Francis, David J.
2015-01-01
Objective: Weak structure-function relations for brain and behavior may stem from problems in estimating these relations in small clinical samples with frequently occurring outliers. In the current project, we focused on the utility of using alternative statistics to estimate these relations. Method: Fifty-four children with spina bifida meningomyelocele performed attention tasks and received MRI of the brain. Using a bootstrap sampling process, the Pearson product-moment correlation was compared with four robust correlations: the percentage bend correlation, the Winsorized correlation, the skipped correlation using the Donoho-Gasko median, and the skipped correlation using the minimum volume ellipsoid estimator. Results: All methods yielded similar estimates of the relations between measures of brain volume and attention performance. The similarity of estimates across correlation methods suggested that the weak structure-function relations previously found in many studies are not readily attributable to the presence of outlying observations and other factors that violate the assumptions behind the Pearson correlation. Conclusions: Given the difficulty of assembling large samples for brain-behavior studies, estimating correlations using multiple, robust methods may enhance the statistical conclusion validity of studies yielding small, but often clinically significant, correlations. PMID:25495830
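Of the robust correlations compared above, the Winsorized correlation is the simplest to sketch: clamp each variable's tails to a quantile, then take Pearson's r. The synthetic data below (our illustration, not the study's) show how a single outlier pair wrecks Pearson's r but barely moves the Winsorized version.

```python
import numpy as np

def winsorized_correlation(x, y, gamma=0.2):
    """Winsorized correlation: clamp the lowest and highest 100*gamma percent
    of each variable to the corresponding quantiles, then take Pearson's r."""
    def winsorize(v):
        lo, hi = np.quantile(v, [gamma, 1.0 - gamma])
        return np.clip(v, lo, hi)
    xw = winsorize(np.asarray(x, dtype=float))
    yw = winsorize(np.asarray(y, dtype=float))
    return float(np.corrcoef(xw, yw)[0, 1])

rng = np.random.default_rng(7)
n = 50
x = rng.normal(0, 1, n)
y = 0.6 * x + rng.normal(0, 0.8, n)   # true correlation 0.6
x_out = np.append(x, 8.0)             # one extreme, discordant outlier pair
y_out = np.append(y, -8.0)
r_pearson = float(np.corrcoef(x_out, y_out)[0, 1])
r_wins = winsorized_correlation(x_out, y_out)
```

Bootstrapping either statistic, as the study does, then gives confidence intervals that are honest about small-sample variability.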
A statistical and experimental approach for assessing the preservation of plant lipids in soil
NASA Astrophysics Data System (ADS)
Mueller, K. E.; Eissenstat, D. M.; Oleksyn, J.; Freeman, K. H.
2011-12-01
Plant-derived lipids contribute to stable soil organic matter, but further interpretations of their abundance in soils are limited because the factors that control lipid preservation are poorly understood. Using data from a long-term field experiment and simple statistical models, we provide novel constraints on several predictors of the concentration of hydrolyzable lipids in forest mineral soils. Focal lipids included common monomers of cutin, suberin, and plant waxes present in tree leaves and roots. Soil lipid concentrations were most strongly influenced by the concentrations of lipids in leaves and roots of the overlying trees, but were also affected by the type of lipid (e.g. alcohols vs. acids), lipid chain length, and whether lipids originated in leaves or roots. Collectively, these factors explained ~80% of the variation in soil lipid concentrations beneath 11 different tree species. In order to use soil lipid analyses to test and improve conceptual models of soil organic matter stabilization, additional studies that provide experimental and quantitative (i.e. statistical) constraints on plant lipid preservation are needed.
Gomez-Ramirez, Jaime; Sanz, Ricardo
2013-09-01
One of the most important scientific challenges today is the quantitative and predictive understanding of biological function. Classical mathematical and computational approaches have been enormously successful in modeling inert matter, but they may be inadequate to address inherent features of biological systems. We address the conceptual and methodological obstacles that lie in the inverse problem in biological systems modeling. We introduce a full Bayesian approach (FBA), a theoretical framework to study biological function, in which probability distributions are conditional on biophysical information that physically resides in the biological system that is studied by the scientist.
A Statistical Ontology-Based Approach to Ranking for Multiword Search
ERIC Educational Resources Information Center
Kim, Jinwoo
2013-01-01
Keyword search is a prominent data retrieval method for the Web, largely because the simple and efficient nature of keyword processing allows a large amount of information to be searched with fast response. However, keyword search approaches do not formally capture the clear meaning of a keyword query and fail to address the semantic relationships…
Graph-based and statistical approaches for detecting spectrally variable target materials
NASA Astrophysics Data System (ADS)
Ziemann, Amanda K.; Theiler, James
2016-05-01
In discriminating target materials from background clutter in hyperspectral imagery, one must contend with variability in both. Most algorithms focus on the clutter variability, but for some materials there is considerable variability in the spectral signatures of the target. This is especially the case for solid target materials, whose signatures depend on morphological properties (particle size, packing density, etc.) that are rarely known a priori. In this paper, we investigate detection algorithms that explicitly take into account the diversity of signatures for a given target. In particular, we investigate variable target detectors when applied to new representations of the hyperspectral data: a manifold learning based approach, and a residual based approach. The graph theory and manifold learning based approach incorporates multiple spectral signatures of the target material of interest; this is built upon previous work that used a single target spectrum. In this approach, we first build an adaptive nearest neighbors (ANN) graph on the data and target spectra, and use a biased locally linear embedding (LLE) transformation to perform nonlinear dimensionality reduction. This biased transformation results in a lower-dimensional representation of the data that better separates the targets from the background. The residual approach uses an annulus based computation to represent each pixel after an estimate of the local background is removed, which suppresses local backgrounds and emphasizes the target-containing pixels. We will show detection results in the original spectral space, the dimensionality-reduced space, and the residual space, all using subspace detectors: ranked spectral angle mapper (rSAM), subspace adaptive matched filter (ssAMF), and subspace adaptive cosine/coherence estimator (ssACE). Results of this exploratory study will be shown on a ground-truthed hyperspectral image with variable target spectra and both full and mixed pixel targets.
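The spectral angle at the core of a detector like rSAM is simply the angle between a pixel spectrum and a target spectrum; because it ignores overall magnitude, it is insensitive to illumination scaling. A minimal sketch with made-up four-band spectra (not the paper's subspace detectors or data):

```python
import numpy as np

def spectral_angle(pixel, target):
    """Spectral angle mapper (SAM): angle in radians between a pixel spectrum
    and a target spectrum; small angles mean similar spectral shape."""
    p = np.asarray(pixel, dtype=float)
    t = np.asarray(target, dtype=float)
    c = p @ t / (np.linalg.norm(p) * np.linalg.norm(t))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

target = np.array([0.2, 0.5, 0.9, 0.4])
bright_match = 3.0 * target              # same shape, brighter illumination
background = np.array([0.9, 0.4, 0.2, 0.1])
a_match = spectral_angle(bright_match, target)
a_back = spectral_angle(background, target)
```

The subspace variants discussed in the paper generalize this by measuring the angle to a subspace spanned by several target signatures, which is how signature variability is accommodated.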
Defect-phase-dynamics approach to statistical domain-growth problem of clock models
NASA Technical Reports Server (NTRS)
Kawasaki, K.
1985-01-01
The growth of statistical domains in quenched Ising-like p-state clock models with p = 3 or more is investigated theoretically, reformulating the analysis of Ohta et al. (1982) in terms of a phase variable and studying the dynamics of defects introduced into the phase field when the phase variable becomes multivalued. The resulting defect/phase domain-growth equation is applied to the interpretation of Monte Carlo simulations in two dimensions (Kaski and Gunton, 1983; Grest and Srolovitz, 1984), and problems encountered in the analysis of related Potts models are discussed. In the two-dimensional case, the problem is essentially that of a purely dissipative Coulomb gas, with a √t growth law complicated by vertex-pinning effects at small t.
A Unified Approach to Conformational Statistics of Classical Polymer and Polypeptide Models
Kim, Jin Seob; Chirikjian, Gregory S.
2010-01-01
We present a unified method to generate conformational statistics which can be applied to any of the classical discrete-chain polymer models. The proposed method employs the concepts of Fourier transform and generalized convolution for the group of rigid-body motions in order to obtain probability density functions of chain end-to-end distance. In this paper, we demonstrate the proposed method with three different cases: the freely-rotating model, independent energy model, and interdependent pairwise energy model (the last two are also well-known as the Rotational Isomeric State model). For simplicity, the numerical examples assume homogeneous polymer chains. For the freely-rotating model, we verify the proposed method by comparing with well-known closed-form results for mean-squared end-to-end distance. In the interdependent pairwise energy case, we take polypeptide chains such as polyalanine and polyvaline as examples. PMID:20165562
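The closed-form check mentioned for the freely-rotating model is ⟨R²⟩/b² = N(1+g)/(1-g) - 2g(1-g^N)/(1-g)² with g = cos θ, where θ is the fixed deflection angle between successive unit bonds. A direct Monte Carlo comparison (our own simplified sketch in the spirit of that verification, not the paper's group-theoretic method):

```python
import numpy as np

def freely_rotating_chain(n_bonds, deflection, rng):
    """Sample one freely rotating chain: unit bonds, fixed angle between
    successive bond vectors, uniform torsion. Returns the squared
    end-to-end distance."""
    c, s = np.cos(deflection), np.sin(deflection)
    bond = np.array([0.0, 0.0, 1.0])
    end = bond.copy()
    for _ in range(n_bonds - 1):
        phi = rng.uniform(0.0, 2.0 * np.pi)
        # orthonormal frame (u, v) perpendicular to the current bond
        a = np.array([1.0, 0.0, 0.0]) if abs(bond[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        u = np.cross(bond, a); u /= np.linalg.norm(u)
        v = np.cross(bond, u)
        bond = c * bond + s * (np.cos(phi) * u + np.sin(phi) * v)
        end = end + bond
    return float(end @ end)

def exact_mean_r2(n, deflection):
    """Closed-form mean-squared end-to-end distance (in units of b^2)."""
    g = np.cos(deflection)
    return n * (1 + g) / (1 - g) - 2 * g * (1 - g ** n) / (1 - g) ** 2

rng = np.random.default_rng(3)
n, theta = 50, np.radians(70.0)
samples = [freely_rotating_chain(n, theta, rng) for _ in range(2000)]
mc_mean = float(np.mean(samples))
theory = exact_mean_r2(n, theta)
```

The formula reduces to the familiar limits: ⟨R²⟩ = b² for N = 1 and ⟨R²⟩ → Nb²(1+g)/(1-g) for long chains.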
Quantum-statistical T-matrix approach to line broadening of hydrogen in dense plasmas
Lorenzen, Sonja; Wierling, August; Roepke, Gerd; Reinholz, Heidi; Zammit, Mark C.; Fursa, Dmitry V.; Bray, Igor
2010-10-29
The electronic self-energy Σ^e is an important input in a quantum-statistical theory for spectral line profile calculations. It describes the influence of plasma electrons on bound-state properties. In dense plasmas, the effect of strong, i.e. close, electron-emitter collisions can be considered by three-particle T-matrix diagrams. These diagrams are approximated with the help of an effective two-particle T-matrix, which is obtained from convergent close-coupling calculations with Debye screening. A comparison with other theories is carried out for the 2p level of hydrogen at k_B T = 1 eV and n_e = 2·10^23 m^-3, and results are given for n_e = 1·10^25 m^-3.
Mohajeri, Leila; Aziz, Hamidi Abdul; Isa, Mohamed Hasnain; Zahed, Mohammad Ali
2010-02-01
This work studied the bioremediation of weathered crude oil (WCO) in coastal sediment samples using a central composite face-centered design (CCFD) under response surface methodology (RSM). Initial oil concentration, biomass, nitrogen and phosphorus concentrations were used as independent variables (factors) and oil removal as the dependent variable (response) in a 60-day trial. A statistically significant model for WCO removal was obtained. The coefficient of determination (R(2)=0.9732) and probability value (P<0.0001) demonstrated significance for the regression model. Numerical optimization based on a desirability function was carried out for initial oil concentrations of 2, 16 and 30 g per kg sediment, and removals of 83.13, 78.06 and 69.92 per cent were observed, respectively, compared to 77.13, 74.17 and 69.87 per cent removal for the un-optimized results.
NASA Technical Reports Server (NTRS)
Houston, A. G.; Feiveson, A. H.; Chhikara, R. S.; Hsu, E. M. (Principal Investigator)
1979-01-01
A statistical methodology was developed to check the accuracy of the products of the experimental operations throughout crop growth and to determine whether the procedures are adequate to accomplish the desired accuracy and reliability goals. It has allowed the identification and isolation of key problems in wheat area yield estimation, some of which have been corrected and some of which remain to be resolved. The major unresolved problem in accuracy assessment is that of precisely estimating the bias of the LACIE production estimator. Topics covered include: (1) evaluation techniques; (2) variance and bias estimation for the wheat production estimate; (3) the 90/90 evaluation; (4) comparison of the LACIE estimate with reference standards; and (5) first and second order error source investigations.
A copula approach on the dynamics of statistical dependencies in the US stock market
NASA Astrophysics Data System (ADS)
Münnix, Michael C.; Schäfer, Rudi
2011-11-01
We analyze the statistical dependence structure of the S&P 500 constituents in the 4-year period from 2007 to 2010 using intraday data from the New York Stock Exchange's TAQ database. Instead of using a given parametric copula with a predetermined shape, we study the empirical pairwise copula directly. We find that the shape of this copula resembles the Gaussian copula to some degree, but exhibits a stronger tail dependence, for both correlated and anti-correlated extreme events. By dynamically comparing the tail dependence to the market's average correlation level, a commonly used quantity, we disclose the average level of error of the Gaussian copula that is implicit in the calculation of many correlation coefficients.
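Studying the empirical pairwise copula amounts to rank-transforming each return series to the unit interval and then examining the joint distribution of the transformed pairs, e.g. via an empirical tail-dependence estimate. A minimal sketch on a synthetic Gaussian-copula sample (our illustration, not the TAQ analysis):

```python
import numpy as np

def empirical_copula(x, y):
    """Map a bivariate sample to the unit square via rank transforms,
    giving a sample from the empirical pairwise copula."""
    n = len(x)
    u = (np.argsort(np.argsort(x)) + 1) / (n + 1)
    v = (np.argsort(np.argsort(y)) + 1) / (n + 1)
    return u, v

def lower_tail_dependence(u, v, q=0.05):
    """Empirical lower-tail dependence: P(V <= q | U <= q)."""
    mask = u <= q
    return float(np.mean(v[mask] <= q)) if mask.any() else 0.0

rng = np.random.default_rng(1)
n = 20000
# synthetic returns with a Gaussian copula, correlation 0.5
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n)
u, v = empirical_copula(z[:, 0], z[:, 1])
lam = lower_tail_dependence(u, v, q=0.05)
```

For real equity returns the same estimator typically yields a larger tail-dependence value than a Gaussian copula with matching correlation, which is the discrepancy the paper quantifies.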
Chen, Yue; Cunningham, Gregory; Henderson, Michael
2016-09-21
This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ∼ 2°, than those from the three empirical models with averaged errors > ∼ 5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
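The basic error metric here, the angle between a derived and a modeled field direction, is straightforward to compute (a sketch with made-up direction vectors, not LANL GEO data):

```python
import numpy as np

def angular_error_deg(b_derived, b_model):
    """Angle in degrees between two magnetic field direction estimates."""
    b1 = np.asarray(b_derived, dtype=float)
    b2 = np.asarray(b_model, dtype=float)
    c = b1 @ b2 / (np.linalg.norm(b1) * np.linalg.norm(b2))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

# hypothetical near-field-aligned unit vectors at a GEO location
derived = np.array([0.05, 0.02, 0.998])
model = np.array([0.12, -0.03, 0.992])
err = angular_error_deg(derived, model)
```

Averaging such angles over an orbit gives the few-degree error statistics quoted in the abstract; the clipping guards against round-off pushing the cosine just outside [-1, 1].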
NASA Astrophysics Data System (ADS)
Paxian, A.; Hertig, E.; Seubert, S.; Vogt, G.; Jacobeit, J.; Paeth, H.
2015-02-01
The Mediterranean area is strongly vulnerable to future changes in temperature and precipitation, particularly concerning extreme events, and has been identified as a climate change hot spot. This study performs a comprehensive investigation of present-day and future Mediterranean precipitation extremes based on station data, gridded observations and simulations of the regional climate model (REMO) driven by the coupled global general circulation model ECHAM5/MPI-OM. Extreme value estimates from different statistical methods—quantile-based indices, generalized pareto distribution (GPD) based return values and data from a weather generator—are compared and evaluated. Dynamical downscaling reveals improved small-scale topographic structures and more realistic higher rainfall totals and extremes over mountain ranges and in summer. REMO tends to overestimate gridded observational data in winter but is closer to local station information. The dynamical-statistical weather generator provides virtual station rainfall from gridded REMO data that overcomes typical discrepancies between area-averaged model rainfall and local station information, e.g. overestimated numbers of rainy days and underestimated extreme intensities. Concerning future rainfall amount, strong summer and winter drying over the northern and southern Mediterranean, respectively, is confronted with winter wetting over the northern part. In contrast, precipitation extremes tend to increase in even more Mediterranean areas, implying regions with decreasing totals but intensifying extremes, e.g. southern Europe and Turkey in winter and the Balkans in summer. The GPD based return values reveal slightly larger regions of increasing rainfall extremes than quantile-based indices, and the virtual stations from the weather generator show even stronger increases.
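A quantile-based extreme index of the kind compared above can be sketched as an R95p-style statistic: total precipitation falling on days wetter than the reference period's 95th percentile of wet days. The daily rainfall below is synthetic (gamma-distributed placeholders, not the REMO or station data).

```python
import numpy as np

def r95p(precip, reference, wet_threshold=1.0, q=0.95):
    """Quantile-based extreme index: total precipitation on days exceeding the
    reference period's q-quantile of wet days (>= wet_threshold mm).
    Returns (index value, threshold)."""
    wet_ref = reference[reference >= wet_threshold]
    thresh = float(np.quantile(wet_ref, q))
    return float(precip[precip > thresh].sum()), thresh

rng = np.random.default_rng(5)
reference = rng.gamma(shape=0.6, scale=8.0, size=3650)   # ~10 yr of synthetic daily rain
future = rng.gamma(shape=0.6, scale=10.0, size=3650)     # heavier-tailed "future" climate
r95_ref, thr = r95p(reference, reference)
r95_fut, _ = r95p(future, reference)
```

Because the threshold is fixed from the reference period, the index can rise in a future run even where total rainfall falls, which is exactly the "decreasing totals but intensifying extremes" pattern the study reports.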
NASA Astrophysics Data System (ADS)
Chen, Yue; Cunningham, Gregory; Henderson, Michael
2016-09-01
This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ˜ 2°, than those from the three empirical models with averaged errors > ˜ 5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
Statistical Approaches to Assess the Effects of Disease on Neurocognitive Function Over Time
Bergemann, Tracy L; Bangirana, Paul; Boivin, Michael J; Connett, John E; Giordani, Bruno J; John, Chandy C
2013-01-01
Introduction: Assessment of the effects of disease on neurocognitive outcomes in children over time presents several challenges. These challenges are particularly pronounced when conducting studies in low-income countries, where standardization and validation is required for tests developed originally in high-income countries. We present a statistical methodology to assess multiple neurocognitive outcomes over time. We address the standardization and adjustment for age in neurocognitive testing, present a statistical methodology for development of a global neurocognitive score, and assess changes in individual and global neurocognitive scores over time in a cohort of children with cerebral malaria. Methods: Ugandan children with cerebral malaria (CM, N = 44), uncomplicated malaria (UM, N = 54) and community controls (N = 89) were assessed by cognitive tests of working memory, executive attention and tactile learning at 0, 3, 6 and 24 months after recruitment. Tests were previously developed and validated for the local area. Test scores were adjusted for age, and a global score was developed based on the controls that combined the assessments of impairment in each neurocognitive domain. Global normalized Z-scores were computed for each of the three study groups. Model-based tests compare the Z-scores between groups. Results: We found that continuous Z-scores gave more powerful conclusions than previous analyses of the dataset. For example, at all four time points, children with CM had significantly lower global Z-scores than controls and children with UM. Our methods also provide more detailed descriptions of longitudinal trends. For example, the Z-scores of children with CM improved from initial testing to 3 months, but remained at approximately the same level below those of controls or children with UM from 3 to 24 months. Our methods for combining scores are more powerful than tests of individual cognitive domains, as testing of the individual domains revealed
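The construction of control-referenced Z-scores and a combined global score can be sketched in a few lines. Everything below is hypothetical: the score scale, group shifts, and sample sizes are invented (only the group counts echo the abstract), and the study's actual standardization also adjusts for age, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical age-adjusted raw scores on three cognitive domains
# (e.g. working memory, executive attention, tactile learning)
controls = rng.normal(100.0, 15.0, size=(89, 3))
cm_group = rng.normal(88.0, 15.0, size=(44, 3))   # shifted down, like CM

# Normalize every domain against the control distribution
mu = controls.mean(axis=0)
sd = controls.std(axis=0, ddof=1)
z_controls = (controls - mu) / sd
z_cm = (cm_group - mu) / sd

# Global score: mean of a child's domain Z-scores
global_controls = z_controls.mean(axis=1)
global_cm = z_cm.mean(axis=1)
gap = global_controls.mean() - global_cm.mean()
```

Because the global score averages correlated evidence across domains, a group gap like `gap` can be detected with more power than any single-domain comparison, which is the abstract's central point.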
Mapping permeability in low-resolution micro-CT images: A multiscale statistical approach
NASA Astrophysics Data System (ADS)
Botha, Pieter W. S. K.; Sheppard, Adrian P.
2016-06-01
We investigate the possibility of predicting permeability in low-resolution X-ray microcomputed tomography (µCT). Lower-resolution whole core images give greater sample coverage and are therefore more representative of heterogeneous systems; however, the lower resolution causes connecting pore throats to be represented by intermediate gray scale values and limits information on pore system geometry, rendering such images inadequate for direct permeability simulation. We present an imaging and computation workflow aimed at predicting absolute permeability for sample volumes that are too large to allow direct computation. The workflow involves computing permeability from high-resolution µCT images, along with a series of rock characteristics (notably open pore fraction, pore size, and formation factor) from spatially registered low-resolution images. Multiple linear regression models correlating permeability to rock characteristics provide a means of predicting and mapping permeability variations in larger scale low-resolution images. Results show excellent agreement between permeability predictions made from 16 and 64 µm/voxel images of 25 mm diameter 80 mm tall core samples of heterogeneous sandstone for which 5 µm/voxel resolution is required to compute permeability directly. The statistical model used at the lowest resolution of 64 µm/voxel (similar to typical whole core image resolutions) includes open pore fraction and formation factor as predictor characteristics. Although binarized images at this resolution do not completely capture the pore system, we infer that these characteristics implicitly contain information about the critical fluid flow pathways. Three-dimensional permeability mapping in larger-scale lower resolution images by means of statistical predictions provides input data for subsequent permeability upscaling and the computation of effective permeability at the core scale.
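The regression step that maps low-resolution rock characteristics to permeability can be sketched with ordinary least squares. The predictor ranges, the log-linear model form, and all coefficients below are invented for illustration; the abstract names open pore fraction and formation factor as predictors but does not give the fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical characteristics measured on low-resolution subvolumes
open_pore_fraction = rng.uniform(0.05, 0.25, size=n)
formation_factor = rng.uniform(20.0, 200.0, size=n)

# Synthetic "truth": log-permeability rises with open porosity and
# falls with formation factor (coefficients invented)
log_k = 2.0 + 8.0 * open_pore_fraction - 0.8 * np.log(formation_factor)
log_k = log_k + rng.normal(0.0, 0.1, size=n)

# Multiple linear regression correlating permeability to characteristics
X = np.column_stack([np.ones(n), open_pore_fraction, np.log(formation_factor)])
coef, *_ = np.linalg.lstsq(X, log_k, rcond=None)
pred = X @ coef
r2 = 1.0 - ((log_k - pred) ** 2).sum() / ((log_k - log_k.mean()) ** 2).sum()
```

Once fitted on registered high-resolution/low-resolution pairs, a model like this can be evaluated on every subvolume of a whole-core image to produce the permeability map described above.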
Examining rainfall and cholera dynamics in Haiti using statistical and dynamic modeling approaches.
Eisenberg, Marisa C; Kujbida, Gregory; Tuite, Ashleigh R; Fisman, David N; Tien, Joseph H
2013-12-01
Haiti has been in the midst of a cholera epidemic since October 2010. Rainfall is thought to be associated with cholera here, but this relationship has only begun to be quantitatively examined. In this paper, we quantitatively examine the link between rainfall and cholera in Haiti for several different settings (including urban, rural, and displaced person camps) and spatial scales, using a combination of statistical and dynamic models. Statistical analysis of the lagged relationship between rainfall and cholera incidence was conducted using case crossover analysis and distributed lag nonlinear models. Dynamic models consisted of compartmental differential equation models including direct (fast) and indirect (delayed) disease transmission, where indirect transmission was forced by empirical rainfall data. Data sources include cholera case and hospitalization time series from the Haitian Ministry of Public Health, the United Nations Water, Sanitation and Health Cluster, International Organization for Migration, and Hôpital Albert Schweitzer. Rainfall data was obtained from rain gauges from the U.S. Geological Survey and Haiti Regeneration Initiative, and remote sensing rainfall data from the National Aeronautics and Space Administration Tropical Rainfall Measuring Mission. A strong relationship between rainfall and cholera was found for all spatial scales and locations examined. Increased rainfall was significantly correlated with increased cholera incidence 4-7 days later. Forcing the dynamic models with rainfall data resulted in good fits to the cholera case data, and rainfall-based predictions from the dynamic models closely matched observed cholera cases. These models provide a tool for planning and managing the epidemic as it continues.
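Detecting a lagged rainfall-cholera association (the abstract reports 4-7 days) can be sketched with simple lagged Pearson correlations; the case-crossover and distributed-lag machinery of the actual study is replaced here by a toy calculation on synthetic series with an invented 5-day delay.

```python
import numpy as np

rng = np.random.default_rng(3)
days = 365
rain = rng.gamma(0.6, 10.0, size=days)       # synthetic daily rainfall (mm)

# Synthetic case counts: baseline plus a rainfall effect delayed by 5 days
lag_true = 5
cases = 20.0 + 0.8 * np.roll(rain, lag_true)
cases[:lag_true] = 20.0                       # remove wrap-around spillover
cases += rng.normal(0.0, 2.0, size=days)

def lagged_corr(x, y, lag):
    """Pearson correlation of x[t] with y[t + lag]."""
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

corrs = {lag: lagged_corr(rain, cases, lag) for lag in range(1, 11)}
best_lag = max(corrs, key=corrs.get)
```

Scanning the correlation over candidate lags recovers the built-in delay; the dynamic transmission models in the study go further by forcing a compartmental model with the rainfall series itself.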
3D geometry analysis of the medial meniscus--a statistical shape modeling approach.
Vrancken, A C T; Crijns, S P M; Ploegmakers, M J M; O'Kane, C; van Tienen, T G; Janssen, D; Buma, P; Verdonschot, N
2014-10-01
The geometry-dependent functioning of the meniscus indicates that detailed knowledge on 3D meniscus geometry and its inter-subject variation is essential to design well functioning anatomically shaped meniscus replacements. Therefore, the aim of this study was to quantify 3D meniscus geometry and to determine whether variation in medial meniscus geometry is size- or shape-driven. Also we performed a cluster analysis to identify distinct morphological groups of medial menisci and assessed whether meniscal geometry is gender-dependent. A statistical shape model was created, containing the meniscus geometries of 35 subjects (20 females, 15 males) that were obtained from MR images. A principal component analysis was performed to determine the most important modes of geometry variation and the characteristic changes per principal component were evaluated. Each meniscus from the original dataset was then reconstructed as a linear combination of principal components. This allowed the comparison of male and female menisci, and a cluster analysis to determine distinct morphological meniscus groups. Of the variation in medial meniscus geometry, 53.8% was found to be due to primarily size-related differences and 29.6% due to shape differences. Shape changes were most prominent in the cross-sectional plane, rather than in the transverse plane. Significant differences between male and female menisci were only found for principal component 1, which predominantly reflected size differences. The cluster analysis resulted in four clusters, yet these clusters represented two statistically different meniscal shapes, as differences between cluster 1, 2 and 4 were only present for principal component 1. This study illustrates that differences in meniscal geometry cannot be explained by scaling only, but that different meniscal shapes can be distinguished. Functional analysis, e.g. through finite element modeling, is required to assess whether these distinct shapes actually influence
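A statistical shape model of the kind described above can be sketched with PCA on synthetic 2D contours. Real menisci are 3D surfaces segmented from MRI; the two variation modes below (an isotropic size mode and a pure shape mode) are invented to mimic the size-versus-shape decomposition the study reports.

```python
import numpy as np

rng = np.random.default_rng(5)
n_shapes, n_points = 35, 40

# Hypothetical aligned 2D contours: mean shape plus two variation modes
t = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
mean_shape = np.concatenate([np.cos(t), 0.6 * np.sin(t)])
mode_size = mean_shape.copy()                       # isotropic scaling mode
mode_shape = np.concatenate([np.zeros(n_points), np.sin(2.0 * t)])

b = rng.normal(0.0, [0.30, 0.10], size=(n_shapes, 2))   # mode weights
shapes = mean_shape + b[:, :1] * mode_size + b[:, 1:] * mode_shape

# Principal component analysis of the centered shape matrix
centered = shapes - shapes.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()

# Reconstruct shape 0 as a linear combination of the leading components
scores = centered @ vt[:2].T
recon = shapes.mean(axis=0) + scores[0] @ vt[:2]
err = np.linalg.norm(recon - shapes[0]) / np.linalg.norm(shapes[0])
```

Expressing each meniscus by its PC scores is exactly what enables the downstream gender comparison and cluster analysis: subjects become points in a low-dimensional mode space.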
Comparison of a Traditional Probabilistic Risk Assessment Approach with Advanced Safety Analysis
Smith, Curtis L; Mandelli, Diego; Zhegang Ma
2014-11-01
As part of the Light Water Reactor Sustainability (LWRS) Program [1], the purpose of the Risk Informed Safety Margin Characterization (RISMC) [2] Pathway research and development (R&D) is to support plant decisions for risk-informed margin management, with the aim of improving the economics and reliability, and sustaining the safety, of current NPPs. In this paper, we describe the RISMC analysis process, illustrating how mechanistic and probabilistic approaches are combined in order to estimate a safety margin. We use the scenario of a “station blackout” (SBO), wherein offsite and onsite power are lost, thereby causing a challenge to plant safety systems. We describe the RISMC approach, illustrate the station blackout modeling, and contrast this with traditional risk analysis modeling for this type of accident scenario. We also describe the approach we are using to represent advanced flooding analysis.
Advanced neuroprotection for brain ischemia: an alternative approach to minimize stroke damage.
Ayuso, Maria Irene; Montaner, Joan
2015-01-01
Despite decades of research on neuroprotectants in the fight against ischemic stroke, no successful results have been obtained and new alternative approaches are urgently needed. Translation of effective candidate drugs from experimental studies to patients has systematically failed. However, some of those treatments or neuroprotectant diets that demonstrated beneficial effects only if given before (but not after) ischemia induction, and were therefore discarded for conventional neuroprotection, could be rescued in order to apply an 'advanced neuroprotection strategy' (ADNES). Herein, the authors discuss how re-profiling those neuroprotective candidate drugs and diets with the best potential, some of which are mentioned in this article, as an ADNES may be a good approach for developing successful treatments that protect the brain against ischemic damage. This novel approach would try to protect the brain of patients who are at high risk of suffering a stroke, before damage occurs, so as to minimize brain injury by having the neuroprotectant drug or diet 'on board' if a stroke unfortunately occurs.
Krusinska, E.; Babic, A.; Chowdhury, S.; Wigertz, O.; Bodemar, G.; Mathiesen, U.
1991-01-01
In clinical research, data are often studied by a particular method without prior analysis of their quality or semantic content, which could link the clinical database to data-analytical (e.g. statistical) procedures. In order to avoid bias caused by this situation, we propose that the analysis of medical data should be divided into two main steps. In the first, we concentrate on conducting the quality, semantic and structure analyses. In the second step, our aim is to build an appropriate dictionary of data analysis methods for further knowledge extraction. Methods like robust statistical techniques, procedures for mixed continuous and discrete data, the fuzzy linguistic approach, machine learning and neural networks can be included. The results may be evaluated both by using test samples and by applying other relevant data-analytical techniques to the particular problem under study. PMID:1807621
Vickers, Andrew J
2005-01-01
Analysis of variance (ANOVA) is a statistical method that is widely used in the psychosomatic literature to analyze the results of randomized trials, yet ANOVA does not provide an estimate for the difference between groups, the key variable of interest in a randomized trial. Although the use of ANOVA is frequently justified on the grounds that a trial incorporates more than two groups, the hypothesis tested by ANOVA for these trials--"Are all groups equivalent?"--is often scientifically uninteresting. Regression methods are not only applicable to trials with many groups, but can be designed to address specific questions arising from the study design. ANOVA is also frequently used for trials with repeated measures, but the consequent reporting of "group effects," "time effects," and "time-by-group interactions" is a distraction from statistics of clinical and scientific value. Given that ANOVA is easily misapplied in the analysis of randomized trials, alternative approaches such as regression methods should be considered in preference.
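Vickers' central point, that regression directly estimates the between-group difference an omnibus ANOVA F-test never reports, can be shown in a few lines: with 0/1 dummy coding, the fitted slope is exactly the difference in group means. The group sizes and score distributions below are invented.

```python
import numpy as np

rng = np.random.default_rng(11)
control = rng.normal(50.0, 10.0, size=40)    # hypothetical outcome scores
treated = rng.normal(45.0, 10.0, size=40)

# Regression on a 0/1 group indicator: the fitted slope IS the
# between-group difference that an omnibus ANOVA F-test never reports.
y = np.concatenate([control, treated])
g = np.concatenate([np.zeros(40), np.ones(40)])
X = np.column_stack([np.ones(80), g])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

intercept, slope = coef
diff_means = treated.mean() - control.mean()
```

These are algebraic identities of ordinary least squares, not approximations: the intercept equals the reference-group mean and the slope equals the mean difference, which is the estimate (with its confidence interval) that a trial report actually needs.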
Moura, Lidia Mvr; Westover, M Brandon; Kwasnik, David; Cole, Andrew J; Hsu, John
2017-01-01
The elderly population faces an increasing number of cases of chronic neurological conditions, such as epilepsy and Alzheimer's disease. Because the elderly with epilepsy are commonly excluded from randomized controlled clinical trials, there are few rigorous studies to guide clinical practice. When the elderly are eligible for trials, they either rarely participate or frequently have poor adherence to therapy, thus limiting both generalizability and validity. In contrast, large observational data sets are increasingly available, but are susceptible to bias when using common analytic approaches. Recent developments in causal inference-analytic approaches also introduce the possibility of emulating randomized controlled trials to yield valid estimates. We provide a practical example of the application of the principles of causal inference to a large observational data set of patients with epilepsy. This review also provides a framework for comparative-effectiveness research in chronic neurological conditions.
A Parameterization Invariant Approach to the Statistical Estimation of the CKM Phase alpha
Morris, Robin D.; Cohen-Tanugi, Johann (SLAC)
2008-04-14
In contrast to previous analyses, we demonstrate a Bayesian approach to the estimation of the CKM phase α that is invariant to parameterization. We also show that in addition to computing the marginal posterior in a Bayesian manner, the distribution must also be interpreted from a subjective Bayesian viewpoint. Doing so gives a very natural interpretation to the distribution. We also comment on the effect of removing information about β^00.
Prediction of free air space in initial composting mixtures by a statistical design approach.
Soares, Micaela A R; Quina, Margarida J; Quinta-Ferreira, Rosa
2013-10-15
Free air space (FAS) is a physical parameter that can play an important role in composting processes to maintain favourable aerobic conditions. Aiming to predict the FAS of initial composting mixtures, specific material proportions ranging from 0 to 1 were tested for a case study comprising industrial potato peel, which is characterized by low air void volume, thus requiring additional components for its composting. The characterization and prediction of FAS for initial mixtures involving potato peel, grass clippings and rice husks (set A) or sawdust (set B) was accomplished by means of an augmented simplex-centroid mixture design approach. The experimental data were fitted to second-order Scheffé polynomials. Synergistic or antagonistic effects of mixture proportions on the FAS response were identified from the surface and response trace plots. Moreover, a good agreement was achieved between the model predictions and supplementary experimental data. In addition, theoretical and empirical approaches for estimating FAS available in the literature were compared with the predictions generated by the mixture design approach. This study demonstrated that the mixture design methodology can be a valuable tool for predicting the initial FAS of composting mixtures, specifically in making adjustments to improve composting processes containing primarily potato peel.
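Fitting a second-order Scheffé polynomial over an augmented simplex-centroid design can be sketched as follows. The design points follow the standard augmented simplex-centroid layout for three components, but the FAS coefficients and noise level are invented; the paper's actual responses are not reproduced here.

```python
import numpy as np

# Augmented simplex-centroid design for a 3-component mixture
# (e.g. potato peel, grass clippings, rice husks); proportions sum to 1
X_design = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],              # pure components
    [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],  # binary blends
    [1 / 3, 1 / 3, 1 / 3],                        # overall centroid
    [2 / 3, 1 / 6, 1 / 6], [1 / 6, 2 / 3, 1 / 6], [1 / 6, 1 / 6, 2 / 3],
], dtype=float)

def scheffe_terms(X):
    """Second-order Scheffé model: x_i and x_i*x_j terms, no intercept."""
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# Hypothetical FAS responses (%) from invented coefficients plus noise
beta_true = np.array([30.0, 55.0, 70.0, 20.0, -10.0, 5.0])
rng = np.random.default_rng(2)
fas = scheffe_terms(X_design) @ beta_true + rng.normal(0.0, 0.5, len(X_design))

beta_hat, *_ = np.linalg.lstsq(scheffe_terms(X_design), fas, rcond=None)
centroid = np.array([[1 / 3, 1 / 3, 1 / 3]])
fas_centroid = (scheffe_terms(centroid) @ beta_hat).item()
```

In this parameterization a positive cross term (here the invented b12 = 20) is a synergistic blending effect and a negative one (b13 = -10) antagonistic, which is what the surface and trace plots in the study visualize.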
Yamazawa, Akira; Date, Yasuhiro; Ito, Keijiro; Kikuchi, Jun
2014-03-01
Microbial ecosystems are typified by diverse microbial interactions and competition. Consequently, the microbial networks and metabolic dynamics of bioprocesses catalyzed by these ecosystems are highly complex, and their visualization is regarded as essential to bioengineering technology and innovation. Here we describe a means of visualizing the variants in a microbial community and their metabolic profiles. The approach enables previously unidentified bacterial functions in the ecosystems to be elucidated. We investigated the anaerobic bioremediation of chlorinated ethene in a soil column experiment as a case study. Microbial community and dechlorination profiles in the ecosystem were evaluated by denaturing gradient gel electrophoresis (DGGE) fingerprinting and gas chromatography, respectively. Dechlorination profiles were obtained from changes in dechlorination by the microbial community (evaluated by data mining methods). Individual microbes were then associated with their dechlorination profiles by heterogeneous correlation analysis. Our correlation-based visualization approach enables deduction of the roles and functions of bacteria in the dechlorination of chlorinated ethenes. Because it estimates functions and relationships between unidentified microbes and metabolites in microbial ecosystems, this approach is proposed as a control-logic tool by which to understand complex microbial processes.
NASA Astrophysics Data System (ADS)
Seçgin, Abdullah
2013-01-01
Statistical energy analysis (SEA) parameters such as average modal spacing, coupling loss factor and input power are numerically determined for point-connected, directly coupled symmetrically laminated composite plates using a modal-based approach. The approach is an enhancement of the classical wave transmission formula. Unlike most of the existing numerical or experimental techniques, the approach uses uncoupled plate modal information and treats each substructure by means of averaged modal impedances. The procedure introduced here is verified using analytical definitions of infinite orthotropic plates, which physically resemble laminated plates under specific conditions, and is tested by performing the experimental power injection method (PIM) for an actual, right-angled composite structure. In the development process, force and moment transmissions are individually considered in order to be consistent with the analytical formulations. Modal information of the composite plates is statistically evaluated by the discrete singular convolution method with random boundary conditions. The proposed methodology not only provides an efficient use of the SEA method in high-frequency vibration analysis of composite structures, but also enhances SEA accuracy in the mid-frequency region, in which conventional SEA fails. Furthermore, the effect of the orientation angles of the laminations on SEA parameters is also discussed in the mid- and high-frequency regions.
Snyder, Hannah R.; Miyake, Akira; Hankin, Benjamin L.
2015-01-01
Executive function (EF) is essential for successfully navigating nearly all of our daily activities. Of critical importance for clinical psychological science, EF impairments are associated with most forms of psychopathology. However, despite the proliferation of research on EF in clinical populations, with notable exceptions clinical and cognitive approaches to EF have remained largely independent, leading to failures to apply theoretical and methodological advances in one field to the other field and hindering progress. First, we review the current state of knowledge of EF impairments associated with psychopathology and limitations to the previous research in light of recent advances in understanding and measuring EF. Next, we offer concrete suggestions for improving EF assessment. Last, we suggest future directions, including integrating modern models of EF with state of the art, hierarchical models of dimensional psychopathology as well as translational implications of EF-informed research on clinical science. PMID:25859234
Application of the LBB regulatory approach to the steamlines of advanced WWER 1000 reactor
Kiselyov, V.A.; Sokov, L.M.
1997-04-01
The LBB regulatory approach adopted in Russia in 1993 as an extra safety barrier is described for the advanced WWER 1000 reactor steamline. The application of the LBB concept requires the following additional protections. First, the steamline should be a highly qualified piping, performed in accordance with the applicable regulations and guidelines, carefully screened to verify that it is not subjected to any disqualifying failure mechanism. Second, a deterministic fracture mechanics analysis and leak rate evaluation have been performed to demonstrate that a postulated through-wall crack that yields 95 l/min at normal operation conditions is stable even under seismic loads. Finally, it has been verified that the leak detection systems are sufficiently reliable, diverse and sensitive, and that adequate margins exist to detect a through-wall crack smaller than the critical size. The obtained results are encouraging and show the possibility of the application of the LBB case to the steamline of the advanced WWER 1000 reactor.
Rimayi, Cornelius; Odusanya, David; Mtunzi, Fanyana; Tsoka, Shepherd
2015-01-01
This paper investigates the efficiency of application of four different multivariate calibration techniques, namely matrix-matched internal standard (MMIS), matrix-matched external standard (MMES), solvent-only internal standard (SOIS) and solvent-only external standard (SOES), on the detection and quantification of 20 organochlorine compounds from high-, low- and blank-matrix water samples by Gas Chromatography-Mass Spectrometry (GC-MS) coupled to solid phase extraction (SPE). Further statistical testing, using the Statistical Package for the Social Sciences (SPSS) by applying MANOVA, T-tests and Levene's F tests, indicates that matrix composition has a more significant effect on the efficiency of the analytical method than the calibration method of choice. Matrix effects are widely described as one of the major sources of error in GC-MS multiresidue analysis. Descriptive and inferential statistics proved that matrix-matched internal standard calibration was the best approach to use for samples of varying matrix composition, as it produced the most precise average mean recovery of 87% across all matrices tested. The use of an internal standard calibration overall produced more precise total recoveries than external standard calibration, with mean values of 77% and 64%, respectively. The internal standard calibration technique produced a particularly high overall standard deviation of 38% at the 95% confidence level, indicating that it is less robust than the external standard calibration method, which had an overall standard error of 32% at the 95% confidence level. Overall, the matrix-matched external standard calibration proved to be the best calibration approach for analysis of low-matrix samples, which consisted of the real sample matrix, as it had the most precise recovery of 98% compared to other calibration approaches for the low-matrix samples.
Forecast of natural aquifer discharge using a data-driven, statistical approach.
Boggs, Kevin G; Van Kirk, Rob; Johnson, Gary S; Fairley, Jerry P
2014-01-01
In the Western United States, demand for water is often out of balance with limited water supplies. This has led to extensive water rights conflict and litigation. A tool that can reliably forecast natural aquifer discharge months ahead of peak water demand could help water practitioners and managers by providing advanced knowledge of potential water-right mitigation requirements. The timing and magnitude of natural aquifer discharge from the Eastern Snake Plain Aquifer (ESPA) in southern Idaho is accurately forecast 4 months ahead of the peak water demand, which occurs annually in July. An ARIMA time-series model with exogenous predictors (ARIMAX model) was used to develop the forecast. The ARIMAX model fit to a set of training data was assessed using Akaike's information criterion to select the optimal model that forecasts aquifer discharge, given the previous year's discharge and values of the predictor variables. Model performance was assessed by application of the model to a validation subset of data. The Nash-Sutcliffe efficiency for model predictions made on the validation set was 0.57. The predictor variables used in our forecast represent the major recharge and discharge components of the ESPA water budget, including variables that reflect overall water supply and important aspects of water administration and management. Coefficients of variation on the regression coefficients for streamflow and irrigation diversions were all much less than 0.5, indicating that these variables are strong predictors. The model with the highest AIC weight included streamflow, two irrigation diversion variables, and storage.
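A stripped-down analogue of the forecast model can be sketched as ordinary least squares with last year's discharge plus exogenous predictors, scored by Nash-Sutcliffe efficiency on held-out years. The predictor dynamics and coefficients below are synthetic; a real ARIMAX fit would also model the error structure and select terms by AIC as the study describes.

```python
import numpy as np

rng = np.random.default_rng(4)
years = 40

# Hypothetical exogenous predictors (scaled units)
streamflow = rng.normal(1.0, 0.2, size=years)
diversions = rng.normal(1.0, 0.15, size=years)

# Synthetic discharge: AR(1) persistence plus exogenous forcing
discharge = np.empty(years)
discharge[0] = 5.0
for t in range(1, years):
    discharge[t] = (0.5 * discharge[t - 1] + 2.0 * streamflow[t]
                    - 1.0 * diversions[t] + rng.normal(0.0, 0.1))

# Regression with a lagged-discharge term and exogenous inputs
X = np.column_stack([np.ones(years - 1), discharge[:-1],
                     streamflow[1:], diversions[1:]])
y = discharge[1:]
train, valid = slice(0, 30), slice(30, None)
coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
pred = X[valid] @ coef

# Nash-Sutcliffe efficiency on the validation years
obs = y[valid]
nse = 1.0 - ((obs - pred) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
```

An NSE above zero beats the climatological mean as a predictor; the study reports 0.57 on its validation subset, and the lagged-discharge coefficient plays the role of the model's memory of last year's aquifer state.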
NASA Astrophysics Data System (ADS)
Albers, D. J.; Hripcsak, George
2010-02-01
Statistical physics and information theory is applied to the clinical chemistry measurements present in a patient database containing 2.5 million patients' data over a 20-year period. Despite the seemingly naive approach of aggregating all patients over all times (with respect to particular clinical chemistry measurements), both a diurnal signal in the decay of the time-delayed mutual information and the presence of two sub-populations with differing health are detected. This provides a proof in principle that the highly fragmented data in electronic health records has potential for being useful in defining disease and human phenotypes.
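The decay of time-delayed mutual information can be estimated with a simple 2D-histogram estimator. The AR(1) series below is an invented stand-in for an aggregated clinical chemistry time series; it demonstrates the decay with delay, though not the diurnal modulation the study detects in real records.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information (nats) of two samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def delayed_mi(x, lag, bins=16):
    """Time-delayed mutual information of a series with itself."""
    return mutual_information(x[:-lag], x[lag:], bins)

# AR(1) series standing in for an aggregated lab-value time series
rng = np.random.default_rng(6)
n = 20000
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = 0.9 * x[i - 1] + rng.normal()

mi = {lag: delayed_mi(x, lag) for lag in (1, 10, 100)}
```

Note that the histogram estimator has a small positive bias at independence, so the long-lag value settles near a floor rather than exactly zero; a diurnal physiology signal would appear as bumps in this decay at 24-hour multiples.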
A statistical approach to optimization of alumina etching in a high density plasma
Li Xiao; Gupta, Subhadra; Highsmith, Alton; Paranjpe, Ajit; Rook, Katrina
2008-08-01
Inductively coupled plasma (ICP) reactive ion etching of Al₂O₃ with fluorine-based gas chemistry in a high density plasma reactor was carried out in an initial investigation aimed at data storage applications. A statistical design of experiments was implemented to optimize etch performance with respect to process variables such as ICP power, platen power, direct current (dc) bias, and pressure. Both soft photoresist masks and hard metal masks were investigated in terms of etch selectivity and surface properties. The reverse power dependence of dc bias on the ratio of ICP to platen power was elucidated. Etch mechanisms in terms of physical and ion-enhanced chemical etching were discussed. The F-based chemistry greatly enhances the etch rate of alumina compared to purely physical processes such as ion milling. Etch rates as high as 150 nm/min were achieved using this process. A practical process window was developed for high etch rates, with reasonable selectivity to hard masks, with the desired profile, and with low substrate bias for minimal damage.
Zumberge, J.E.
1987-06-01
The distributions of eight tricyclic and eight pentacyclic terpanes were determined for 216 crude oils located worldwide, with subsequent simultaneous RQ-mode factor analysis and stepwise discriminant analysis for the purpose of predicting source rock features or depositional environments. Five categories of source rock beds are evident: nearshore marine; deeper-water marine; lacustrine; phosphatic-rich source beds; and Ordovician age source rocks. The first two factors of the RQ-mode factor analysis describe 45 percent of the variation in the data set; the tricyclic terpanes appear to be twice as significant as the pentacyclic terpanes in determining the variation among samples. Lacustrine oils are characterized by greater relative abundances of the C₂₁ diterpane and gammacerane; nearshore marine sources by the C₁₉ and C₂₀ diterpanes and oleanane; deeper-water marine facies by the C₂₄ and C₂₅ tricyclic and C₃₁ plus C₃₂ extended hopanes; and Ordovician age oils by the C₂₇ and C₂₉ pentacyclic terpanes. Although thermal maturity trends can be observed in factor space, the trends do not necessarily obscure the source rock interpretations. Also, since bacterial degradation of crude oils rarely affects tricyclic terpanes, biodegraded oils can be used in predicting source rock features. The precision to which source rock depositional environments are determined might be increased with the addition of other biomarker and stable isotope data using multivariate statistical techniques.
NASA Astrophysics Data System (ADS)
Sherwood, S. C.; Fuchs, D.; Bony, S.; Jean-Louis, D.
2014-12-01
Earth's climate sensitivity has been the subject of heated debate for decades, and recently attracted renewed interest after the latest IPCC assessment report suggested a downward adjustment of the most likely range of climate sensitivities. Here, we present an observation-based study of the time period 1964 to 2010, which is unique in that it does not rely on global climate models (GCMs) in any way. The study uses surface observations of temperature and incoming solar radiation from approximately 1300 surface sites, along with observations of the equivalent CO2 concentration (CO2,eq) in the atmosphere, to produce a new best estimate for the transient climate sensitivity of 1.9 K (95% confidence interval 1.2-2.7 K). This is higher than other recent observation-based estimates, and is better aligned with the estimate of 1.8 K and range (1.1-2.5 K) derived from the latest generation of GCMs. The new estimate is produced by incorporating the observations in an energy balance framework, and by applying statistical methods that are standard in the field of econometrics but less common in climate studies. The study further suggests that about a third of the continental warming due to increasing CO2,eq was masked by aerosol cooling during the time period studied.
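The energy-balance idea behind such an estimate can be illustrated with a toy calculation: regressing a temperature anomaly on log2 of the CO2,eq concentration yields a sensitivity-per-doubling slope. All numbers below (growth rate, noise level, site-free global series) are made up for illustration; the paper's actual econometric treatment is considerably more elaborate.

```python
import numpy as np

# Hypothetical sketch: ordinary least squares of a synthetic temperature
# anomaly on log2(CO2,eq / C0) recovers a transient-sensitivity-like slope.
rng = np.random.default_rng(0)
years = np.arange(1964, 2011)
co2eq = 320.0 * 1.005 ** (years - 1964)   # assumed ~0.5%/yr growth (made up)
true_tcr = 1.9                            # K per doubling (the paper's estimate)
temp = true_tcr * np.log2(co2eq / co2eq[0]) + rng.normal(0, 0.1, years.size)

# Design matrix: intercept plus log2 forcing term.
X = np.column_stack([np.ones(years.size), np.log2(co2eq / co2eq[0])])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
tcr_hat = beta[1]                         # estimated K per doubling of CO2,eq
print(round(tcr_hat, 1))
```

With only ~0.33 doublings of forcing in the sample, the slope is estimated from a short lever arm, which is one reason observation-based estimates carry wide confidence intervals.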
Register Control of Roll-to-Roll Printing System Based on Statistical Approach
NASA Astrophysics Data System (ADS)
Kim, Chung Hwan; You, Ha-Il; Jo, Jeongdai
2013-05-01
One of the most important requirements when using roll-to-roll printing equipment for multilayer printing is register control. Because multilayer printing requires a printing accuracy of several microns to several tens of microns, depending on the devices and their sizes, precise register control is required. In general, the register errors vary with time, even for one revolution of the plate cylinder. Therefore, more information about the register errors in one revolution of the plate cylinder is required for more precise register control, which is achieved by using multiple register marks in a single revolution of the plate cylinder. By using a larger number of register marks, we can define the value of the register error as a statistical value rather than a single one. The register errors measured from an actual roll-to-roll printing system consist of a linearly varying term, a static offset term, and small fluctuations. The register errors resulting from the linearly varying term and the offset term are compensated for by the velocity and phase control of the plate cylinders, based on the calculated slope and offset of the register errors, which are obtained by the curve-fitting of the data set of register errors. We show that even with the slope and offset compensation of the register errors, a register control performance of within 20 µm can be achieved.
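The slope/offset compensation described above amounts to a least-squares line fit over the register errors measured at multiple marks in one revolution. The sketch below uses made-up mark positions and error values; the fitted slope would drive velocity correction and the intercept phase correction of the plate cylinder.

```python
import numpy as np

# Hypothetical register errors (µm) at 16 marks around one revolution:
# a linear trend plus offset plus small fluctuations, as described above.
theta = np.linspace(0.0, 2 * np.pi, 16, endpoint=False)   # mark angles (rad)
rng = np.random.default_rng(1)
errors = 5.0 + 1.2 * theta + rng.normal(0, 0.3, theta.size)

# Least-squares line fit: slope -> velocity control, offset -> phase control.
slope, offset = np.polyfit(theta, errors, 1)
residual = errors - (slope * theta + offset)   # remaining small fluctuations
print(round(slope, 1), round(offset, 1))
```

Only the residual fluctuations remain uncompensated after slope and offset correction, consistent with the sub-20 µm performance reported.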
A statistical approach to estimate the Lyapunov spectrum in disc brake squeal
NASA Astrophysics Data System (ADS)
Oberst, S.; Lai, J. C. S.
2015-01-01
The estimation of squeal propensity of a brake system from the prediction of unstable vibration modes using the linear complex eigenvalue analysis (CEA) in the frequency domain has its fair share of successes and failures. While the CEA is almost standard practice for the automotive industry, time domain methods and the estimation of Lyapunov spectra have not received much attention in brake squeal analyses. One reason is the challenge in estimating the true Lyapunov exponents and their discrimination against spurious ones in experimental data. A novel method based on the application of the Eckmann-Ruelle matrices is proposed here to estimate Lyapunov exponents by using noise in a statistical procedure. It is validated with respect to parameter variations and dimension estimates. By counting the number of non-overlapping confidence intervals for Lyapunov exponent distributions obtained by moving a window of increasing size over bootstrapped same-length estimates of an observation function, a dispersion measure's width is calculated and fed into a Bayesian beta-binomial model. Results obtained using this method for benchmark models of white and pink noise as well as the classical Hénon map indicate that true Lyapunov exponents can be isolated from spurious ones with high confidence. The method is then applied to accelerometer and microphone data obtained from brake squeal tests. Estimated Lyapunov exponents indicate that the pad's out-of-plane vibration behaves quasi-periodically on the brink of chaos while the microphone's squeal signal remains periodic.
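For the Hénon map benchmark mentioned above, the largest Lyapunov exponent can be estimated with the classical Benettin-style tangent-space iteration (literature value is about 0.42). This textbook method is much simpler than the Eckmann-Ruelle bootstrap procedure the paper proposes and is shown only to make the benchmark concrete.

```python
import numpy as np

# Largest Lyapunov exponent of the Henon map x' = 1 - a*x^2 + y, y' = b*x,
# via repeated Jacobian application to a renormalized tangent vector.
a, b = 1.4, 0.3
x, y = 0.1, 0.1
for _ in range(200):                      # discard the transient
    x, y = 1 - a * x * x + y, b * x

v = np.array([1.0, 0.0])                  # tangent vector
log_sum, n = 0.0, 20000
for _ in range(n):
    J = np.array([[-2 * a * x, 1.0],      # Jacobian of the map at (x, y)
                  [b, 0.0]])
    v = J @ v
    norm = np.linalg.norm(v)
    log_sum += np.log(norm)
    v /= norm                             # renormalize to avoid overflow
    x, y = 1 - a * x * x + y, b * x

lam = log_sum / n
print(round(lam, 2))                      # close to the literature value ~0.42
```

Distinguishing such a true exponent from spurious ones in noisy experimental data, without access to the Jacobian, is precisely the harder problem the statistical procedure addresses.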
Johnson, Eric D; Tubau, Elisabet
2016-09-27
Presenting natural frequencies facilitates Bayesian inferences relative to using percentages. Nevertheless, many people, including highly educated and skilled reasoners, still fail to provide Bayesian responses to these computationally simple problems. We show that the complexity of relational reasoning (e.g., the structural mapping between the presented and requested relations) can help explain the remaining difficulties. With a non-Bayesian inference that required identical arithmetic but afforded a more direct structural mapping, performance was universally high. Furthermore, reducing the relational demands of the task through questions that directed reasoners to use the presented statistics, as compared with questions that prompted the representation of a second, similar sample, also significantly improved reasoning. Distinct error patterns were also observed between these presented- and similar-sample scenarios, which suggested differences in relational-reasoning strategies. On the other hand, while higher numeracy was associated with better Bayesian reasoning, higher-numerate reasoners were not immune to the relational complexity of the task. Together, these findings validate the relational-reasoning view of Bayesian problem solving and highlight the importance of considering not only the presented task structure, but also the complexity of the structural alignment between the presented and requested relations.
NASA Astrophysics Data System (ADS)
Shin, W.; Ryu, J.; Lee, K.; Park, Y.; Chung, G.
2008-12-01
Seasonal and spatial variation in water quality and contaminant sources were investigated in six major rivers in South Korea that vary widely in drainage area and length. The contents of dissolved loads in the rivers varied seasonally, and some dissolved ions such as Cl- and NO3- showed large spatial differences in all of the rivers. The water type changed from Ca-HCO3 in the upper reaches to Na-HCO3-SO4 in the lower reaches, probably because of anthropogenic contamination. Compared with the Sumjin and Mankyung rivers, which flow mainly through forested areas with limited agricultural activity, the other four rivers, which flow through agricultural and urban areas, registered much higher Cl- and NO3- concentrations. Statistical analyses showed that this seasonal and spatial variation occurs in all of the rivers and that Cl- and NO3- originate from different sources. The nitrogen and oxygen isotopes of dissolved nitrate indicated that the rivers are significantly affected by manure, sewage, or both.
A statistical-based approach for acoustic tomography of the atmosphere.
Kolouri, Soheil; Azimi-Sadjadi, Mahmood R; Ziemann, Astrid
2014-01-01
Acoustic travel-time tomography of the atmosphere is a nonlinear inverse problem which attempts to reconstruct temperature and wind velocity fields in the atmospheric surface layer using the dependence of sound speed on temperature and wind velocity along the propagation path. This paper presents a statistical acoustic travel-time tomography algorithm based on a dual state-parameter unscented Kalman filter (UKF), which is capable of reconstructing and tracking, in time, the temperature and wind velocity fields (state variables) as well as the dynamic model parameters within a specified investigation area. An adaptive 3-D spatial-temporal autoregressive model is used to capture the state evolution in the UKF. The observations used in the dual state-parameter UKF process consist of the acoustic times of arrival measured for every pair of transmitter/receiver nodes deployed in the investigation area. The proposed method is then applied to the data set collected at the Meteorological Observatory Lindenberg, Germany, as part of the STINHO experiment, and the reconstruction results are presented.
Haaland, Ben; Min, Wanli; Qian, Peter Z. G.; Amemiya, Yasuo
2011-01-01
Temperature control for a large data center is both important and expensive. On the one hand, many of the components produce a great deal of heat, and on the other hand, many of the components require temperatures below a fairly low threshold for reliable operation. A statistical framework is proposed within which the behavior of a large cooling system can be modeled and forecast under both steady state and perturbations. This framework is based upon an extension of multivariate Gaussian autoregressive hidden Markov models (HMMs). The estimated parameters of the fitted model provide useful summaries of the overall behavior of and relationships within the cooling system. Predictions under system perturbations are useful for assessing potential changes and improvements to be made to the system. Many data centers have far more cooling capacity than necessary under sensible circumstances, thus resulting in energy inefficiencies. Using this model, predictions for system behavior after a particular component of the cooling system is shut down or reduced in cooling power can be generated. Steady-state predictions are also useful for facility monitors. System traces outside control boundaries flag a change in behavior to examine. The proposed model is fit to data from a group of air conditioners within an enterprise data center from the IT industry. The fitted model is examined, and a particular unit is found to be underutilized. Predictions generated for the system under the removal of that unit appear very reasonable. Steady-state system behavior also is predicted well. PMID:22076026
Spatio-statistical analysis of temperature fluctuation using Mann-Kendall and Sen's slope approach
NASA Astrophysics Data System (ADS)
Atta-ur-Rahman; Dawood, Muhammad
2017-02-01
This article deals with the spatio-statistical analysis of temperature trends using the Mann-Kendall trend model (MKTM) and Sen's slope estimator (SSE) in the eastern Hindu Kush, north Pakistan. Climate change has a strong relationship with the trend in temperature and the resultant changes in rainfall pattern and river discharge. In the present study, temperature is selected as the meteorological parameter for trend analysis and slope magnitude. To achieve the objectives of the study, temperature data were collected from the Pakistan Meteorological Department for all seven meteorological stations that fall in the eastern Hindu Kush region. The temperature data were analysed using the MKTM, whereas the SSE method was applied to determine the trend's slope magnitude and the type of fluctuation. The analysis reveals that a positive (increasing) trend in mean maximum temperature has been detected for the Chitral, Dir and Saidu Sharif met stations, whereas a negative (decreasing) trend in mean minimum temperature has been recorded for the Saidu Sharif and Timergara met stations. The analysis further reveals that the observed variation in temperature trend and slope magnitude is attributed to climate change in the region.
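The two tools named above are simple to state: the Mann-Kendall S statistic sums the signs of all ordered pairwise differences (S > 0 suggests an increasing trend), and Sen's slope is the median of all pairwise slopes. A minimal sketch, using made-up annual temperature values rather than the Pakistan Meteorological Department records:

```python
import itertools
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S: sum of signs over all ordered pairs (i < j)."""
    return int(sum(np.sign(x[j] - x[i])
                   for i, j in itertools.combinations(range(len(x)), 2)))

def sens_slope(x):
    """Sen's slope: median of pairwise slopes, a robust trend magnitude."""
    slopes = [(x[j] - x[i]) / (j - i)
              for i, j in itertools.combinations(range(len(x)), 2)]
    return float(np.median(slopes))

# Hypothetical mean annual temperatures (degrees C) for one station.
temps = [12.1, 12.3, 12.2, 12.6, 12.5, 12.9, 13.0, 12.8, 13.2, 13.4]
print(mann_kendall_s(temps), round(sens_slope(temps), 2))
```

In practice S is converted to a standardized Z score (with a tie correction) to test significance; the sketch shows only the statistic and the slope estimate.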
A statistical approach to determining criticality of residual host cell DNA.
Yang, Harry; Wei, Ziping; Schenerman, Mark
2015-01-01
We propose a method for determining the criticality of residual host cell DNA, which is characterized through two attributes, namely the size and amount of residual DNA in a biopharmaceutical product. By applying a mechanistic modeling approach to the problem, we establish the link between residual DNA and product safety, measured in terms of immunogenicity, oncogenicity, and infectivity. Such a link makes it possible to establish acceptable ranges of residual DNA size and amount. Application of the method is illustrated through two real-life examples: a vaccine manufactured in a Madin-Darby canine kidney (MDCK) cell line and a monoclonal antibody produced in a Chinese hamster ovary (CHO) host cell line.
Modified approach for extraperitoneal laparoscopic staging for locally advanced cervical cancer.
Gil-Moreno, A; Maffuz, A; Díaz-Feijoo, B; Puig, O; Martínez-Palones, J M; Pérez, A; García, A; Xercavins, J
2007-12-01
To describe a modified approach to the technique of laparoscopic extraperitoneal aortic and common iliac lymph node dissection for staging locally advanced cervical cancer. Retrospective, nonrandomized clinical study (Canadian Task Force classification II-2) set in an acute-care teaching hospital. Thirty-six patients with locally advanced cervical cancer underwent laparoscopic surgical staging via the extraperitoneal approach with either the conventional or the modified technique from August 2001 through September 2004. Clinical outcomes were compared between 23 patients operated on with the conventional technique, in which the index finger is used for first trocar entrance, and 12 patients operated on with the modified technique, which uses direct trocar entrance. One patient was excluded due to peritoneal carcinomatosis. Technique, baseline characteristics, histopathologic variables, and surgical outcome were measured. On comparative analysis, there were no significant differences in baseline patient characteristics between the conventional and modified techniques. With the proposed modified technique, we obtained reduced operating time and blood loss. The modified technique offers several advantages: it is an easier approach, because the parietal pelvic peritoneum is elastic, which helps avoid its disruption at the time of trocar insertion; the incision is shorter; no CO2 leak occurred through the trocar orifice; and wound suture is fast and simple.
Harrison, Jay M; Breeze, Matthew L; Harrigan, George G
2011-08-01
Statistical comparisons of compositional data generated on genetically modified (GM) crops and their near-isogenic conventional (non-GM) counterparts typically rely on classical significance testing. This manuscript presents an introduction to Bayesian methods for compositional analysis along with recommendations for model validation. The approach is illustrated using protein and fat data from two herbicide tolerant GM soybeans (MON87708 and MON87708×MON89788) and a conventional comparator grown in the US in 2008 and 2009. Guidelines recommended by the US Food and Drug Administration (FDA) in conducting Bayesian analyses of clinical studies on medical devices were followed. This study is the first Bayesian approach to GM and non-GM compositional comparisons. The evaluation presented here supports a conclusion that a Bayesian approach to analyzing compositional data can provide meaningful and interpretable results. We further describe the importance of method validation and approaches to model checking if Bayesian approaches to compositional data analysis are to be considered viable by scientists involved in GM research and regulation.
Improving polygenic risk prediction from summary statistics by an empirical Bayes approach
So, Hon-Cheong; Sham, Pak C.
2017-01-01
Polygenic risk scores (PRS) from genome-wide association studies (GWAS) are increasingly used to predict disease risks. However, some included variants could be false positives, and the raw estimates of their effect sizes may be subject to selection bias. In addition, the standard PRS approach requires testing over a range of p-value thresholds, which are often chosen arbitrarily, and the prediction error estimated from the optimized threshold may also be subject to an optimistic bias. To improve genomic risk prediction, we proposed new empirical Bayes approaches to recover the underlying effect sizes and used them as weights to construct PRS. We applied the new PRS to twelve cardio-metabolic traits in the Northern Finland Birth Cohort and demonstrated improvements in predictive power (in R2) when compared to standard PRS at the best p-value threshold. Importantly, for eleven of the twelve traits studied, the predictive performance from the entire set of genome-wide markers outperformed the best R2 from standard PRS at optimal p-value thresholds. Our proposed methodology essentially enables an automatic PRS weighting scheme without the need to choose tuning parameters. The new method also performed satisfactorily in simulations. It is computationally simple and does not require assumptions on the effect size distributions. PMID:28145530
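The core idea of empirical Bayes weighting can be sketched with a simple shrinkage estimator: estimate the signal variance from the observed effect estimates themselves, then shrink each raw estimate toward zero accordingly. This toy example is in the spirit of, but not identical to, the authors' estimator; the sparse effects, standard errors, and seed are all simulated assumptions.

```python
import numpy as np

# Simulate sparse "true" SNP effects and noisy GWAS estimates of them.
rng = np.random.default_rng(42)
m = 5000
true_beta = np.where(rng.random(m) < 0.05, rng.normal(0, 0.1, m), 0.0)
se = np.full(m, 0.05)                       # assumed common standard error
beta_hat = true_beta + rng.normal(0, se)

# Empirical Bayes: method-of-moments estimate of signal variance tau^2,
# then shrink each estimate by the posterior-mean weight tau^2/(tau^2+se^2).
tau2_hat = max(np.var(beta_hat) - se[0] ** 2, 1e-12)
shrink = tau2_hat / (tau2_hat + se ** 2)
beta_eb = shrink * beta_hat                 # shrunken effect sizes (PRS weights)

mse_raw = np.mean((beta_hat - true_beta) ** 2)
mse_eb = np.mean((beta_eb - true_beta) ** 2)
print(mse_eb < mse_raw)                     # → True: shrinkage reduces error
```

Because the shrinkage factor is estimated from the data, no p-value threshold or other tuning parameter needs to be chosen, which mirrors the automatic weighting property claimed above.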
A Statistically Dependent Approach for the Monthly Rainfall Forecast from One Point Observations
NASA Astrophysics Data System (ADS)
Pucheta, J.; Patiño, D.; Kuchen, B.
In this work an adaptive linear filter model in an autoregressive moving average (ARMA) topology for forecasting time series is presented. The time series are composed of observations of the accumulated rainfall each month over several years. The learning rule used to adjust the filter coefficients is based mainly on the gradient-descent method. Depending on the long- and short-term stochastic dependence of the time series, we propose an on-line heuristic law to set the training process and to modify the filter topology. The input patterns for the predictor filter are the values of the time series after applying a time-delay operator; hence, the filter's output will tend to approximate the current value available from the data series. The approach is tested on a time series obtained from measurements of the monthly accumulated rainfall at La Perla, Cordoba, Argentina. The performance of the presented approach is shown by forecasting 18 months ahead of a hypothetical present time for four time series of length 102.
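The predictor described above is essentially an adaptive FIR filter on time-delayed inputs whose weights are updated by a gradient-descent (LMS-type) rule. The sketch below uses a normalized LMS step for stability and a synthetic seasonal rainfall series; the paper's on-line heuristic for adapting the topology is not reproduced.

```python
import numpy as np

# Synthetic monthly rainfall: seasonal cycle plus noise (made-up numbers,
# standing in for the La Perla records used in the paper).
rng = np.random.default_rng(3)
series = 50 + 20 * np.sin(2 * np.pi * np.arange(120) / 12) + rng.normal(0, 2, 120)

p = 12                  # filter order: one year of monthly delays
w = np.zeros(p)         # filter coefficients, adapted on-line
errors = []
for t in range(p, len(series)):
    x = series[t - p:t][::-1]            # delayed inputs, most recent first
    e = series[t] - w @ x                # one-step prediction error
    errors.append(abs(e))
    w += 0.5 * e * x / (x @ x + 1e-8)    # normalized LMS (gradient-descent) step

# The filter adapts: late prediction errors are much smaller than early ones.
print(np.mean(errors[:12]) > np.mean(errors[-12:]))
```

A sinusoid with a 12-month period satisfies s(t) = s(t-12) exactly, so a 12-tap filter can in principle predict the deterministic part perfectly; the residual error floor is set by the noise.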
Stable Equilibrium Based on Lévy Statistics: A Linear Boltzmann Equation Approach
NASA Astrophysics Data System (ADS)
Barkai, Eli
2004-06-01
To obtain further insight into possible power-law generalizations of Boltzmann equilibrium concepts, we consider stochastic collision models. The models are a generalization of the Rayleigh collision model, for a heavy one-dimensional particle of mass M interacting with ideal gas particles of mass m << M. Similar to previous approaches, we assume elastic, uncorrelated, and impulsive collisions. We let the bath particle velocity distribution function be of general form; namely, we do not postulate a specific form of power-law equilibrium. We show, under certain conditions, that the velocity distribution function of the heavy particle is Lévy stable, the Maxwellian distribution being a special case. We demonstrate our results with numerical examples. The relation of the power-law equilibrium obtained here to thermodynamics is discussed. In particular, we compare two models: a thermodynamic approach and an energy-scaling approach. These models yield insight into questions such as the meaning of temperature for power-law equilibrium, and the universality of the equilibrium (i.e., whether the width of the generalized Maxwellian distribution functions obtained here is independent of the coupling constant to the bath).
A statistical approach to determining energetic outer radiation belt electron precipitation fluxes
NASA Astrophysics Data System (ADS)
Simon Wedlund, Mea; Clilverd, Mark A.; Rodger, Craig J.; Cresswell-Moorcock, Kathy; Cobbett, Neil; Breen, Paul; Danskin, Donald; Spanswick, Emma; Rodriguez, Juan V.
2014-05-01
Subionospheric radio wave data from an Antarctic-Arctic Radiation-Belt (Dynamic) Deposition VLF Atmospheric Research Konsortia (AARDDVARK) receiver located in Churchill, Canada, are analyzed to determine the characteristics of electron precipitation into the atmosphere over the range 3 < L < 7. The study advances previous work by combining signals from two U.S. transmitters from 20 July to 20 August 2010, allowing error estimates of derived electron precipitation fluxes to be calculated, including the application of time-varying electron energy spectral gradients. Electron precipitation observations from the NOAA POES satellites and a ground-based riometer provide intercomparison and context for the AARDDVARK measurements. AARDDVARK radio wave propagation data showed responses suggesting energetic electron precipitation from the outer radiation belt starting 27 July 2010 and lasting ~20 days. The uncertainty in >30 keV precipitation flux determined by the AARDDVARK technique was found to be ±10%. Peak >30 keV AARDDVARK-derived precipitation fluxes during the main and recovery phases of the largest geomagnetic storm, which started on 4 August 2010, were >10^5 el cm^-2 s^-1 sr^-1. The largest fluxes observed by AARDDVARK occurred on the dayside and were delayed by several days from the start of the geomagnetic disturbance. During the main phase of the disturbances, nightside fluxes were dominant. Significant differences in flux estimates between POES, AARDDVARK, and the riometer were found after the main phase of the largest disturbance, with evidence provided to suggest that >700 keV electron precipitation was occurring. Currently the presence of such relativistic electron precipitation introduces some uncertainty into the analysis of AARDDVARK data, given the assumption of a power-law electron precipitation spectrum.
Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages
Jadoul, Yannick; Ravignani, Andrea; Thompson, Bill; Filippi, Piera; de Boer, Bart
2016-01-01
Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find across languages (using clearly defined acoustic, rather than orthographic, measures), temporal predictability in the speech signal which could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure—regularities arising in an ordered series of syllable timings—testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively-perceived temporal regularities, and the absence of universally-accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint. PMID:27994544
A statistical approach for isolating fossil fuel emissions in atmospheric inverse problems
Yadav, Vineet; Michalak, Anna M.; Ray, Jaideep; Shiga, Yoichi P.
2016-10-27
Independent verification and quantification of fossil fuel (FF) emissions constitutes a considerable scientific challenge. By coupling atmospheric observations of CO2 with models of atmospheric transport, inverse models offer the possibility of overcoming this challenge. However, disaggregating the biospheric and FF flux components of terrestrial fluxes from CO2 concentration measurements has proven to be difficult, due to observational and modeling limitations. In this study, we propose a statistical inverse modeling scheme for disaggregating winter time fluxes on the basis of their unique error covariances and covariates, where these covariances and covariates are representative of the underlying processes affecting FF and biospheric fluxes. The application of the method is demonstrated with one synthetic and two real data prototypical inversions by using in situ CO2 measurements over North America. Inversions are performed only for the month of January, as the predominance of the biospheric CO2 signal relative to the FF CO2 signal and observational limitations preclude disaggregation of the fluxes in other months. The quality of disaggregation is assessed primarily through examination of the a posteriori covariance between disaggregated FF and biospheric fluxes at regional scales. Findings indicate that the proposed method is able to robustly disaggregate fluxes regionally at monthly temporal resolution, with a posteriori cross covariance lower than 0.15 µmol m^-2 s^-1 between FF and biospheric fluxes. Error covariance models and covariates based on temporally varying FF inventory data provide a more robust disaggregation than static proxies (e.g., nightlight intensity and population density). However, the synthetic data case study shows that disaggregation is possible even in the absence of detailed temporally varying FF inventory data.
Identification of chilling and heat requirements of cherry trees—a statistical approach
NASA Astrophysics Data System (ADS)
Luedeling, Eike; Kunz, Achim; Blanke, Michael M.
2013-09-01
Most trees from temperate climates require the accumulation of winter chill and subsequent heat during their dormant phase to resume growth and initiate flowering in the following spring. Global warming could reduce chill and hence hamper the cultivation of high-chill species such as cherries. Yet determining chilling and heat requirements requires large-scale controlled-forcing experiments, and estimates are thus often unavailable. Where long-term phenology datasets exist, partial least squares (PLS) regression can be used as an alternative, to determine climatic requirements statistically. Bloom dates of cherry cv. 'Schneiders späte Knorpelkirsche' trees in Klein-Altendorf, Germany, from 24 growing seasons were correlated with 11-day running means of daily mean temperature. Based on the output of the PLS regression, five candidate chilling periods ranging in length from 17 to 102 days, and one forcing phase of 66 days, were delineated. Among three common chill models used to quantify chill, the Dynamic Model showed the lowest variation in chill, indicating that it may be more accurate than the Utah and Chilling Hours Models. Based on the longest candidate chilling phase with the earliest starting date, cv. 'Schneiders späte Knorpelkirsche' cherries at Bonn exhibited a chilling requirement of 68.6 ± 5.7 chill portions (or 1,375 ± 178 chilling hours or 1,410 ± 238 Utah chill units) and a heat requirement of 3,473 ± 1,236 growing degree hours. Closer investigation of the distinct chilling phases detected by PLS regression could contribute to our understanding of dormancy processes and thus help fruit and nut growers identify suitable tree cultivars for a future in which static climatic conditions can no longer be assumed. All procedures used in this study were bundled in an R package ('chillR') and are provided as Supplementary materials. The procedure was also applied to leaf emergence dates of walnut (cv. 'Payne') at Davis, California.
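Of the three chill models compared above, the Chilling Hours Model is the simplest: it is commonly defined as one chill hour for every hour with temperature between 0 and 7.2 °C. A minimal sketch with made-up hourly temperatures (the Dynamic and Utah models used in the paper are considerably more involved):

```python
# Chilling Hours Model: count hours with 0 C < T <= 7.2 C.
# The exact boundary convention varies slightly between sources.
def chilling_hours(hourly_temps_c):
    return sum(1 for t in hourly_temps_c if 0.0 < t <= 7.2)

# Hypothetical hourly temperatures (degrees C) for one cold day.
day = [-2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 9,
       8, 7, 6, 5, 4, 3, 2, 1, 0, -1]
print(chilling_hours(day))   # → 14
```

Summing such daily counts over the delineated chilling period gives the season's chilling-hours total, which is then compared against a cultivar's requirement.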
A statistical approach for isolating fossil fuel emissions in atmospheric inverse problems
NASA Astrophysics Data System (ADS)
Yadav, Vineet; Michalak, Anna M.; Ray, Jaideep; Shiga, Yoichi P.
2016-10-01
Independent verification and quantification of fossil fuel (FF) emissions constitutes a considerable scientific challenge. By coupling atmospheric observations of CO2 with models of atmospheric transport, inverse models offer the possibility of overcoming this challenge. However, disaggregating the biospheric and FF flux components of terrestrial fluxes from CO2 concentration measurements has proven difficult, due to observational and modeling limitations. In this study, we propose a statistical inverse modeling scheme for disaggregating wintertime fluxes on the basis of their unique error covariances and covariates, where these covariances and covariates are representative of the underlying processes affecting FF and biospheric fluxes. The application of the method is demonstrated with one synthetic-data and two real-data prototypical inversions using in situ CO2 measurements over North America. Inversions are performed only for the month of January, as the predominance of the biospheric CO2 signal relative to the FF CO2 signal and observational limitations preclude disaggregation of the fluxes in other months. The quality of disaggregation is assessed primarily through examination of the a posteriori covariance between disaggregated FF and biospheric fluxes at regional scales. Findings indicate that the proposed method is able to robustly disaggregate fluxes regionally at monthly temporal resolution, with a posteriori cross covariance lower than 0.15 µmol m-2 s-1 between FF and biospheric fluxes. Error covariance models and covariates based on temporally varying FF inventory data provide a more robust disaggregation than static proxies (e.g., nightlight intensity and population density). However, the synthetic data case study shows that disaggregation is possible even in the absence of detailed temporally varying FF inventory data.
Statistical approach to the analysis of olive long-term pollen season trends in southern Spain.
García-Mozo, H; Yaezel, L; Oteros, J; Galán, C
2014-03-01
Analysis of long-term airborne pollen counts makes it possible not only to chart pollen-season trends but also to track changing patterns in flowering phenology. Changes in higher-plant response over a long interval are considered among the most valuable bioindicators of climate change impact. Phenological-trend models can also provide information regarding crop production and pollen-allergen emission. The value of this information makes the choice of statistical analysis for time-series study essential. We analysed trends and variations in the olive flowering season over a 30-year period (1982-2011) in southern Europe (Córdoba, Spain), focussing on: annual Pollen Index (PI), Pollen Season Start (PSS), Peak Date (PD), Pollen Season End (PSE) and Pollen Season Duration (PSD). Apart from the traditional linear regression analysis, a Seasonal-Trend Decomposition procedure based on Loess (STL) and an ARIMA model were performed. Linear regression results indicated a trend toward delayed PSE and earlier PSS and PD, probably influenced by the rise in temperature. These changes are producing longer flowering periods in the study area. The STL technique provided a clearer picture of phenological behaviour: decomposing the pollination dynamics enabled the trend toward an alternate bearing cycle to be distinguished from the influence of other stochastic fluctuations. Results pointed to a rising trend in pollen production. With a view toward forecasting future phenological trends, ARIMA models were constructed to predict PSD, PSS and PI until 2016. Projections displayed a better goodness of fit than those derived from linear regression. Findings suggest that the olive reproductive cycle has changed considerably over the last 30 years due to climate change. Further conclusions are that STL improves the effectiveness of traditional linear regression in trend analysis, and that ARIMA models can provide reliable trend projections for future years taking into
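The linear-regression component of such a trend analysis can be sketched on synthetic data; the series length matches the study period, but the trend magnitude and noise level below are illustrative assumptions, not the published values:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1982, 2012)          # the 30-year study window
# Assumed Pollen Season Start (day of year): an advance of ~0.3 d/yr plus noise
pss = 120.0 - 0.3 * (years - years[0]) + rng.normal(0.0, 3.0, years.size)

# Ordinary least-squares linear trend, as in the traditional analysis
slope, intercept = np.polyfit(years, pss, 1)
print(f"PSS trend: {slope:+.2f} days/year")
```

The STL and ARIMA steps would then operate on such series, e.g. via statsmodels (`STL`, `ARIMA`), separating trend from seasonal and remainder components before projecting forward.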
Comparison of different statistical approaches to evaluate morphometric sperm subpopulations in men
Yániz, Jesús L; Vicente-Fiel, Sandra; Soler, Carles; Recreo, Pilar; Carretero, Teresa; Bono, Araceli; Berné, José M; Santolaria, Pilar
2016-01-01
This study was designed to characterize morphometric sperm subpopulations in normozoospermic men using different statistical methods and to examine their suitability for correctly classifying the different sperm nuclear morphologies present in human ejaculates. Ejaculates from 21 normozoospermic men were collected for the study. After semen collection and analysis, samples were prepared for morphometric determination. At least 200 spermatozoa per sample were assessed for sperm morphometry by computer-assisted sperm morphometry analysis (CASA-Morph) using fluorescence. Clustering and discriminant procedures were performed to identify sperm subpopulations from the morphometric data obtained. Clustering procedures resulted in the classification of spermatozoa into three morphometric subpopulations (large-round 30.4%, small-round 46.6%, and large-elongated 22.9%). In the second analysis, using discriminant methods, the classification was made independently of size and shape: on 400 canonical cells (100 × 4) from 10 men, three morphological categories were defined according to nuclear size (small <10.90 μm2, intermediate 10.91–13.07 μm2, and large >13.07 μm2) and four categories according to nuclear shape (oval, pyriform, round, and elongated). Thereafter, the resulting classification functions were used to categorize 4200 spermatozoa from 21 men. Differences in class distribution were observed among men with both the clustering and the discriminant procedures. It was concluded that the combination of CASA-Morph fluorescence-based technology with multivariate cluster or discriminant analyses provides new information on the description of different morphometric sperm subpopulations in normal individuals, and that important variations in the distribution of morphometric sperm subpopulations may exist between men, with possible functional implications. PMID:27624984
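As an illustration of the clustering step, here is a hand-rolled k-means on invented two-feature morphometric data; the feature values and subpopulation sizes are hypothetical, chosen only to echo the three reported classes:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical (nuclear area um^2, elongation) features for three subpopulations,
# loosely echoing the reported large-round / small-round / large-elongated classes
X = np.vstack([
    rng.normal([14.0, 1.1], 0.1, (60, 2)),   # large-round
    rng.normal([8.0, 1.1], 0.1, (90, 2)),    # small-round
    rng.normal([14.0, 3.0], 0.1, (45, 2)),   # large-elongated
])

def kmeans(X, k, iters=25):
    # Farthest-point initialisation keeps one seed per well-separated group
    centers = [X[0]]
    for _ in range(k - 1):
        d = ((X[:, None] - np.array(centers)) ** 2).sum(axis=-1).min(axis=1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((X[:, None] - centers) ** 2).sum(axis=-1).argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(X, 3)
print(sorted(np.bincount(labels), reverse=True))   # recovered subpopulation sizes
```

The discriminant step would instead fit classification functions on pre-labelled canonical cells (e.g., linear discriminant analysis) and apply them to new spermatozoa.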
NASA Astrophysics Data System (ADS)
McCray, Wilmon Wil L., Jr.
The research was prompted by a need to assess the process improvement, quality management, and analytical techniques taught to undergraduate and graduate students in U.S. systems engineering and computing science (e.g., software engineering, computer science, and information technology) degree programs that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, and process improvement methods, and with how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High-maturity process areas in the CMMI model imply the use of analytical, statistical, and quantitative management techniques, and of process performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost, and predict future outcomes. The study identifies and discusses in detail the gaps between the process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, the gaps that exist in the literature, and a comparative analysis of the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research also includes a Monte Carlo simulation optimization
Febvre, G.
1994-10-01
The problem of the lidar equation inversion lies in the fact that it requires a lidar calibration or else a reference value from the studied medium. This paper presents an approach to calibrating the lidar by calculating the constant Ak (the lidar constant A multiplied by the ratio k of the backscatter coefficient to the extinction coefficient). The approach is based on a statistical analysis of in situ measurements, which demonstrates that the extinction coefficient has a characteristic probability distribution in cirrus clouds. A property of this distribution, concerning the attenuation of the laser beam in the cloud, is used as a constraint to calculate the value of Ak. The validity of this method is discussed and the results are compared with two other inversion methods.
M-health medical video communication systems: an overview of design approaches and recent advances.
Panayides, A S; Pattichis, M S; Constantinides, A G; Pattichis, C S
2013-01-01
The emergence of the new, High Efficiency Video Coding (HEVC) standard, combined with wide deployment of 4G wireless networks, will provide significant support toward the adoption of mobile-health (m-health) medical video communication systems in standard clinical practice. For the first time since the emergence of m-health systems and services, medical video communication systems can be deployed that can rival the standards of in-hospital examinations. In this paper, we provide a thorough overview of today's advancements in the field, discuss existing approaches, and highlight the future trends and objectives.
Big data mining powers fungal research: recent advances in fission yeast systems biology approaches.
Wang, Zhe
2016-10-11
Biological research has entered the big data era, and systems biology approaches have therefore become powerful tools for obtaining a whole-landscape view of how cells divide, grow, and resist stress. The fission yeast Schizosaccharomyces pombe is a wonderful unicellular eukaryote model; studying its division and metabolism, in particular, can facilitate understanding of the molecular mechanisms of cancer and the discovery of anticancer agents. In this perspective, we discuss recent advances in fission yeast systems biology tools, focusing mainly on metabolomics profiling and metabolic modeling, the protein-protein interactome and genetic interaction network, DNA sequencing and its applications, and high-throughput phenotypic screening. We hope this review will be useful for interested fungal researchers as well as bioinformaticians.
He, Bin; Li, Wen-Cui; Yang, Chao; Wang, Si-Qiong; Lu, An-Hui
2016-01-26
We have developed an electrolysis approach that allows effective and uniform incorporation of sulfur inside the micropores of carbon nanosheets for advanced lithium-sulfur batteries. The sulfur-carbon hybrid can be prepared with a 70 wt % sulfur loading, in which no nonconductive sulfur agglomerations are formed. Because the incorporated sulfur is inherently electrically connected to the carbon matrix, the hybrid cathode shows excellent electrochemical performance, including a high reversible capacity, good rate capability, and good cycling stability, compared with one prepared using the popular melt-diffusion method.
Ilk Capar, M; Nar, A; Ferrarini, A; Frezza, E; Greco, C; Zakharov, A V; Vakulenko, A A
2013-03-21
The connection between the molecular structure of liquid crystals and their elastic properties, which control the director deformations relevant for electro-optic applications, remains a challenging objective for theories and computations. Here, we compare two methods that have been proposed for this purpose, both characterized by a detailed molecular-level description. One is an integrated molecular dynamics-statistical mechanical approach, where the bulk elastic constants of nematics are calculated from the direct correlation function (DCF) and the single-molecule orientational distribution function [D. A. McQuarrie, Statistical Mechanics (Harper & Row, New York, 1973)]. The latter is obtained from atomistic molecular dynamics trajectories, together with the radial distribution function, from which the DCF is then determined by solving the Ornstein-Zernike equation. The other approach is based on a molecular field theory, where the potential of mean torque experienced by a mesogen in the liquid crystal phase is parameterized according to its molecular surface. In this case, the calculation of elastic constants is combined with Monte Carlo sampling of single-molecule conformations. Using these different approaches, but the same description at the level of molecular geometry and torsional potentials, we have investigated the elastic properties of the nematic phase of two typical mesogens, 4'-n-pentyloxy-4-cyanobiphenyl and 4'-n-heptyloxy-4-cyanobiphenyl. Both methods yield K3 (bend) > K1 (splay) > K2 (twist), although there are some discrepancies in the average elastic constants and in their anisotropy. These are interpreted in terms of the different approximations and the different ways of accounting for the structural properties of molecules in the two approaches. In general, the results point to the role of the molecular shape, which is modulated by the conformational freedom and cannot be fully accounted for by a single descriptor such as the aspect ratio.
NASA Astrophysics Data System (ADS)
Makino, Hironori; Minami, Nariyuki
2014-07-01
The theory of the quantal level statistics of a classically integrable system, developed by Makino et al. in order to investigate the non-Poissonian behaviors of level-spacing distribution (LSD) and level-number variance (LNV) [H. Makino and S. Tasaki, Phys. Rev. E 67, 066205 (2003); H. Makino and S. Tasaki, Prog. Theor. Phys. Suppl. 150, 376 (2003); H. Makino, N. Minami, and S. Tasaki, Phys. Rev. E 79, 036201 (2009); H. Makino and S. Tasaki, Prog. Theor. Phys. 114, 929 (2005)], is successfully extended to the study of the E(K,L) function, which constitutes a fundamental measure to determine most statistical observables of quantal levels in addition to LSD and LNV. In the theory of Makino et al., the eigenenergy level is regarded as a superposition of infinitely many components whose formation is supported by the Berry-Robnik approach in the far semiclassical limit [M. Robnik, Nonlinear Phenom. Complex Syst. 1, 1 (1998)]. We derive the limiting E(K,L) function in the limit of infinitely many components and elucidate its properties when energy levels show deviations from the Poisson statistics.
Isar, Jasmine; Agarwal, Lata; Saran, Saurabh; Kaushik, Rekha; Saxena, Rajendra Kumar
2007-04-01
A statistical approach, response surface methodology (RSM), was used to study the production of succinic acid from Bacteroides fragilis. The most influential parameters for succinic acid production, identified through the one-at-a-time method, were glucose, tryptone, sodium carbonate, inoculum size and incubation period. These resulted in the production of 5.4 gL(-1) of succinic acid in 48 h from B. fragilis under anaerobic conditions. Based on these results, a statistical method falling under RSM, face-centered central composite design (FCCCD), was employed to further enhance succinic acid production and to monitor the interactive effects of these parameters, which resulted in a more than 2-fold increase in yield (12.5 gL(-1) in 24 h). The analysis of variance (ANOVA) showed the adequacy of the model, and the verification experiments confirmed its validity. On subsequent scale-up in a 10-L bioreactor using conditions optimized through RSM, 20.0 gL(-1) of succinic acid was obtained in 24 h. This clearly indicated that the model remained valid even at large scale. Thus, the statistical optimization strategy led to an approximately 4-fold increase in the yield of succinic acid. This is the first report on the use of FCCCD to improve succinic acid production from B. fragilis. The present study provides useful information about the regulation of succinic acid synthesis through manipulation of various physicochemical parameters.
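A face-centered central composite design in coded units can be sketched as follows; the quadratic response surface is hypothetical and stands in for the fermentation yield:

```python
import itertools
import numpy as np

# Face-centered central composite design (alpha = 1) for two coded factors
factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=2)))
axial = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.0], [0.0, 1.0]])
center = np.zeros((3, 2))                      # replicated centre points
D = np.vstack([factorial, axial, center])      # 11 runs

# Hypothetical noiseless response, maximal at (0.3, -0.2) in coded units
def response(x1, x2):
    return 12.5 - 2.0 * (x1 - 0.3) ** 2 - 1.5 * (x2 + 0.2) ** 2

y = response(D[:, 0], D[:, 1])

# Fit the full second-order model: b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
M = np.column_stack([np.ones(len(D)), D[:, 0], D[:, 1],
                     D[:, 0] ** 2, D[:, 1] ** 2, D[:, 0] * D[:, 1]])
b, *_ = np.linalg.lstsq(M, y, rcond=None)

# Stationary point of the fitted surface (the true surface has no interaction term)
x1_opt = -b[1] / (2 * b[3])
x2_opt = -b[2] / (2 * b[4])
print(round(float(x1_opt), 3), round(float(x2_opt), 3))
```

With a real fermentation, replicated centre points also supply a pure-error estimate for the ANOVA lack-of-fit test.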
Varekar, Vikas; Karmakar, Subhankar; Jha, Ramakar
2016-02-01
The design of surface water quality sampling locations is a crucial decision-making process for rationalization of a monitoring network. The quantity, quality, and types of available data (watershed characteristics and water quality data) may affect the selection of an appropriate design methodology. The modified Sanders approach and multivariate statistical techniques [particularly factor analysis (FA)/principal component analysis (PCA)] are well-accepted and widely used techniques for the design of sampling locations. However, their performance may vary significantly with the quantity, quality, and types of available data. In this paper, an attempt has been made to evaluate the performance of these techniques while accounting for the effect of seasonal variation, under a situation of limited water quality data but extensive watershed characteristics information, as continuous and consistent river water quality data are usually difficult to obtain, whereas watershed information can be made available through geospatial techniques. A case study of the Kali River, Western Uttar Pradesh, India, was selected for the analysis. Monitoring was carried out at 16 sampling locations. The discrete and diffuse pollution loads at the different sampling sites were estimated and accounted for using the modified Sanders approach, whereas the monitored physical and chemical water quality parameters were used as inputs for FA/PCA. The designed optimum numbers of sampling locations for the monsoon and non-monsoon seasons are eight and seven with the modified Sanders approach, and eleven and nine with FA/PCA, respectively. Little variation in the number and locations of the designed sampling sites was obtained with the two techniques, which shows the stability of the results. A geospatial analysis has also been carried out to check the significance of the designed sampling locations with respect to the river basin characteristics and land use of the study area. Both methods are equally efficient; however, modified Sanders
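The PCA step amounts to an SVD of the centred site-by-parameter matrix; the data below are simulated stand-ins (16 sites x 8 parameters driven by two latent pollution gradients), not the Kali River measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated water-quality table: 16 sampling sites x 8 parameters,
# driven by two latent pollution gradients plus measurement noise
gradients = rng.normal(size=(16, 2))
loadings = rng.normal(size=(2, 8))
X = gradients @ loadings + 0.1 * rng.normal(size=(16, 8))

Xc = X - X.mean(axis=0)               # centre each parameter
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)       # proportion of variance per component
n_keep = int(np.searchsorted(np.cumsum(explained), 0.80)) + 1
print(n_keep, np.round(explained[:3], 3))
```

With real parameters on different scales, each column would be standardised to unit variance (correlation-matrix PCA) rather than merely centred.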
A statistical physics approach to scale-free networks and their behaviors
NASA Astrophysics Data System (ADS)
Wu, Fang
This thesis studies five problems of network properties from a unified local-to-global viewpoint of statistical physics: (1) We propose an algorithm that allows the discovery of communities within graphs of arbitrary size, based on the Kirchhoff theory of electric networks. Its time complexity scales linearly with the network size. We additionally show how this algorithm allows for the swift discovery of the community surrounding a given node without having to extract all the communities out of a graph. (2) We present a dynamical theory of opinion formation that takes explicitly into account the structure of the social network in which individuals are embedded. We show that the weighted fraction of the population that holds a certain opinion is a martingale, and that the importance of a given node is proportional to its degree. We verify our predictions by simulations. (3) We show that, when the information transmissibility decays with distance, an epidemic spreading on a scale-free network has a finite threshold. We test our predictions by measuring the spread of messages in an organization and by numerical experiments. (4) Suppose users can switch between two behaviors when entering a queueing system: one that never restarts an initial request and one that restarts infinitely often. We show the existence of two thresholds. When the system load is below the lower threshold, users are always better off being impatient; when it is above the upper threshold, they are always better off being patient. Between the two thresholds there exists a homogeneous Nash equilibrium with non-trivial properties. We obtain exact solutions for the two thresholds. (5) We study the endogenous dynamics of reputations in a system consisting of firms with long horizons that provide services with varying levels of quality, and customers who assign to them reputations on the basis of the quality levels that they experience when interacting with them. We show that the dynamics can lead to either well-defined equilibria or
Po River plume and Northern Adriatic Dense Waters: a modeling and statistical approach.
NASA Astrophysics Data System (ADS)
Marcello Falcieri, Francesco; Benetazzo, Alvise; Sclavo, Mauro; Carniel, Sandro; Bergamasco, Andrea; Bonaldo, Davide; Barbariol, Francesco; Russo, Aniello
2014-05-01
The semi-enclosed Adriatic Sea, located in the north-eastern part of the Mediterranean Sea, is a small regional sea strongly influenced by riverine inputs. In its shallow northern sub-basin, both physical and biogeochemical features are strongly influenced by the Po River (together with some minor ones) through its freshwater plume, via buoyancy changes and nutrient and sediment loads. This interaction chiefly affects primary production, the onset of hypoxic and anoxic bottom-water conditions, the formation of strong salinity gradients (which influence the water column structure and both coastal and basin-wide circulation), and the formation processes of the Northern Adriatic Dense Water (NAdDW). The NAdDW is a dense water mass formed during winter in the shallow Northern Adriatic under buoyancy-loss conditions; it then travels southward along the Italian coasts, reaching the Southern Adriatic after a few months. The NAdDW formation process is mostly locally wind driven, but freshwater discharges have been shown to play an important preconditioning role, starting from the summer preceding the formation period. To investigate the relationship between the Po plume (as a preconditioning factor) and the subsequent dense water formation, the results of a numerical simulation with the Regional Ocean Modelling System (ROMS) were statistically analyzed. The model was implemented over the whole basin with a 2 km regular grid, with surface fluxes computed through a bulk flux formulation using a high-resolution meteorological model (COSMO I7). The only open boundary (the Otranto Strait) is imposed from an operational Mediterranean model (MFS), and the main river discharges are introduced as freshwater mass fluxes measured by the river gauges closest to the river mouths. The model was run for 8 years, from 2003 to 2010. The Po plume was analysed with a 2x3 Self-Organizing Map (SOM) and two major antithetic patterns
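A minimal 2x3 Self-Organizing Map of the kind used here can be written directly in NumPy; the input "plume descriptors" below are random stand-ins for the modelled plume fields, and the learning schedule is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(5)
# Random stand-ins for standardised daily plume descriptors (500 days, 2 features)
X = rng.normal(size=(500, 2))

grid = np.array([(i, j) for i in range(2) for j in range(3)], dtype=float)  # 2x3 map
W = rng.normal(size=(6, 2))                     # unit weight vectors
n_steps = 5 * len(X)
for t, x in enumerate(np.tile(X, (5, 1))):      # 5 passes over the data
    frac = t / n_steps
    lr = 0.5 * (1.0 - frac)                     # decaying learning rate
    sigma = 1.5 * (1.0 - frac) + 0.3            # decaying neighbourhood radius
    bmu = np.argmin(((W - x) ** 2).sum(axis=1)) # best-matching unit
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma**2))
    W += lr * h[:, None] * (x - W)

# Map every day onto its best-matching unit to get pattern frequencies
counts = np.bincount([int(np.argmin(((W - x) ** 2).sum(axis=1))) for x in X],
                     minlength=6)
print(counts)
```

Each unit's final weight vector plays the role of one characteristic plume pattern, and the counts give how often each pattern occurs.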
Hannequin, Pascal Paul
2015-06-07
Noise reduction in photon-counting images remains challenging, especially at low count levels. We have developed an original procedure which associates two complementary filters using a Wiener-derived approach. This approach combines two statistically adaptive filters into a dual-weighted (DW) filter. The first one, a statistically weighted adaptive (SWA) filter, replaces the central pixel of a sliding window with a statistically weighted sum of its neighbors. The second one, a statistical and heuristic noise extraction (extended) (SHINE-Ext) filter, performs a discrete cosine transformation (DCT) using sliding blocks. Each block is reconstructed using its significant components, which are selected using tests derived from multiple linear regression (MLR). The two filters are weighted according to Wiener theory. This approach has been validated using a numerical phantom and a real planar Jaszczak phantom. It has also been illustrated using planar bone scintigraphy and myocardial single-photon emission computed tomography (SPECT) data. The performance of the filters has been tested using the mean normalized absolute error (MNAE) between the filtered images and the reference noiseless or high-count images. Results show that the proposed filters quantitatively decrease the MNAE in the images and thus increase the signal-to-noise ratio (SNR). This allows one to work with lower-count images. The SHINE-Ext filter is well suited to large images and low-variance areas. DW filtering is efficient for small images and in high-variance areas. The relative proportion of eliminated noise generally decreases as the count level increases. In practice, SHINE filtering alone is recommended when pixel spacing is less than one-quarter of the effective resolution of the system and/or the size of the objects of interest. It can also be used when the practical interest of high frequencies is low. In any case, DW filtering will be preferable. The proposed filters have been applied to nuclear
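The SHINE-style block-DCT step can be caricatured as hard thresholding of block DCT coefficients; the MLR-derived significance tests of the actual filter are replaced here by a plain magnitude threshold, and the image and threshold are synthetic assumptions:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis: rows are frequencies, columns sample positions
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def block_dct_threshold(img, thresh, bs=8):
    # Reconstruct each block from its "significant" DCT coefficients only
    C = dct_matrix(bs)
    out = np.zeros_like(img)
    for i in range(0, img.shape[0], bs):
        for j in range(0, img.shape[1], bs):
            coef = C @ img[i:i+bs, j:j+bs] @ C.T
            coef[np.abs(coef) < thresh] = 0.0
            out[i:i+bs, j:j+bs] = C.T @ coef @ C
    return out

rng = np.random.default_rng(3)
truth = np.outer(np.linspace(10, 50, 16), np.linspace(1, 2, 16))  # smooth activity
noisy = rng.poisson(truth).astype(float)                          # counting noise
den = block_dct_threshold(noisy, thresh=10.0)

mnae = lambda a: np.mean(np.abs(a - truth)) / np.mean(truth)
print(f"MNAE noisy={mnae(noisy):.3f} filtered={mnae(den):.3f}")
```

The MNAE figure of merit is the same one the paper uses; smooth low-frequency structure survives the thresholding while most noise coefficients are zeroed.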
NASA Astrophysics Data System (ADS)
Hein, H.; Mai, S.; Mayer, B.; Pohlmann, T.; Barjenbruch, U.
2012-04-01
The interactions of tides, external surges, storm surges and waves, with an additional role of the coastal bathymetry, define the probability of extreme water levels at the coast. Probabilistic analysis as well as process-based numerical models allow the estimation of future states. From the physical point of view, both deterministic processes and stochastic residuals are the fundamentals of high-water statistics. This study uses a so-called model chain to reproduce historic statistics of tidal high water levels (Thw) as well as to predict future Thw statistics. The results of the numerical models are post-processed by a stochastic analysis. Recent studies show that nonstationary parametric approaches are required for the future extrapolation of extreme Thw. With the presented methods, a better prediction of time-dependent parameter sets seems possible. The investigation region of this study is the southern German Bight. The model chain represents a downscaling process, which starts with an emissions scenario. Regional atmospheric and ocean models refine the results of global climate models. The downscaling concept was chosen to resolve the coastal topography sufficiently. The North Sea and estuaries are modeled with the three-dimensional HAMburg Shelf Ocean Model. The simulated period covers 150 years (1950-2100). Results of four different hindcast runs and of one future prediction run are validated. Based on multi-scale analysis and the theory of entropy, we analyze whether any significant periodicities are represented numerically. Results show that even hindcasting the climate of Thw with a model chain for the last 60 years is a challenging task. For example, an additional modeling activity must be the inclusion of tides in regional climate ocean models. It is found that the statistics of climate variables derived from model results differ from the statistics derived from measurements. E.g. there are considerable shifts in
Arostegui, Inmaculada; Núñez-Antón, Vicente; Quintana, José M
2012-04-01
Patient-reported outcomes (PRO) are used as primary endpoints in medical research, and their statistical analysis is an important methodological issue. The theoretical assumptions of the selected methodology and the interpretation of its results are issues to take into account when selecting an appropriate statistical technique to analyse the data. We present eight methods of analysis of a popular PRO tool under different assumptions that lead to different interpretations of the results. All methods were applied to responses obtained from two of the health dimensions of the SF-36 Health Survey. The proposed methods are: multiple linear regression (MLR), with least-squares and bootstrap estimation, tobit regression, ordinal logistic and probit regressions, beta-binomial regression (BBR), binomial-logit-normal regression (BLNR) and coarsening. Selection of an appropriate model depends not only on its distributional assumptions but also on the continuous or ordinal features of the response and the fact that responses are constrained to a bounded interval. The BBR approach renders satisfactory results in a broad range of situations. MLR is not recommended, especially with skewed outcomes. Ordinal methods are only appropriate for outcomes with a small number of categories. Tobit regression is an acceptable option under normality assumptions and in the presence of a moderate ceiling or floor effect. The BLNR and coarsening proposals are also acceptable, but only under certain distributional assumptions that are difficult to test a priori. Interpretation of the results is most convenient with the BBR, BLNR and ordinal logistic regression approaches.
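The building block of BBR, the beta-binomial likelihood, fits in a few lines of standard-library Python; the scores below are simulated (not SF-36 data), the true parameters a=6, b=2 are an illustrative assumption, and the grid search is a crude stand-in for a proper optimiser:

```python
import math
import random
from collections import Counter

def betabinom_logpmf(k, n, a, b):
    # log[ C(n,k) * B(k+a, n-k+b) / B(a,b) ], written with lgamma only
    lg = math.lgamma
    return (lg(n + 1) - lg(k + 1) - lg(n - k + 1)
            + lg(k + a) + lg(n - k + b) - lg(n + a + b)
            - (lg(a) + lg(b) - lg(a + b)))

random.seed(4)
n = 100                         # a 0-100 bounded score, like an SF-36 dimension
# Simulate 300 respondents from a beta-binomial with a=6, b=2 (ceiling-prone)
data = [sum(random.random() < p for _ in range(n))
        for p in (random.betavariate(6.0, 2.0) for _ in range(300))]

counts = Counter(data)
def loglik(a, b):
    return sum(m * betabinom_logpmf(k, n, a, b) for k, m in counts.items())

# Grid-search MLE over (a, b) on a half-unit grid
a_hat, b_hat = max(((a / 2, b / 2) for a in range(1, 25) for b in range(1, 25)),
                   key=lambda ab: loglik(*ab))
print(a_hat, b_hat, round(a_hat / (a_hat + b_hat), 3))
```

A full BBR then lets a and b (or the mean a/(a+b)) depend on covariates through a link function.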
Advances in Domain Connectivity for Overset Grids Using the X-Rays Approach
NASA Technical Reports Server (NTRS)
Chan, William M.; Kim, Noah; Pandya, Shishir A.
2012-01-01
Advances in automation and robustness of the X-rays approach to domain connectivity for overset grids are presented. Given the surface definition for each component that makes up a complex configuration, the determination of hole points with appropriate hole boundaries is automatically and efficiently performed. Improvements made to the original X-rays approach for identifying the minimum hole include an automated closure scheme for hole-cutters with open boundaries, automatic determination of grid points to be considered for blanking by each hole-cutter, and an adaptive X-ray map to economically handle components in close proximity. Furthermore, an automated spatially varying offset of the hole boundary from the minimum hole is achieved using a dual wall-distance function and an orphan point removal iteration process. Results using the new scheme are presented for a number of static and relative motion test cases on a variety of aerospace applications.
NASA Technical Reports Server (NTRS)
Riha, Andrew P.
2005-01-01
As humans and robotic technologies are deployed in future constellation systems, differing traffic services will arise, e.g., realtime and non-realtime. In order to provide a quality of service framework that would allow humans and robotic technologies to interoperate over a wide and dynamic range of interactions, a method of classifying data as realtime or non-realtime is needed. In our paper, we present an approach that leverages the Consultative Committee for Space Data Systems (CCSDS) Advanced Orbiting Systems (AOS) data link protocol. Specifically, we redefine the AOS Transfer Frame Replay Flag in order to provide an automated store-and-forward approach on a per-service basis for use in the next-generation Interplanetary Network. In addition to addressing the problem of intermittent connectivity and associated services, we propose a follow-on methodology for prioritizing data through further modification of the AOS Transfer Frame.
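The proposed per-service store-and-forward behavior hinges on reading and writing a single flag bit in the AOS Transfer Frame header. The sketch below shows the bit manipulation involved; the byte offset and bit position follow the usual CCSDS AOS primary header layout (2-byte version/spacecraft/virtual-channel ID, 3-byte frame count, 1-byte signaling field), but treat them as illustrative assumptions rather than a normative encoding.

```python
# Minimal sketch of toggling the AOS Transfer Frame Replay Flag.
SIGNALING_FIELD_OFFSET = 5   # assumed: 6th byte of the primary header
REPLAY_FLAG_MASK = 0x80      # assumed: replay flag = most significant bit

def set_replay_flag(frame: bytearray, replay: bool) -> None:
    """Mark a frame for store-and-forward (replay) handling."""
    if replay:
        frame[SIGNALING_FIELD_OFFSET] |= REPLAY_FLAG_MASK
    else:
        frame[SIGNALING_FIELD_OFFSET] &= ~REPLAY_FLAG_MASK & 0xFF

def is_replay(frame: bytes) -> bool:
    return bool(frame[SIGNALING_FIELD_OFFSET] & REPLAY_FLAG_MASK)

frame = bytearray(6 + 4)     # 6-byte header plus a token payload
set_replay_flag(frame, True)
print(is_replay(frame))      # -> True
```

A relay node could then route frames with the flag set into persistent storage for later forwarding, while passing realtime frames straight through.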
Matson, Kevin D.; Tieleman, B. Irene
2011-01-01
The immune system is a complex collection of interrelated and overlapping solutions to the problem of disease. To deal with this complexity, researchers have devised multiple ways to measure immune function and to analyze the resulting data. In this way both organisms and researchers employ many tactics to solve a complex problem. One challenge facing ecological immunologists is the question of how these many dimensions of immune function can be synthesized to facilitate meaningful interpretations and conclusions. We tackle this challenge by employing and comparing several statistical methods, which we used to test assumptions about how multiple aspects of immune function are related at different organizational levels. We analyzed three distinct datasets that characterized 1) species, 2) subspecies, and 3) among- and within-individual level differences in the relationships among multiple immune indices. Specifically, we used common principal components analysis (CPCA) and two simpler approaches, pair-wise correlations and correlation circles. We also provide a simple example of how these techniques could be used to analyze data from multiple studies. Our findings lead to several general conclusions. First, relationships among indices of immune function may be consistent among some organizational groups (e.g. months over the annual cycle) but not others (e.g. species); therefore any assumption of consistency requires testing before further analyses. Second, simple statistical techniques used in conjunction with more complex multivariate methods give a clearer and more robust picture of immune function than using complex statistics alone. Moreover, these simpler approaches have potential for analyzing comparable data from multiple studies, especially as the field of ecological immunology moves towards greater methodological standardization. PMID:21526186
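The simpler of the techniques mentioned, pair-wise correlations among immune indices, can be sketched in a few lines. The index names, sample size, and effect structure below are hypothetical, not the datasets analyzed above.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 40
shared = rng.normal(size=n)  # latent factor shared by two indices
df = pd.DataFrame({
    "lysis":         shared + rng.normal(scale=0.5, size=n),
    "agglutination": shared + rng.normal(scale=0.5, size=n),
    "haptoglobin":   rng.normal(size=n),  # unrelated index
})

# Pair-wise Pearson correlations among the immune indices
corr = df.corr()
print(corr.round(2))
```

Inspecting such a matrix within each organizational group (e.g. per species or per month) is one way to test the assumption of consistent relationships before pooling data into a multivariate analysis like CPCA.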
An Ontological-Fuzzy Approach to Advance Reservation in Multi-Cluster Grids
NASA Astrophysics Data System (ADS)
Ferreira, D. J.; Dantas, M. A. R.; Bauer, Michael A.
2010-11-01
Advance reservation is an important mechanism for the successful utilization of available resources in distributed multi-cluster environments. This mechanism allows, for example, a user to provide parameters aimed at satisfying requirements related to applications' execution time and temporal dependence. This predictability can lead the system to reach higher levels of QoS. However, support for advance reservation has been restricted due to the complexity of large-scale configurations and the dynamic changes observed in these systems. This research work proposes an advance reservation method based on an ontology-fuzzy approach. It allows a user to reserve a wide variety of resources and enables large jobs to be reserved across different nodes. In addition, it dynamically verifies the possibility of reservation with the local RMS, avoiding future allocation conflicts. Experimental results, obtained through simulation, indicate that the proposed mechanism reached a successful level of flexibility for large jobs and a more appropriate distribution of resources in a distributed multi-cluster configuration.
Santos, José António; Galante-Oliveira, Susana; Barroso, Carlos
2011-03-01
The current work presents an innovative statistical approach to modelling ordinal variables in environmental monitoring studies. An ordinal variable has values that can only be compared as "less", "equal" or "greater", and no information is available about the size of the difference between two particular values. The ordinal variable under study here is the vas deferens sequence (VDS) used in imposex (superimposition of male sexual characters onto prosobranch females) field assessment programmes for monitoring tributyltin (TBT) pollution. The statistical methodology presented here is the ordered logit regression model. It assumes that the VDS is an ordinal variable whose values correspond to a process of imposex development that can be considered continuous in both the biological and the statistical sense and can be described by a latent, non-observable continuous variable. This model was applied to the case study of Nucella lapillus imposex monitoring surveys conducted on the Portuguese coast between 2003 and 2008 to evaluate the temporal evolution of TBT pollution in this country. In order to produce more reliable conclusions, the proposed model includes covariates that may influence the imposex response besides TBT (e.g. shell size). The model also provides an analysis of the environmental risk associated with TBT pollution by estimating the probability of the occurrence of females with VDS ≥ 2 in each year, according to OSPAR criteria. We consider that the proposed application of this statistical methodology has great potential in environmental monitoring whenever there is a need to model variables that can only be assessed on an ordinal scale.
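An ordered logit (proportional-odds) model of the kind described can be sketched from first principles: an ordinal response is generated by thresholding a latent continuous variable, and the cutpoints and covariate effects are recovered by maximum likelihood. The covariates, cutpoints, and effect sizes below are hypothetical, not the N. lapillus survey data.

```python
import numpy as np
from scipy import optimize, special

rng = np.random.default_rng(3)

# Hypothetical imposex data: VDS stage (0..3) driven by a latent variable
# increasing with TBT exposure and shell size (both standardized).
n = 400
tbt = rng.normal(size=n)
shell = rng.normal(size=n)
latent = 1.5 * tbt + 0.5 * shell + rng.logistic(size=n)
cuts_true = np.array([-1.0, 0.5, 2.0])
vds = np.searchsorted(cuts_true, latent)      # ordinal response 0..3
X = np.column_stack([tbt, shell])

def negloglik(params):
    beta = params[:2]
    # first cutpoint plus exponentiated increments keeps cutpoints ordered
    cuts = np.cumsum(np.concatenate(([params[2]], np.exp(params[3:]))))
    eta = X @ beta
    upper = np.append(cuts, np.inf)[vds] - eta
    lower = np.insert(cuts, 0, -np.inf)[vds] - eta
    p = special.expit(upper) - special.expit(lower)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

res = optimize.minimize(negloglik, x0=np.zeros(5), method="Nelder-Mead",
                        options={"maxiter": 4000, "maxfev": 4000})
beta_hat = res.x[:2]
print(np.round(beta_hat, 2))
```

From the fitted model, P(VDS ≥ 2) for a given exposure is `1 - expit(cuts[1] - eta)`, which is the kind of risk probability used above under the OSPAR criteria.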
Estimation of turbulent channel flow based on the wall measurement with a statistical approach
NASA Astrophysics Data System (ADS)
Hasegawa, Yosuke; Suzuki, Takao
2016-11-01
A turbulent channel flow at Re_tau = 100 with periodic boundary conditions is estimated with linear stochastic estimation based only on wall measurements, i.e. the streamwise and spanwise shear stresses as well as the pressure over the entire range of wavenumbers. The results reveal that the instantaneous measurement on the wall governs the success of the estimation in y+ < 20. The degree of agreement is equivalent to that reported by Chevalier et al. (2006) using a data-assimilation approach. This suggests that the instantaneous wall information dictates the estimation rather than the estimator solving the dynamical system. We feed the velocity components from the linear stochastic estimation into the Navier-Stokes system via a body-force term; the estimation then improves slightly in the log layer, indicating some benefit of involving a dynamical system, but also over-suppression of turbulent kinetic energy beyond the viscous sublayer by the linear stochastic estimation. Motions inaccurately estimated in the buffer layer prevent further reconstruction toward the centerline even if we relax the feedback forcing and let the flow evolve nonlinearly through the estimator. We also discuss the inherent limitations of turbulent flow estimation based on wall measurements.
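Linear stochastic estimation amounts to a least-squares map from the measurements m to the state u, L = ⟨u mᵀ⟩⟨m mᵀ⟩⁻¹, built from two-point correlations. The sketch below applies that formula to a small synthetic stand-in for the wall-measurement problem; the dimensions and coupling matrix are illustrative assumptions, not the channel-flow data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in: wall measurements m (e.g. shear stresses, pressure)
# linearly correlated with an off-wall velocity state u, plus noise.
n_snapshots, n_meas, n_state = 2000, 3, 5
A = rng.normal(size=(n_state, n_meas))                   # hidden coupling
m = rng.normal(size=(n_snapshots, n_meas))
u = m @ A.T + 0.3 * rng.normal(size=(n_snapshots, n_state))

# Linear stochastic estimation: L = <u m^T> <m m^T>^{-1}
Cum = u.T @ m / n_snapshots
Cmm = m.T @ m / n_snapshots
L = Cum @ np.linalg.inv(Cmm)

u_hat = m @ L.T
corr = np.corrcoef(u.ravel(), u_hat.ravel())[0, 1]
print(round(corr, 2))
```

In the actual flow the correlation between wall quantities and the velocity field decays with wall distance, which is why the estimation quality degrades beyond y+ ≈ 20.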
Han, Fu Liang; Li, Zheng; Xu, Yan
2015-12-01
Monomeric anthocyanin contributions to young red wine color were investigated in this study using partial least squares regression (PLSR) and aqueous alcohol solutions. Results showed that the relationship between anthocyanin concentration and solution color was better fitted by a quadratic regression than by a linear or cubic regression. Malvidin-3-O-glucoside was estimated to contribute the most to young red wine color at its concentration in wine, whereas peonidin-3-O-glucoside, at its concentration, contributed the least. The PLSR suggested that, at equal concentrations, delphinidin-3-O-glucoside and peonidin-3-O-glucoside produced a stronger young red wine color than malvidin-3-O-glucoside. These estimates were further confirmed by their color in aqueous alcohol solutions. These results suggest that delphinidin-3-O-glucoside and peonidin-3-O-glucoside are the primary anthocyanins for enhancing young red wine color by increasing their concentrations. This study could provide an alternative approach to improving young red wine color by adjusting anthocyanin composition and concentration.
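The quadratic (rather than linear) concentration-color relationship reported above can be illustrated by comparing the fit of both polynomial orders on a saturating response. The concentrations, coefficients, and noise level below are invented for illustration, not the wine data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical data: color intensity saturating with anthocyanin
# concentration, which a quadratic captures better than a straight line.
conc = np.linspace(0.1, 2.0, 40)  # concentration, arbitrary units
color = 1.8 * conc - 0.45 * conc**2 + rng.normal(scale=0.05, size=conc.size)

def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

r2_lin = r_squared(color, np.polyval(np.polyfit(conc, color, 1), conc))
r2_quad = r_squared(color, np.polyval(np.polyfit(conc, color, 2), conc))
print(round(r2_lin, 3), round(r2_quad, 3))
```

The quadratic term captures the diminishing color gain per unit of added anthocyanin, which a straight line systematically misses at both ends of the concentration range.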
Adopting a practical statistical approach for evaluating assay agreement in drug discovery.
Sun, Dongyu; Whitty, Adrian; Papadatos, James; Newman, Miki; Donnelly, Jason; Bowes, Scott; Josiah, Serene
2005-08-01
The authors assess the equivalence of 2 assays and put forward a general approach for assay agreement analysis that can be applied during drug discovery. Data sets generated by different assays are routinely compared to each other during the process of drug discovery. For a given target, the assays used for high-throughput screening and structure-activity relationship studies will most likely differ in their assay reagents, assay conditions, and/or detection technology, which makes the interpretation of data between assays difficult, particularly as most assays are used to measure quantitative changes in compound potency against the target. To better quantify the relationship of data sets from different assays for the same target, the authors evaluated the agreement between results generated by 2 different assays that measure the activity of compounds against the same protein, ALK5. The authors show that the agreement between data sets can be quantified using correlation and Bland-Altman plots, and the precision of the assays can be used to define the expectations of agreement between 2 assays. They propose a scheme for addressing issues of assay data equivalence, which can be applied to address questions of how data sets compare during the lead identification and lead optimization processes in which assays are frequently added and changed.
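The Bland-Altman analysis mentioned above reduces to computing the mean difference (bias) between paired assay results and the 95% limits of agreement, bias ± 1.96 SD of the differences. The sketch below applies this to synthetic paired potencies; the sample size, offset, and noise levels are assumptions, not the ALK5 data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical pIC50 values for the same compounds from two assays,
# with a small systematic offset between them.
n = 60
truth = rng.normal(7.0, 1.0, size=n)
assay1 = truth + rng.normal(scale=0.15, size=n)
assay2 = truth + 0.1 + rng.normal(scale=0.15, size=n)

# Bland-Altman statistics: bias and 95% limits of agreement
diff = assay1 - assay2
bias = diff.mean()
spread = 1.96 * diff.std(ddof=1)
loa = (bias - spread, bias + spread)
print(round(bias, 2), np.round(loa, 2))
```

Plotting `diff` against the pairwise means `(assay1 + assay2) / 2` with horizontal lines at `bias` and `loa` gives the standard Bland-Altman plot; if the limits of agreement are narrower than the potency differences one needs to resolve, the assays can be treated as interchangeable.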
An Improved Statistical Solution for Global Seismicity by the HIST-ETAS Approach
NASA Astrophysics Data System (ADS)
Chu, A.; Ogata, Y.; Katsura, K.
2010-12-01
For long-term global seismic model fitting, recent work by Chu et al. (2010) applied the spatial-temporal ETAS model (Ogata 1998) to global data partitioned into tectonic zones based on geophysical characteristics (Bird 2003), showing substantial improvements in model fit compared with a single overall global model. While the ordinary ETAS model assumes constant parameter values across the entire region analyzed, the hierarchical space-time ETAS model (HIST-ETAS, Ogata 2004) is a more recently introduced approach that allows regional variation of the parameters for more accurate seismicity prediction. As the HIST-ETAS model has been fit to regional data from Japan (Ogata 2010), our work applies it to describe global seismicity. Employing Akaike's Bayesian Information Criterion (ABIC) as an assessment method, we compare the maximum likelihood results obtained with zone divisions to those obtained from an overall global model. Location-dependent parameters of the model and Gutenberg-Richter b-values are optimized, and seismological interpretations are discussed.
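At the core of any ETAS variant is the conditional intensity: a background rate plus aftershock contributions from all past events, each scaled by magnitude and decaying with elapsed time (Omori-Utsu law). A minimal temporal sketch follows; the parameter values are illustrative assumptions, not fitted global values, and the full spatial-temporal and hierarchical models add spatial kernels and location-dependent parameters on top of this.

```python
import numpy as np

# Temporal ETAS conditional intensity:
# lambda(t) = mu + sum_{t_i < t} K * exp(alpha * (m_i - M0)) / (t - t_i + c)**p
mu, K, alpha, c, p, M0 = 0.1, 0.05, 1.0, 0.01, 1.1, 4.0  # illustrative

def etas_intensity(t, times, mags):
    times = np.asarray(times)
    mags = np.asarray(mags)
    past = times < t
    contrib = K * np.exp(alpha * (mags[past] - M0)) / (t - times[past] + c) ** p
    return mu + contrib.sum()

times = [1.0, 2.0, 2.5]   # event times (days), hypothetical
mags = [5.0, 4.5, 6.0]    # magnitudes, hypothetical
rate = etas_intensity(3.0, times, mags)
print(round(rate, 3))
```

In the hierarchical model the constants `mu`, `K`, `alpha`, `c`, `p` become smoothly location-dependent fields, and ABIC trades their flexibility against the improvement in likelihood.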
STATISTICS OF MEASURING NEUTRON STAR RADII: ASSESSING A FREQUENTIST AND A BAYESIAN APPROACH
Özel, Feryal; Psaltis, Dimitrios
2015-09-10
Measuring neutron star radii with spectroscopic and timing techniques relies on the combination of multiple observables to break the degeneracies between the mass and radius introduced by general relativistic effects. Here, we explore a previously used frequentist and a newly proposed Bayesian framework to obtain the most likely value and the uncertainty in such a measurement. We find that for the expected range of masses and radii and for realistic measurement errors, the frequentist approach suffers from biases that are larger than the accuracy in the radius measurement required to distinguish between the different equations of state. In contrast, in the Bayesian framework, the inferred uncertainties are larger, but the most likely values do not suffer from such biases. We also investigate ways of quantifying the degree of consistency between different spectroscopic measurements from a single source. We show that a careful assessment of the systematic uncertainties in the measurements eliminates the need for introducing ad hoc biases, which lead to artificially large inferred radii.
Chiu, Weihsueh A.; Euling, Susan Y.; Scott, Cheryl Siegel; Subramaniam, Ravi P.
2013-09-15
The contribution of genomics and associated technologies to human health risk assessment for environmental chemicals has focused largely on elucidating mechanisms of toxicity, as discussed in other articles in this issue. However, there is interest in moving beyond hazard characterization to making more direct impacts on quantitative risk assessment (QRA) — i.e., the determination of toxicity values for setting exposure standards and cleanup values. We propose that the evolution of QRA of environmental chemicals in the post-genomic era will involve three somewhat overlapping phases in which different types of approaches begin to mature. The initial focus (in Phase I) has been and continues to be on “augmentation” of weight of evidence — using genomic and related technologies qualitatively to increase the confidence in and scientific basis of the results of QRA. Efforts aimed towards “integration” of these data with traditional animal-based approaches, in particular quantitative predictors, or surrogates, for the in vivo toxicity data to which they have been anchored, are just beginning to be explored now (in Phase II). In parallel, there is a recognized need for “expansion” of the use of established biomarkers of susceptibility or risk of human diseases and disorders for QRA, particularly for addressing the issues of cumulative assessment and population risk. Ultimately (in Phase III), substantial further advances could be realized by the development of novel molecular and pathway-based biomarkers and statistical and in silico models that build on anticipated progress in understanding the pathways of human diseases and disorders. Such efforts would facilitate a gradual “reorientation” of QRA towards approaches that more directly link environmental exposures to human outcomes.