Sample records for sufficiency statistics

  1. Characterizations of linear sufficient statistics

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Redner, R.; Decell, H. P., Jr.

    1977-01-01

    Conditions were established under which a surjective bounded linear operator T from a Banach space X to a Banach space Y is a sufficient statistic for a dominated family of probability measures defined on the Borel sets of X. These results were applied to characterize linear sufficient statistics for families of the exponential type, including the Wishart and multivariate normal distributions as special cases. The latter result was used to establish precisely which procedures for sampling from a normal population have the property that the sample mean is a sufficient statistic.

  2. Role of sufficient statistics in stochastic thermodynamics and its implication to sensory adaptation

    NASA Astrophysics Data System (ADS)

    Matsumoto, Takumi; Sagawa, Takahiro

    2018-04-01

    A sufficient statistic is a central concept in statistics: a random variable that contains all of the information required for an inference task. We investigate the roles of sufficient statistics and related quantities in stochastic thermodynamics. Specifically, we prove that for general continuous-time bipartite networks, the existence of a sufficient statistic implies that an informational quantity called the sensory capacity attains its maximum. Since maximal sensory capacity imposes the constraint that the energetic efficiency cannot exceed one-half, our result implies that the existence of a sufficient statistic is inevitably accompanied by energetic dissipation. We also show that, in a particular parameter region of linear Langevin systems, there exists an optimal noise intensity at which the sensory capacity, the information-thermodynamic efficiency, and the total entropy production are optimized simultaneously. We apply our general result to a model of sensory adaptation of E. coli and find that the sensory capacity is nearly maximal with experimentally realistic parameters.

  3. The Performance of a PN Spread Spectrum Receiver Preceded by an Adaptive Interference Suppression Filter.

    DTIC Science & Technology

    1982-12-01

    Sequence dj Estimate of the Desired Signal DEL Sampling Time Interval DS Direct Sequence c Sufficient Statistic E/T Signal Power Erfc Complementary Error...Namely, a white Gaussian noise (WGN) generator was added. Also, a statistical subroutine was added in order to assess performance improvement at the...reference code and then passed through a correlation detector whose output is the sufficient statistic, e. Using a threshold device and the sufficient

  4. Minimal sufficient positive-operator valued measure on a separable Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuramochi, Yui, E-mail: kuramochi.yui.22c@st.kyoto-u.ac.jp

    We introduce a concept of a minimal sufficient positive-operator valued measure (POVM), which is the least redundant POVM among the POVMs that have the equivalent information about the measured quantum system. Assuming the system Hilbert space to be separable, we show that for a given POVM, a sufficient statistic called a Lehmann-Scheffé-Bahadur statistic induces a minimal sufficient POVM. We also show that every POVM has an equivalent minimal sufficient POVM and that such a minimal sufficient POVM is unique up to relabeling neglecting null sets. We apply these results to discrete POVMs and information conservation conditions proposed by the author.

  5. On sufficient statistics of least-squares superposition of vector sets.

    PubMed

    Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M

    2015-06-01

    The problem of superposition of two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant-time) computation of superpositions (and sufficient statistics) of vector sets that are composed from constituent vector sets under addition or deletion operations, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of methods that commonly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
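    The additivity claimed above can be illustrated directly. The following is a minimal numpy sketch (not the authors' C++ library) of one common choice of sufficient statistics for rigid-body superposition: the number of correspondences, the coordinate sums of each set, the sum of outer products, and the total squared norm. Merging two previously superposed fragments then costs O(1), and the optimal rotation and residual follow from an SVD of the centered cross-covariance.

    ```python
    import numpy as np

    def suff_stats(A, B):
        """Additive sufficient statistics for least-squares superposition of
        corresponding vector sets A and B, each of shape (n, 3)."""
        return {
            "n": len(A),
            "sa": A.sum(axis=0),                    # sum of vectors in A
            "sb": B.sum(axis=0),                    # sum of vectors in B
            "sab": A.T @ B,                         # sum of outer products a_i b_i^T
            "sq": (A ** 2).sum() + (B ** 2).sum(),  # total squared norm
        }

    def merge(s1, s2):
        """Combine the statistics of two disjoint correspondence sets in O(1)."""
        return {k: s1[k] + s2[k] for k in s1}

    def superpose(s):
        """Optimal rotation (mapping centered A onto centered B) and the
        least-squares error, computed from the statistics alone (Kabsch-style)."""
        n, sa, sb, sab, sq = s["n"], s["sa"], s["sb"], s["sab"], s["sq"]
        C = sab - np.outer(sa, sb) / n              # centered cross-covariance
        U, S, Vt = np.linalg.svd(C)
        d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        err = sq - (sa @ sa + sb @ sb) / n - 2 * (S[0] + S[1] + d * S[2])
        return R, err
    ```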

  6. Information trimming: Sufficient statistics, mutual information, and predictability from effective channel states

    NASA Astrophysics Data System (ADS)

    James, Ryan G.; Mahoney, John R.; Crutchfield, James P.

    2017-06-01

    One of the most basic characterizations of the relationship between two random variables, X and Y, is the value of their mutual information. Unfortunately, calculating it analytically and estimating it empirically are often stymied by the extremely large dimension of the variables. One might hope to replace such a high-dimensional variable by a smaller one that preserves its relationship with the other. It is well known that either X (or Y) can be replaced by its minimal sufficient statistic about Y (or X) while preserving the mutual information. While intuitively reasonable, it is not obvious or straightforward that both variables can be replaced simultaneously. We demonstrate that this is in fact possible: the information X's minimal sufficient statistic preserves about Y is exactly the information that Y's minimal sufficient statistic preserves about X. We call this procedure information trimming. As an important corollary, we consider the case where one variable is a stochastic process' past and the other its future. In this case, the mutual information is the channel transmission rate between the channel's effective states. That is, the past-future mutual information (the excess entropy) is the amount of information about the future that can be predicted using the past. Translating our result about minimal sufficient statistics, this is equivalent to the mutual information between the forward- and reverse-time causal states of computational mechanics. We close by discussing multivariate extensions to this use of minimal sufficient statistics.

  7. Accessible Information Without Disturbing Partially Known Quantum States on a von Neumann Algebra

    NASA Astrophysics Data System (ADS)

    Kuramochi, Yui

    2018-04-01

    This paper addresses the problem of how much information we can extract without disturbing a statistical experiment, which is a family of partially known normal states on a von Neumann algebra. We define the classical part of a statistical experiment as the restriction of the equivalent minimal sufficient statistical experiment to the center of the outcome space, which, in the case of density operators on a Hilbert space, corresponds to the classical probability distributions appearing in the maximal decomposition by Koashi and Imoto (Phys. Rev. A 66, 022318, 2002). We show that we can access by a Schwarz or completely positive channel at most the classical part of a statistical experiment if we do not disturb the states. We apply this result to the broadcasting problem of a statistical experiment. We also show that the classical part of the direct product of statistical experiments is the direct product of the classical parts of the statistical experiments. The proof of the latter result is based on the theorem that the direct product of minimal sufficient statistical experiments is also minimal sufficient.

  8. An empirical approach to sufficient similarity in dose-responsiveness: Utilization of statistical distance as a similarity measure.

    EPA Science Inventory

    Using statistical equivalence testing logic and mixed model theory, an approach has been developed that extends the work of Stork et al. (JABES, 2008) to define sufficient similarity in dose-response for chemical mixtures containing the same chemicals in different ratios ...

  9. Targeted On-Demand Team Performance App Development

    DTIC Science & Technology

    2016-10-01

    from three sites; 6) Preliminary analysis indicates larger than estimated effect size and study is sufficiently powered for generalizable outcomes...statistical analyses, and examine any resulting qualitative data for trends or connections to statistical outcomes. On Schedule 21 Predictive...Preliminary analysis indicates larger than estimated effect size and study is sufficiently powered for generalizable outcomes. What opportunities for

  10. Georg Rasch and Benjamin Wright's Struggle with the Unidimensional Polytomous Model with Sufficient Statistics

    ERIC Educational Resources Information Center

    Andrich, David

    2016-01-01

    This article reproduces correspondence between Georg Rasch of The University of Copenhagen and Benjamin Wright of The University of Chicago in the period from January 1966 to July 1967. This correspondence reveals their struggle to operationalize a unidimensional measurement model with sufficient statistics for responses in a set of ordered…

  11. An audit of the statistics and the comparison with the parameter in the population

    NASA Astrophysics Data System (ADS)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size needed to closely estimate the statistics for particular parameters remains an issue. Although sample size may have been calculated with reference to the objective of the study, it is difficult to confirm whether the statistics are close to the parameter for a particular population. A guideline that uses a p-value less than 0.05 is widely used as inferential evidence. Therefore, this study audited results that were analyzed from various sub-samples and statistical analyses and compared the results with the parameters in three different populations. Eight types of statistical analysis and eight sub-samples for each statistical analysis were analyzed. The results showed that the statistics were consistent and close to the parameters when the study sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters that involve categorical variables compared with numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.

  12. Infants Segment Continuous Events Using Transitional Probabilities

    ERIC Educational Resources Information Center

    Stahl, Aimee E.; Romberg, Alexa R.; Roseberry, Sarah; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathryn

    2014-01-01

    Throughout their 1st year, infants adeptly detect statistical structure in their environment. However, little is known about whether statistical learning is a primary mechanism for event segmentation. This study directly tests whether statistical learning alone is sufficient to segment continuous events. Twenty-eight 7- to 9-month-old infants…

  13. Statistical Learning of Phonetic Categories: Insights from a Computational Approach

    ERIC Educational Resources Information Center

    McMurray, Bob; Aslin, Richard N.; Toscano, Joseph C.

    2009-01-01

    Recent evidence (Maye, Werker & Gerken, 2002) suggests that statistical learning may be an important mechanism for the acquisition of phonetic categories in the infant's native language. We examined the sufficiency of this hypothesis and its implications for development by implementing a statistical learning mechanism in a computational model…

  14. Constructing and Modifying Sequence Statistics for relevent Using informR in 𝖱

    PubMed Central

    Marcum, Christopher Steven; Butts, Carter T.

    2015-01-01

    The informR package greatly simplifies the analysis of complex event histories in 𝖱 by providing user-friendly tools to build sufficient statistics for the relevent package. Historically, building sufficient statistics to model event sequences (of the form a→b) using the egocentric generalization of Butts’ (2008) relational event framework for modeling social action has been cumbersome. The informR package simplifies the construction of the complex list of arrays needed for rem() model fitting in a variety of cases involving egocentric event data, multiple event types, and/or support constraints. This paper introduces these tools using examples from real data extracted from the American Time Use Survey. PMID:26185488

  15. Statistics and Title VII Proof: Prima Facie Case and Rebuttal.

    ERIC Educational Resources Information Center

    Whitten, David

    1978-01-01

    The method and means by which statistics can raise a prima facie case of Title VII violation are analyzed. A standard is identified that can be applied to determine whether a statistical disparity is sufficient to shift the burden to the employer to rebut a prima facie case of discrimination. (LBH)

  16. Retrocausal Effects As A Consequence of Orthodox Quantum Mechanics Refined To Accommodate The Principle Of Sufficient Reason

    NASA Astrophysics Data System (ADS)

    Stapp, Henry P.

    2011-11-01

    The principle of sufficient reason asserts that anything that happens does so for a reason: no definite state of affairs can come into being unless there is a sufficient reason why that particular thing should happen. This principle is usually attributed to Leibniz, although the first recorded Western philosopher to use it was Anaximander of Miletus. The demand that nature be rational, in the sense that it be compatible with the principle of sufficient reason, conflicts with a basic feature of contemporary orthodox physical theory, namely the notion that nature's response to the probing action of an observer is determined by pure chance, and hence on the basis of absolutely no reason at all. This appeal to pure chance can be deemed to have no rational fundamental place in reason-based Western science. It is argued here, on the basis of the other basic principles of quantum physics, that in a world that conforms to the principle of sufficient reason, the usual quantum statistical rules will naturally emerge at the pragmatic level, in cases where the reason behind nature's choice of response is unknown, but that the usual statistics can become biased in an empirically manifest way when the reason for the choice is empirically identifiable. It is shown here that if the statistical laws of quantum mechanics were to be biased in this way then the basically forward-in-time unfolding of empirical reality described by orthodox quantum mechanics would generate the appearances of backward-time-effects of the kind that have been reported in the scientific literature.

  17. Vibration Transmission through Rolling Element Bearings in Geared Rotor Systems

    DTIC Science & Technology

    1990-11-01

    4.8 Concluding Remarks ... V STATISTICAL ENERGY ANALYSIS ...and dynamic finite element techniques are used to develop the discrete vibration models while the statistical energy analysis method is used for the broad...bearing system studies, geared rotor system studies, and statistical energy analysis. Each chapter is self-sufficient since it is written in a

  18. Retrocausal Effects as a Consequence of Quantum Mechanics Refined to Accommodate the Principle of Sufficient Reason

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stapp, Henry P.

    2011-05-10

    The principle of sufficient reason asserts that anything that happens does so for a reason: no definite state of affairs can come into being unless there is a sufficient reason why that particular thing should happen. This principle is usually attributed to Leibniz, although the first recorded Western philosopher to use it was Anaximander of Miletus. The demand that nature be rational, in the sense that it be compatible with the principle of sufficient reason, conflicts with a basic feature of contemporary orthodox physical theory, namely the notion that nature's response to the probing action of an observer is determined by pure chance, and hence on the basis of absolutely no reason at all. This appeal to pure chance can be deemed to have no rational fundamental place in reason-based Western science. It is argued here, on the basis of the other basic principles of quantum physics, that in a world that conforms to the principle of sufficient reason, the usual quantum statistical rules will naturally emerge at the pragmatic level, in cases where the reason behind nature's choice of response is unknown, but that the usual statistics can become biased in an empirically manifest way when the reason for the choice is empirically identifiable. It is shown here that if the statistical laws of quantum mechanics were to be biased in this way then the basically forward-in-time unfolding of empirical reality described by orthodox quantum mechanics would generate the appearances of backward-time-effects of the kind that have been reported in the scientific literature.

  19. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains.

    PubMed

    Tataru, Paula; Hobolth, Asger

    2011-12-05

    Continuous-time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences at the nucleotide, amino acid, or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore, we find that UNI is usually faster than EVD.
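    One way to see what the EXPM approach computes is the standard auxiliary-matrix identity for integrals of matrix exponentials (Van Loan's construction). The sketch below illustrates that idea with scipy; it is not the authors' R implementation, and the rate matrix and states are hypothetical.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def expected_time_in_state(Q, T, a, b, c):
        """Expected time spent in state c on [0, T], conditioned on the
        end-points X(0) = a and X(T) = b, for a CTMC with rate matrix Q.

        Uses the block identity: the top-right block of expm(T * [[Q, E], [0, Q]])
        equals the integral of expm(Q*s) @ E @ expm(Q*(T - s)) over s in [0, T],
        where E is the indicator matrix of state c."""
        n = Q.shape[0]
        E = np.zeros((n, n))
        E[c, c] = 1.0
        A = np.block([[Q, E], [np.zeros((n, n)), Q]])
        integral = expm(A * T)[:n, n:]
        P = expm(Q * T)                 # end-point transition probabilities
        return integral[a, b] / P[a, b]
        # Expected numbers of i -> j changes follow analogously, with E replaced
        # by Q[i, j] times the single-entry matrix e_i e_j^T.

    # Tiny two-state example with illustrative rates:
    Q = np.array([[-1.0, 1.0],
                  [2.0, -2.0]])
    print(expected_time_in_state(Q, T=1.0, a=0, b=1, c=0))
    ```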

  20. Statistical Analysis Experiment for Freshman Chemistry Lab.

    ERIC Educational Resources Information Center

    Salzsieder, John C.

    1995-01-01

    Describes a laboratory experiment dissolving zinc from galvanized nails in which data can be gathered very quickly for statistical analysis. The data have sufficient significant figures and the experiment yields a nice distribution of random errors. Freshman students can gain an appreciation of the relationships between random error, number of…

  1. Statistical correlations in an ideal gas of particles obeying fractional exclusion statistics.

    PubMed

    Pellegrino, F M D; Angilella, G G N; March, N H; Pucci, R

    2007-12-01

    After a brief discussion of the concepts of fractional exchange and fractional exclusion statistics, we report partly analytical and partly numerical results on thermodynamic properties of assemblies of particles obeying fractional exclusion statistics. The effect of dimensionality is one focal point, the ratio μ/(k_B T) of chemical potential to thermal energy being obtained numerically as a function of a scaled particle density. Pair correlation functions are also presented as a function of the statistical parameter, with Friedel oscillations developing close to the fermion limit, for sufficiently large density.

  2. Efficient Scores, Variance Decompositions and Monte Carlo Swindles.

    DTIC Science & Technology

    1984-08-28

    ...Then a version of Pythagoras' theorem gives the variance decomposition (6.1) var_{P_0}(T) = var_{P_0}(S) + var_{P_0}(T - S). One way to see this is to note...complete sufficient statistics for (β, σ), and that the standardized residuals σ̂⁻¹(y - Xβ̂) are ancillary. Basu's sufficiency-ancillarity theorem

  3. Powerful Statistical Inference for Nested Data Using Sufficient Summary Statistics

    PubMed Central

    Dowding, Irene; Haufe, Stefan

    2018-01-01

    Hierarchically-organized data arise naturally in many psychology and neuroscience studies. As the standard assumption of independent and identically distributed samples does not hold for such data, two important problems are to accurately estimate group-level effect sizes, and to obtain powerful statistical tests against group-level null hypotheses. A common approach is to summarize subject-level data by a single quantity per subject, which is often the mean or the difference between class means, and treat these as samples in a group-level t-test. This “naive” approach is, however, suboptimal in terms of statistical power, as it ignores information about the intra-subject variance. To address this issue, we review several approaches to deal with nested data, with a focus on methods that are easy to implement. With what we call the sufficient-summary-statistic approach, we highlight a computationally efficient technique that can improve statistical power by taking into account within-subject variances, and we provide step-by-step instructions on how to apply this approach to a number of frequently-used measures of effect size. The properties of the reviewed approaches and the potential benefits over a group-level t-test are quantitatively assessed on simulated data and demonstrated on EEG data from a simulated-driving experiment. PMID:29615885
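    As an illustration of why weighting by within-subject precision helps, the sketch below contrasts the naive group-level t-test on per-subject effects with a fixed-effects, inverse-variance-weighted combination. This is a generic sketch of the weighting idea under simplified assumptions (known subject-level variances), not the exact estimator or test from the paper.

    ```python
    import numpy as np
    from scipy import stats

    def naive_group_test(subject_effects):
        """Naive approach: one-sample t-test on the per-subject effects."""
        return stats.ttest_1samp(subject_effects, popmean=0.0)

    def weighted_group_z(subject_effects, subject_vars):
        """Inverse-variance-weighted (fixed-effects) combination of per-subject
        effects, yielding a z-statistic against the group-level null of zero."""
        w = 1.0 / np.asarray(subject_vars)        # precision weights
        est = np.sum(w * subject_effects) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))
        z = est / se
        return est, z, 2 * stats.norm.sf(abs(z))

    # Hypothetical per-subject mean effects and variances of those means:
    effects = np.array([0.8, 1.1, 0.3, 0.9, 0.5])
    variances = np.array([0.10, 0.05, 0.40, 0.08, 0.20])
    print(naive_group_test(effects))
    print(weighted_group_z(effects, variances))
    ```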

  4. Role of sufficient phosphorus in biodiesel production from diatom Phaeodactylum tricornutum.

    PubMed

    Yu, Shi-Jin; Shen, Xiao-Fei; Ge, Huo-Qing; Zheng, Hang; Chu, Fei-Fei; Hu, Hao; Zeng, Raymond J

    2016-08-01

    In order to study the role of sufficient phosphorus (P) in biodiesel production by microalgae, Phaeodactylum tricornutum was cultivated in six different media treatments combining nitrogen (N) sufficiency/deprivation and phosphorus sufficiency/limitation/deprivation. Profiles of N and P, biomass, and fatty acid (FA) content and composition were measured during a 7-day cultivation period. The results showed that the FA content in microalgae biomass was promoted by P deprivation. However, statistical analysis showed that FA productivity had no significant difference (p = 0.63, >0.05) under the treatments of N deprivation with P sufficiency (N-P) and N deprivation with P deprivation (N-P-), indicating that P sufficiency in N deprivation medium has little effect on increasing biodiesel productivity from P. tricornutum. It was also found that P absorption in the N-P medium was 1.41 times higher than that in the N sufficiency and P sufficiency (NP) medium. N deprivation with P limitation (N-P-l) was the optimal treatment for producing biodiesel from P. tricornutum because of both the highest FA productivity and good biodiesel quality.

  5. Stationary conditions for stochastic differential equations

    NASA Technical Reports Server (NTRS)

    Adomian, G.; Walker, W. W.

    1972-01-01

    This is a preliminary study of possible necessary and sufficient conditions to insure stationarity in the solution process for a stochastic differential equation. It indirectly sheds some light on ergodicity properties and shows that the spectral density is generally inadequate as a statistical measure of the solution. Further work is proceeding on a more general theory which gives necessary and sufficient conditions in a form useful for applications.

  6. Identifying natural flow regimes using fish communities

    NASA Astrophysics Data System (ADS)

    Chang, Fi-John; Tsai, Wen-Ping; Wu, Tzu-Ching; Chen, Hung-kwai; Herricks, Edwin E.

    2011-10-01

    Modern water resources management has adopted natural flow regimes as reasonable targets for river restoration and conservation. The characterization of a natural flow regime begins with the development of hydrologic statistics from flow records. However, little guidance exists for defining the period of record needed for regime determination. In Taiwan, the Taiwan Eco-hydrological Indicator System (TEIS), a group of hydrologic statistics selected for fisheries relevance, is being used to evaluate ecological flows. The TEIS consists of a group of hydrologic statistics selected to characterize the relationships between flow and the life history of indigenous species. Using the TEIS and biosurvey data for Taiwan, this paper identifies the length of hydrologic record sufficient for natural flow regime characterization. To define the ecological hydrology of fish communities, this study connected hydrologic statistics to fish communities by using methods to define antecedent conditions that influence existing community composition. A moving average method was applied to TEIS statistics to reflect the effects of antecedent flow condition and a point-biserial correlation method was used to relate fisheries collections with TEIS statistics. The resulting fish species-TEIS (FISH-TEIS) hydrologic statistics matrix takes full advantage of historical flows and fisheries data. The analysis indicates that, in the watersheds analyzed, averaging TEIS statistics for the present year and 3 years prior to the sampling date, termed MA(4), is sufficient to develop a natural flow regime. This result suggests that flow regimes based on hydrologic statistics for the period of record can be replaced by regimes developed for sampled fish communities.

  7. Toward Self Sufficiency: Social Issues in the Nineties. Proceedings of the National Association for Welfare Research and Statistics (33rd, Scottsdale, Arizona, August 7-11, 1993).

    ERIC Educational Resources Information Center

    National Association for Welfare Research and Statistics, Olympia, WA.

    The presentations compiled in these proceedings on welfare and self-sufficiency reflect much of the current research in areas of housing, health, employment and training, welfare and reform, nutrition, child support, child care, and youth. The first section provides information on the conference and on the National Association for Welfare Research…

  8. Which Variables Associated with Data-Driven Instruction Are Believed to Best Predict Urban Student Achievement?

    ERIC Educational Resources Information Center

    Greer, Wil

    2013-01-01

    This study identified the variables associated with data-driven instruction (DDI) that are perceived to best predict student achievement. Of the DDI variables discussed in the literature, 51 had a sufficient research base to warrant statistical analysis. Of them, 26 were statistically significant. Multiple regression and an…

  9. Wind speed statistics for Goldstone, California, anemometer sites

    NASA Technical Reports Server (NTRS)

    Berg, M.; Levy, R.; Mcginness, H.; Strain, D.

    1981-01-01

    An exploratory wind survey at an antenna complex was summarized statistically for application to future windmill designs. Data were collected at six locations from a total of 10 anemometers. Statistics include means, standard deviations, cubes, pattern factors, correlation coefficients, and exponents for power law profile of wind speed. Curves presented include: mean monthly wind speeds, moving averages, and diurnal variation patterns. It is concluded that three of the locations have sufficiently strong winds to justify consideration for windmill sites.

  10. Why Current Statistics of Complementary Alternative Medicine Clinical Trials is Invalid.

    PubMed

    Pandolfi, Maurizio; Carreras, Giulia

    2018-06-07

    It is not sufficiently known that frequentist statistics cannot provide direct information on the probability that the research hypothesis tested is correct. The error resulting from this misunderstanding is compounded when the hypotheses under scrutiny have precarious scientific bases, as those of complementary alternative medicine (CAM) generally do. In such cases, it is mandatory to use inferential statistics that consider the prior probability that the hypothesis tested is true, such as Bayesian statistics. The authors show that, under such circumstances, no real statistical significance can be achieved in CAM clinical trials. In this respect, CAM trials involving human material are also hardly defensible from an ethical viewpoint.
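    The gap between a significant p-value and the probability that the tested hypothesis is true can be made concrete with Bayes' theorem. The numbers below are purely illustrative: with a precarious prior plausibility, a "positive" trial at α = 0.05 still leaves the hypothesis more likely false than true.

    ```python
    def posterior_prob_true(prior, power=0.8, alpha=0.05):
        """P(hypothesis true | significant result) by Bayes' theorem:
        power * prior / (power * prior + alpha * (1 - prior))."""
        return power * prior / (power * prior + alpha * (1 - prior))

    # Illustrative prior plausibilities of the tested hypothesis:
    for prior in (0.01, 0.10, 0.50):
        print(f"prior={prior:.2f} -> posterior={posterior_prob_true(prior):.2f}")
    # roughly 0.14, 0.64 and 0.94, respectively
    ```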

  11. Integrated Cognitive-neuroscience Architectures for Understanding Sensemaking (ICArUS): A Computational Basis for ICArUS Challenge Problem Design

    DTIC Science & Technology

    2014-11-01

    Kullback, S., & Leibler, R. (1951). On information and sufficiency. Annals of Mathematical Statistics, 22, 79...cognitive challenges of sensemaking only informally using conceptual notions like "framing" and "re-framing", which are not sufficient to support T&E in...appropriate frame(s) from memory. Assess the Frame: Evaluate the quality of fit between data and frame. Generate Hypotheses: Use the current

  12. Evaluating sufficient similarity for drinking-water disinfection by-product (DBP) mixtures with bootstrap hypothesis test procedures.

    PubMed

    Feder, Paul I; Ma, Zhenxu J; Bull, Richard J; Teuschler, Linda K; Rice, Glenn

    2009-01-01

    In chemical mixtures risk assessment, the use of dose-response data developed for one mixture to estimate risk posed by a second mixture depends on whether the two mixtures are sufficiently similar. While evaluations of similarity may be made using qualitative judgments, this article uses nonparametric statistical methods based on the "bootstrap" resampling technique to address the question of similarity among mixtures of chemical disinfectant by-products (DBP) in drinking water. The bootstrap resampling technique is a general-purpose, computer-intensive approach to statistical inference that substitutes empirical sampling for theoretically based parametric mathematical modeling. Nonparametric, bootstrap-based inference involves fewer assumptions than parametric normal theory based inference. The bootstrap procedure is appropriate, at least in an asymptotic sense, whether or not the parametric, distributional assumptions hold, even approximately. The statistical analysis procedures in this article are initially illustrated with data from 5 water treatment plants (Schenck et al., 2009), and then extended using data developed from a study of 35 drinking-water utilities (U.S. EPA/AMWA, 1989), which permits inclusion of a greater number of water constituents and increased structure in the statistical models.
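    For intuition, a generic bootstrap comparison of two mixtures' mean constituent profiles might look like the sketch below. This is only an illustration of the resampling idea with made-up data and a simple Euclidean distance statistic; it is not the specific procedure developed in the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def profile_distance(x, y):
        """Euclidean distance between the mean constituent profiles of two mixtures."""
        return np.linalg.norm(x.mean(axis=0) - y.mean(axis=0))

    def bootstrap_p_value(x, y, n_boot=5000):
        """Approximate the null distribution of the distance statistic by
        resampling with replacement from the pooled samples."""
        observed = profile_distance(x, y)
        pooled = np.vstack([x, y])
        n_x = len(x)
        exceed = 0
        for _ in range(n_boot):
            idx = rng.integers(0, len(pooled), size=len(pooled))
            resampled = pooled[idx]
            exceed += profile_distance(resampled[:n_x], resampled[n_x:]) >= observed
        return observed, (exceed + 1) / (n_boot + 1)

    # Hypothetical concentrations of four DBP constituents at two sets of plants:
    x = rng.normal(loc=[30, 12, 5, 2], scale=3, size=(10, 4))
    y = rng.normal(loc=[33, 11, 6, 2], scale=3, size=(8, 4))
    print(bootstrap_p_value(x, y))
    ```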

  13. Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions.

    PubMed

    Liu, Hongcheng; Yao, Tao; Li, Runze; Ye, Yinyu

    2017-11-01

    This paper concerns the folded concave penalized sparse linear regression (FCPSLR), a class of popular sparse recovery methods. Although FCPSLR yields desirable recovery performance when solved globally, computing a global solution is NP-complete. Despite some existing statistical performance analyses on local minimizers or on specific FCPSLR-based learning algorithms, open questions remain as to whether local solutions that are known to admit fully polynomial-time approximation schemes (FPTAS) may already be sufficient to ensure the statistical performance, and whether that statistical performance can be non-contingent on the specific designs of computing procedures. To address these questions, this paper presents the following threefold results: (i) Any local solution (stationary point) is a sparse estimator, under some conditions on the parameters of the folded concave penalties. (ii) Perhaps more importantly, any local solution satisfying a significant subspace second-order necessary condition (S3ONC), which is weaker than the second-order KKT condition, yields a bounded error in approximating the true parameter with high probability. In addition, if the minimal signal strength is sufficient, the S3ONC solution likely recovers the oracle solution. This result also explicates that the goal of improving the statistical performance is consistent with the optimization criteria of minimizing the suboptimality gap in solving the non-convex programming formulation of FCPSLR. (iii) We apply (ii) to the special case of FCPSLR with the minimax concave penalty (MCP) and show that under the restricted eigenvalue condition, any S3ONC solution with a better objective value than the Lasso solution entails the strong oracle property. In addition, such a solution generates a model error (ME) comparable to the optimal but exponential-time sparse estimator given a sufficient sample size, while the worst-case ME is comparable to the Lasso in general. Furthermore, computing a solution that satisfies the S3ONC admits an FPTAS.

  14. A Monte Carlo Simulation Comparing the Statistical Precision of Two High-Stakes Teacher Evaluation Methods: A Value-Added Model and a Composite Measure

    ERIC Educational Resources Information Center

    Spencer, Bryden

    2016-01-01

    Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…

  15. Determinants of Whether or not Mixtures of Disinfection By-products are Similar

    EPA Science Inventory

    This project summary and its related publications provide information on the development of chemical, toxicological and statistical criteria for determining the sufficient similarity of complex chemical mixtures.

  16. An Example of an Improvable Rao-Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator.

    PubMed

    Galili, Tal; Meilijson, Isaac

    2016-01-02

    The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.].
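    For readers unfamiliar with the Rao-Blackwell step itself, the simulation below shows the textbook Uniform(0, θ) case: conditioning a crude unbiased estimator on the sufficient statistic max(X) reduces its variance. This is a generic illustration of the procedure, not the specific (incomplete minimal sufficient statistic) example constructed in the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    theta, n, reps = 5.0, 10, 100_000
    x = rng.uniform(0.0, theta, size=(reps, n))

    # Crude unbiased estimator of theta: twice the sample mean.
    crude = 2.0 * x.mean(axis=1)

    # Rao-Blackwell step: E[2*X_1 | max(X) = m] = (n + 1) / n * m for Uniform(0, theta),
    # i.e. the usual unbiased estimator based on the sample maximum.
    rao_blackwell = (n + 1) / n * x.max(axis=1)

    for name, est in [("crude 2*mean", crude), ("Rao-Blackwellized", rao_blackwell)]:
        print(f"{name:>18}: mean={est.mean():.3f}  var={est.var():.4f}")
    # Both are unbiased for theta = 5; the conditioned estimator's variance is far smaller.
    ```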

  17. On information, negentropy and H-theorem

    NASA Astrophysics Data System (ADS)

    Chakrabarti, C. G.; Sarker, N. G.

    1983-09-01

    The paper deals with the importance of the Kullback discrimination information in the statistical characterization of the negentropy of a non-equilibrium state and the irreversibility of a classical dynamical system. The theory, based on the Kullback discrimination information as the H-function, gives new insight into the interrelation between the concepts of coarse-graining and the principle of sufficiency, leading to an important statistical characterization of the thermal equilibrium of a closed system.

  18. Purpose Restrictions on Information Use

    DTIC Science & Technology

    2013-06-03

    “Employees are authorized to access Customer Information for business purposes only.” [5]. The HIPAA Privacy Rule requires that healthcare providers in the...outcomes can be probabilistic since the network does not know what ad will be best for each visitor but does have statistical information about various...beliefs, as such beliefs are a sufficient statistic. Thus, the agent need only consider, for each possible belief β it can have, what action it would

  19. Sufficient condition for a finite-time singularity in a high-symmetry Euler flow: Analysis and statistics

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Bhattacharjee, A.

    1996-08-01

    A sufficient condition is obtained for the development of a finite-time singularity in a highly symmetric Euler flow, first proposed by Kida [J. Phys. Soc. Jpn. 54, 2132 (1985)] and recently simulated by Boratav and Pelz [Phys. Fluids 6, 2757 (1994)]. It is shown that if the second-order spatial derivative of the pressure (p_xx) is positive following a Lagrangian element (on the x axis), then a finite-time singularity must occur. Under some assumptions, this Lagrangian sufficient condition can be reduced to an Eulerian sufficient condition which requires that the fourth-order spatial derivative of the pressure (p_xxxx) at the origin be positive for all times leading up to the singularity. Analytical as well as direct numerical evaluation over a large ensemble of initial conditions demonstrate that for fixed total energy, p_xxxx is predominantly positive with the average value growing with the number of modes.

  20. Narrative Review of Statistical Reporting Checklists, Mandatory Statistical Editing, and Rectifying Common Problems in the Reporting of Scientific Articles.

    PubMed

    Dexter, Franklin; Shafer, Steven L

    2017-03-01

    Considerable attention has been drawn to poor reproducibility in the biomedical literature. One explanation is inadequate reporting of statistical methods by authors and inadequate assessment of statistical reporting and methods during peer review. In this narrative review, we examine scientific studies of several well-publicized efforts to improve statistical reporting. We also review several retrospective assessments of the impact of these efforts. These studies show that instructions to authors and statistical checklists are not sufficient; no findings suggested that either improves the quality of statistical methods and reporting. Second, even basic statistics, such as power analyses, are frequently missing or incorrectly performed. Third, statistical review is needed for all papers that involve data analysis. A consistent finding in the studies was that nonstatistical reviewers (eg, "scientific reviewers") and journal editors generally poorly assess statistical quality. We finish by discussing our experience with statistical review at Anesthesia & Analgesia from 2006 to 2016.

  1. A novel approach for choosing summary statistics in approximate Bayesian computation.

    PubMed

    Aeschbacher, Simon; Beaumont, Mark A; Futschik, Andreas

    2012-11-01

    The choice of summary statistics is a crucial step in approximate Bayesian computation (ABC). Since statistics are often not sufficient, this choice involves a trade-off between loss of information and reduction of dimensionality. The latter may increase the efficiency of ABC. Here, we propose an approach for choosing summary statistics based on boosting, a technique from the machine-learning literature. We consider different types of boosting and compare them to partial least-squares regression as an alternative. To mitigate the lack of sufficiency, we also propose an approach for choosing summary statistics locally, in the putative neighborhood of the true parameter value. We study a demographic model motivated by the reintroduction of Alpine ibex (Capra ibex) into the Swiss Alps. The parameters of interest are the mean and standard deviation across microsatellites of the scaled ancestral mutation rate (θ_anc = 4N_e u) and the proportion of males obtaining access to matings per breeding season (ω). By simulation, we assess the properties of the posterior distribution obtained with the various methods. According to our criteria, ABC with summary statistics chosen locally via boosting with the L2-loss performs best. Applying that method to the ibex data, we estimate θ_anc ≈ 1.288 and find that most of the variation across loci of the ancestral mutation rate u is between 7.7 × 10⁻⁴ and 3.5 × 10⁻³ per locus per generation. The proportion of males with access to matings is estimated as ω ≈ 0.21, which is in good agreement with recent independent estimates.

  2. A Novel Approach for Choosing Summary Statistics in Approximate Bayesian Computation

    PubMed Central

    Aeschbacher, Simon; Beaumont, Mark A.; Futschik, Andreas

    2012-01-01

    The choice of summary statistics is a crucial step in approximate Bayesian computation (ABC). Since statistics are often not sufficient, this choice involves a trade-off between loss of information and reduction of dimensionality. The latter may increase the efficiency of ABC. Here, we propose an approach for choosing summary statistics based on boosting, a technique from the machine-learning literature. We consider different types of boosting and compare them to partial least-squares regression as an alternative. To mitigate the lack of sufficiency, we also propose an approach for choosing summary statistics locally, in the putative neighborhood of the true parameter value. We study a demographic model motivated by the reintroduction of Alpine ibex (Capra ibex) into the Swiss Alps. The parameters of interest are the mean and standard deviation across microsatellites of the scaled ancestral mutation rate (θ_anc = 4N_e u) and the proportion of males obtaining access to matings per breeding season (ω). By simulation, we assess the properties of the posterior distribution obtained with the various methods. According to our criteria, ABC with summary statistics chosen locally via boosting with the L2-loss performs best. Applying that method to the ibex data, we estimate θ̂_anc ≈ 1.288 and find that most of the variation across loci of the ancestral mutation rate u is between 7.7 × 10⁻⁴ and 3.5 × 10⁻³ per locus per generation. The proportion of males with access to matings is estimated as ω̂ ≈ 0.21, which is in good agreement with recent independent estimates. PMID:22960215
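    The basic ABC machinery that the summary-statistic choice plugs into can be sketched in a few lines. The toy model, prior, summaries, and tolerance below are all hypothetical, and the boosting-based selection of statistics described in the abstract is not implemented here.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate(theta, n=100):
        """Toy simulator: a sample from a Normal(theta, 1) population."""
        return rng.normal(theta, 1.0, size=n)

    def summaries(x):
        """Candidate summary statistics (not sufficient in general)."""
        return np.array([x.mean(), np.median(x)])

    obs = simulate(1.5)                 # 'observed' data with a known parameter
    s_obs = summaries(obs)

    # Rejection ABC: draw from the prior, simulate, and keep parameter values whose
    # simulated summaries fall within a tolerance of the observed summaries.
    n_draws, eps = 50_000, 0.05
    prior_draws = rng.uniform(-5.0, 5.0, size=n_draws)
    accepted = np.array([t for t in prior_draws
                         if np.linalg.norm(summaries(simulate(t)) - s_obs) < eps])
    print(len(accepted), accepted.mean(), accepted.std())
    ```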

  3. Determinants of 25(OH)D sufficiency in obese minority children: selecting outcome measures and analytic approaches.

    PubMed

    Zhou, Ping; Schechter, Clyde; Cai, Ziyong; Markowitz, Morri

    2011-06-01

    To highlight complexities in defining vitamin D sufficiency in children. Serum 25-(OH) vitamin D [25(OH)D] levels from 140 healthy obese children age 6 to 21 years living in the inner city were compared with multiple health outcome measures, including bone biomarkers and cardiovascular risk factors. Several statistical analytic approaches were used, including Pearson correlation, analysis of covariance (ANCOVA), and "hockey stick" regression modeling. Potential threshold levels for vitamin D sufficiency varied by outcome variable and analytic approach. Only systolic blood pressure (SBP) was significantly correlated with 25(OH)D (r = -0.261; P = .038). ANCOVA revealed that SBP and triglyceride levels were statistically significant in the test groups [25(OH)D <10, <15 and <20 ng/mL] compared with the reference group [25(OH)D >25 ng/mL]. ANCOVA also showed that only children with severe vitamin D deficiency [25(OH)D <10 ng/mL] had significantly higher parathyroid hormone levels (Δ = 15; P = .0334). Hockey stick model regression analyses found evidence of a threshold level in SBP, with a 25(OH)D breakpoint of 27 ng/mL, along with a 25(OH)D breakpoint of 18 ng/mL for triglycerides, but no relationship between 25(OH)D and parathyroid hormone. Defining vitamin D sufficiency should take into account different vitamin D-related health outcome measures and analytic methodologies. Copyright © 2011 Mosby, Inc. All rights reserved.

  4. UNCERTAINTY ON RADIATION DOSES ESTIMATED BY BIOLOGICAL AND RETROSPECTIVE PHYSICAL METHODS.

    PubMed

    Ainsbury, Elizabeth A; Samaga, Daniel; Della Monaca, Sara; Marrale, Maurizio; Bassinet, Celine; Burbidge, Christopher I; Correcher, Virgilio; Discher, Michael; Eakins, Jon; Fattibene, Paola; Güçlü, Inci; Higueras, Manuel; Lund, Eva; Maltar-Strmecki, Nadica; McKeever, Stephen; Rääf, Christopher L; Sholom, Sergey; Veronese, Ivan; Wieser, Albrecht; Woda, Clemens; Trompier, Francois

    2018-03-01

    Biological and physical retrospective dosimetry are recognised as key techniques to provide individual estimates of dose following unplanned exposures to ionising radiation. Whilst there has been a relatively large amount of recent development in the biological and physical procedures, development of statistical analysis techniques has failed to keep pace. The aim of this paper is to review the current state of the art in uncertainty analysis techniques across the 'EURADOS Working Group 10-Retrospective dosimetry' members, to give concrete examples of implementation of the techniques recommended in the international standards, and to further promote the use of Monte Carlo techniques to support characterisation of uncertainties. It is concluded that sufficient techniques are available and in use by most laboratories for acute, whole body exposures to highly penetrating radiation, but further work will be required to ensure that statistical analysis is always wholly sufficient for the more complex exposure scenarios.

  5. Reliability and statistical power analysis of cortical and subcortical FreeSurfer metrics in a large sample of healthy elderly.

    PubMed

    Liem, Franziskus; Mérillat, Susan; Bezzola, Ladina; Hirsiger, Sarah; Philipp, Michel; Madhyastha, Tara; Jäncke, Lutz

    2015-03-01

    FreeSurfer is a tool to quantify cortical and subcortical brain anatomy automatically and noninvasively. Previous studies have reported reliability and statistical power analyses in relatively small samples or only selected one aspect of brain anatomy. Here, we investigated reliability and statistical power of cortical thickness, surface area, volume, and the volume of subcortical structures in a large sample (N=189) of healthy elderly subjects (64+ years). Reliability (intraclass correlation coefficient) of cortical and subcortical parameters is generally high (cortical: ICCs>0.87, subcortical: ICCs>0.95). Surface-based smoothing increases reliability of cortical thickness maps, while it decreases reliability of cortical surface area and volume. Nevertheless, statistical power of all measures benefits from smoothing. When aiming to detect a 10% difference between groups, the number of subjects required to test effects with sufficient power over the entire cortex varies between cortical measures (cortical thickness: N=39, surface area: N=21, volume: N=81; 10mm smoothing, power=0.8, α=0.05). For subcortical regions this number is between 16 and 76 subjects, depending on the region. We also demonstrate the advantage of within-subject designs over between-subject designs. Furthermore, we publicly provide a tool that allows researchers to perform a priori power analysis and sensitivity analysis to help evaluate previously published studies and to design future studies with sufficient statistical power. Copyright © 2014 Elsevier Inc. All rights reserved.
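    The kind of a priori calculation the authors' tool supports can be approximated with the usual normal-approximation sample-size formula for a two-group comparison. The effect sizes below are purely illustrative; in practice, a 10% difference in a FreeSurfer measure maps to a Cohen's d that depends on that measure's between-subject variability and on smoothing.

    ```python
    from scipy.stats import norm

    def n_per_group(effect_size, alpha=0.05, power=0.80):
        """Approximate per-group sample size for a two-sample comparison:
        n = 2 * ((z_{1 - alpha/2} + z_{power}) / d) ** 2, with d = Cohen's d."""
        z_a = norm.ppf(1.0 - alpha / 2.0)
        z_b = norm.ppf(power)
        return 2.0 * ((z_a + z_b) / effect_size) ** 2

    for d in (0.5, 0.8, 1.0):                 # illustrative effect sizes
        print(f"d={d}: ~{n_per_group(d):.0f} subjects per group")
    ```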

  6. Statistical Approaches to Assess Biosimilarity from Analytical Data.

    PubMed

    Burdick, Richard; Coffey, Todd; Gutka, Hiten; Gratzl, Gyöngyi; Conlon, Hugh D; Huang, Chi-Ting; Boyne, Michael; Kuehne, Henriette

    2017-01-01

    Protein therapeutics have unique critical quality attributes (CQAs) that define their purity, potency, and safety. The analytical methods used to assess CQAs must be able to distinguish clinically meaningful differences in comparator products, and the most important CQAs should be evaluated with the most statistical rigor. High-risk CQA measurements assess the most important attributes that directly impact the clinical mechanism of action or have known implications for safety, while the moderate- to low-risk characteristics may have a lower direct impact and thereby may have a broader range to establish similarity. Statistical equivalence testing is applied for high-risk CQA measurements to establish the degree of similarity (e.g., highly similar fingerprint, highly similar, or similar) of selected attributes. Notably, some high-risk CQAs (e.g., primary sequence or disulfide bonding) are qualitative (e.g., the same as the originator or not the same) and therefore not amenable to equivalence testing. For biosimilars, an important step is the acquisition of a sufficient number of unique originator drug product lots to measure the variability in the originator drug manufacturing process and provide sufficient statistical power for the analytical data comparisons. Together, these analytical evaluations, along with PK/PD and safety data (immunogenicity), provide the data necessary to determine if the totality of the evidence warrants a designation of biosimilarity and subsequent licensure for marketing in the USA. In this paper, a case study approach is used to provide examples of analytical similarity exercises and the appropriateness of statistical approaches for the example data.
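    Statistical equivalence testing for a quantitative CQA is commonly carried out as two one-sided tests (TOST) against a pre-specified margin. The sketch below is a generic TOST with hypothetical potency data and an illustrative margin of 5 percentage points; it is not the specific similarity criterion or margin used in any regulatory exercise.

    ```python
    import numpy as np
    from scipy import stats

    def tost(reference, candidate, margin):
        """Two one-sided tests for equivalence of means within +/- margin.
        Returns the mean difference and the larger one-sided p-value; equivalence
        is concluded when that p-value is below alpha."""
        n1, n2 = len(reference), len(candidate)
        diff = np.mean(candidate) - np.mean(reference)
        se = np.sqrt(np.var(reference, ddof=1) / n1 + np.var(candidate, ddof=1) / n2)
        df = n1 + n2 - 2                                 # simple df approximation
        p_lower = stats.t.sf((diff + margin) / se, df)   # H0: diff <= -margin
        p_upper = stats.t.sf((margin - diff) / se, df)   # H0: diff >= +margin
        return diff, max(p_lower, p_upper)

    # Hypothetical relative-potency measurements (%) for originator and biosimilar lots:
    rng = np.random.default_rng(3)
    originator = rng.normal(100.0, 2.0, size=12)
    biosimilar = rng.normal(100.8, 2.0, size=10)
    print(tost(originator, biosimilar, margin=5.0))
    ```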

  7. Contribution of Apollo lunar photography to the establishment of selenodetic control

    NASA Technical Reports Server (NTRS)

    Dermanis, A.

    1975-01-01

    Among the various types of available data relevant to the establishment of geometric control on the moon, the only one covering a significant portion of the lunar surface (20%) with sufficient information content is lunar photography taken in the proximity of the moon from lunar orbiters. The idea of free geodetic networks is introduced as a tool for the statistical comparison of the geometric aspects of the various data used. Methods were developed for updating the statistics of observations and the a priori parameter estimates to obtain statistically consistent solutions by means of the optimum relative weighting concept.

  8. [Lymphocytic infiltration in uveal melanoma].

    PubMed

    Sach, J; Kocur, J

    1993-11-01

    Following our observation of lymphocytic infiltration in uveal melanomas, we present a theoretical review of this interesting topic. Due to the relatively low incidence of this feature, we have not yet assembled a sufficiently large collection of cases to present statistically significant conclusions of our own.

  9. How Statisticians Speak Risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redus, K.S.

    2007-07-01

    The foundation of statistics deals with (a) how to measure and collect data and (b) how to identify models using estimates of statistical parameters derived from the data. Risk is a term used by the statistical community and those that employ statistics to express the results of a statistically based study. Statistical risk is represented as a probability that, for example, a statistical model is sufficient to describe a data set; but risk is also interpreted as a measure of worth of one alternative when compared to another. The common thread of any risk-based problem is the combination of (a) the chance an event will occur with (b) the value of the event. This paper presents an introduction to, and some examples of, statistical risk-based decision making from a quantitative, visual, and linguistic sense. This should help in understanding areas of radioactive waste management that can be suitably expressed using statistical risk and vice-versa. (authors)

  10. Specious causal attributions in the social sciences: the reformulated stepping-stone theory of heroin use as exemplar.

    PubMed

    Baumrind, D

    1983-12-01

    The claims based on causal models employing either statistical or experimental controls are examined and found to be excessive when applied to social or behavioral science data. An exemplary case, in which strong causal claims are made on the basis of a weak version of the regularity model of cause, is critiqued. O'Donnell and Clayton claim that in order to establish that marijuana use is a cause of heroin use (their "reformulated stepping-stone" hypothesis), it is necessary and sufficient to demonstrate that marijuana use precedes heroin use and that the statistically significant association between the two does not vanish when the effects of other variables deemed to be prior to both of them are removed. I argue that O'Donnell and Clayton's version of the regularity model is not sufficient to establish cause and that the planning of social interventions both presumes and requires a generative rather than a regularity causal model. Causal modeling using statistical controls is of value when it compels the investigator to make explicit and to justify a causal explanation but not when it is offered as a substitute for a generative analysis of causal connection.

  11. Product plots.

    PubMed

    Wickham, Hadley; Hofmann, Heike

    2011-12-01

    We propose a new framework for visualising tables of counts, proportions and probabilities. We call our framework product plots, alluding to the computation of area as a product of height and width, and the statistical concept of generating a joint distribution from the product of conditional and marginal distributions. The framework, with extensions, is sufficient to encompass over 20 visualisations previously described in fields of statistical graphics and infovis, including bar charts, mosaic plots, treemaps, equal area plots and fluctuation diagrams. © 2011 IEEE

  12. The role of reference in cross-situational word learning.

    PubMed

    Wang, Felix Hao; Mintz, Toben H

    2018-01-01

    Word learning involves massive ambiguity, since in a particular encounter with a novel word, there are an unlimited number of potential referents. One proposal for how learners surmount the problem of ambiguity is that learners use cross-situational statistics to constrain the ambiguity: When a word and its referent co-occur across multiple situations, learners will associate the word with the correct referent. Yu and Smith (2007) propose that these co-occurrence statistics are sufficient for word-to-referent mapping. Alternative accounts hold that co-occurrence statistics alone are insufficient to support learning, and that learners are further guided by knowledge that words are referential (e.g., Waxman & Gelman, 2009). However, no behavioral word learning studies we are aware of explicitly manipulate subjects' prior assumptions about the role of the words in the experiments in order to test the influence of these assumptions. In this study, we directly test whether, when faced with referential ambiguity, co-occurrence statistics are sufficient for word-to-referent mappings in adult word-learners. Across a series of cross-situational learning experiments, we varied the degree to which there was support for the notion that the words were referential. At the same time, the statistical information about the words' meanings was held constant. When we overrode support for the notion that words were referential, subjects failed to learn the word-to-referent mappings, but otherwise they succeeded. Thus, cross-situational statistics were useful only when learners had the goal of discovering mappings between words and referents. We discuss the implications of these results for theories of word learning in children's language acquisition. Copyright © 2017 Elsevier B.V. All rights reserved.
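    The co-occurrence bookkeeping at the heart of cross-situational accounts is simple to state in code. The sketch below tallies word-referent co-occurrences over ambiguous trials and maps each word to its most frequent referent; the word forms and referents are made up, and the learner here has no notion of reference beyond the counts, which is exactly the assumption the experiments probe.

    ```python
    from collections import defaultdict

    def cross_situational_learner(trials):
        """Accumulate word-referent co-occurrence counts across ambiguous trials
        and map each word to its most frequently co-occurring referent."""
        counts = defaultdict(lambda: defaultdict(int))
        for words, referents in trials:
            for w in words:
                for r in referents:
                    counts[w][r] += 1
        return {w: max(refs, key=refs.get) for w, refs in counts.items()}

    # No single trial is unambiguous, but the statistics across trials are:
    trials = [
        ({"bosa", "gasser"}, {"dog", "ball"}),
        ({"bosa", "manu"},   {"dog", "cup"}),
        ({"gasser", "manu"}, {"ball", "cup"}),
        ({"bosa", "gasser"}, {"dog", "ball"}),
    ]
    print(cross_situational_learner(trials))   # bosa->dog, gasser->ball, manu->cup
    ```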

  13. [Application of statistics on chronic-diseases-relating observational research papers].

    PubMed

    Hong, Zhi-heng; Wang, Ping; Cao, Wei-hua

    2012-09-01

    To assess the application of statistics in observational research papers on chronic diseases recently published in Chinese Medical Association journals with an impact factor above 0.5. Using a self-developed criterion, two investigators independently assessed the statistical content of each paper; disagreements were resolved through discussion. A total of 352 papers from 6 journals, including the Chinese Journal of Epidemiology, Chinese Journal of Oncology, Chinese Journal of Preventive Medicine, Chinese Journal of Cardiology, Chinese Journal of Internal Medicine and Chinese Journal of Endocrinology and Metabolism, were reviewed. The rates of clearly stating the research objectives, target population, sampling issues, inclusion criteria and variable definitions were 99.43%, 98.57%, 95.43%, 92.86% and 96.87%, respectively. The rates of correctly describing quantitative and qualitative data were 90.94% and 91.46%, respectively. The rates of correctly expressing the results of statistical inference for quantitative data, qualitative data and modeling were 100%, 95.32% and 87.19%, respectively. 89.49% of the conclusions responded directly to the research objectives. However, 69.60% of the papers did not name the statistical study design used, and 11.14% lacked a statement of the exclusion criteria. Only 5.16% of the papers clearly explained the sample size estimation, and only 24.21% clearly described the variable value assignment. The rate of describing the statistical analysis process and database methods was only 24.15%. 18.75% of the papers did not describe their statistical inference methods sufficiently, and a quarter did not use standardization appropriately. Regarding statistical inference, only 24.12% of the papers described the prerequisites of the statistical tests used, while 9.94% did not employ the statistical inference method that should have been used. The main deficiencies in the application of statistics in observational research papers on chronic diseases were: lack of sample size determination, insufficient description of variable value assignment, statistical methods not introduced clearly or properly, and lack of consideration of the prerequisites for statistical inference.

  14. Statistical Learning in a Natural Language by 8-Month-Old Infants

    PubMed Central

    Pelucchi, Bruna; Hay, Jessica F.; Saffran, Jenny R.

    2013-01-01

    Numerous studies over the past decade support the claim that infants are equipped with powerful statistical language learning mechanisms. The primary evidence for statistical language learning in word segmentation comes from studies using artificial languages, continuous streams of synthesized syllables that are highly simplified relative to real speech. To what extent can these conclusions be scaled up to natural language learning? In the current experiments, English-learning 8-month-old infants’ ability to track transitional probabilities in fluent infant-directed Italian speech was tested (N = 72). The results suggest that infants are sensitive to transitional probability cues in unfamiliar natural language stimuli, and support the claim that statistical learning is sufficiently robust to support aspects of real-world language acquisition. PMID:19489896

  15. Statistical learning in a natural language by 8-month-old infants.

    PubMed

    Pelucchi, Bruna; Hay, Jessica F; Saffran, Jenny R

    2009-01-01

    Numerous studies over the past decade support the claim that infants are equipped with powerful statistical language learning mechanisms. The primary evidence for statistical language learning in word segmentation comes from studies using artificial languages, continuous streams of synthesized syllables that are highly simplified relative to real speech. To what extent can these conclusions be scaled up to natural language learning? In the current experiments, English-learning 8-month-old infants' ability to track transitional probabilities in fluent infant-directed Italian speech was tested (N = 72). The results suggest that infants are sensitive to transitional probability cues in unfamiliar natural language stimuli, and support the claim that statistical learning is sufficiently robust to support aspects of real-world language acquisition.

  16. Statistically rigorous calculations do not support common input and long-term synchronization of motor-unit firings

    PubMed Central

    Kline, Joshua C.

    2014-01-01

    Over the past four decades, various methods have been implemented to measure synchronization of motor-unit firings. In this work, we provide evidence that prior reports of the existence of universal common inputs to all motoneurons and the presence of long-term synchronization are misleading, because they did not use sufficiently rigorous statistical tests to detect synchronization. We developed a statistically based method (SigMax) for computing synchronization and tested it with data from 17,736 motor-unit pairs containing 1,035,225 firing instances from the first dorsal interosseous and vastus lateralis muscles—a data set one order of magnitude greater than that reported in previous studies. Only firing data, obtained from surface electromyographic signal decomposition with >95% accuracy, were used in the study. The data were not subjectively selected in any manner. Because of the size of our data set and the statistical rigor inherent to SigMax, we have confidence that the synchronization values that we calculated provide an improved estimate of physiologically driven synchronization. Compared with three other commonly used techniques, ours revealed three types of discrepancies that result from failing to use sufficient statistical tests necessary to detect synchronization. 1) On average, the z-score method falsely detected synchronization at 16 separate latencies in each motor-unit pair. 2) The cumulative sum method missed one out of every four synchronization identifications found by SigMax. 3) The common input assumption method identified synchronization from 100% of motor-unit pairs studied. SigMax revealed that only 50% of motor-unit pairs actually manifested synchronization. PMID:25210152

  17. The beta distribution: A statistical model for world cloud cover

    NASA Technical Reports Server (NTRS)

    Falls, L. W.

    1973-01-01

    Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution, with its probability density function, is proposed to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated, and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
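
    A hedged sketch of how such a fit could be carried out with standard tools (synthetic fractions stand in for the study's 160 empirical distributions; this is not the original estimation procedure):

```python
# Minimal sketch: fit a beta distribution to fractional cloud-cover observations
# (values in (0, 1)) and check the fit against the empirical distribution.
# Illustrative only; synthetic data stand in for the study's station records.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cloud_cover = rng.beta(0.4, 0.6, size=1000)          # hypothetical observations

# Fix the support to [0, 1] so only the two shape parameters are estimated.
a, b, loc, scale = stats.beta.fit(cloud_cover, floc=0, fscale=1)
print(f"estimated shapes: a={a:.2f}, b={b:.2f}")

# Goodness of fit: Kolmogorov-Smirnov test against the fitted beta.
ks = stats.kstest(cloud_cover, "beta", args=(a, b, loc, scale))
print(f"KS statistic={ks.statistic:.3f}, p-value={ks.pvalue:.3f}")
```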

  18. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
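
    A minimal sketch of the central idea, under the assumption of a single long record segmented into equal sub-records (the paper's specific variance tests for independence and ergodicity are not reproduced):

```python
# Minimal sketch of forming an "equivalent ensemble" from one long record and
# checking the time invariance of its ensemble mean. Illustrative only.
import numpy as np

def equivalent_ensemble(x, n_records):
    """Segment a long time history into n_records equal-length sample records."""
    usable = (len(x) // n_records) * n_records
    return x[:usable].reshape(n_records, -1)   # shape: (records, time)

rng = np.random.default_rng(1)
signal = rng.normal(size=200_000)              # hypothetical turbulence record

ensemble = equivalent_ensemble(signal, n_records=100)
ens_mean = ensemble.mean(axis=0)               # ensemble average at each time
ens_var = ensemble.var(axis=0)                 # ensemble variance at each time

# For a weakly stationary process the ensemble averages should not drift in time:
# compare their spread across time with the sampling variability expected from a
# finite ensemble of 100 records.
print("drift of ensemble mean over time:", ens_mean.std())
print("expected sampling std of the mean:", np.sqrt(ens_var.mean() / 100))
```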

  19. ADEQUACY OF VISUALLY CLASSIFIED PARTICLE COUNT STATISTICS FROM REGIONAL STREAM HABITAT SURVEYS

    EPA Science Inventory

    Streamlined sampling procedures must be used to achieve a sufficient sample size with limited resources in studies undertaken to evaluate habitat status and potential management-related habitat degradation at a regional scale. At the same time, these sampling procedures must achi...

  20. Statistical analysis of water-quality data containing multiple detection limits: S-language software for regression on order statistics

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2005-01-01

    Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. ?? 2005 Elsevier Ltd. All rights reserved.
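
    A simplified sketch of regression on order statistics for a single detection limit (the software described handles multiple detection limits with adjusted plotting positions; the data and the Weibull plotting position below are illustrative assumptions):

```python
# Simplified sketch of regression on order statistics (ROS) for one detection
# limit: fit log-concentration vs. normal quantiles using uncensored values,
# then impute the censored values from the fitted line. Data are hypothetical.
import numpy as np
from scipy import stats

values = np.array([0.5, 0.5, 0.5, 0.8, 1.2, 1.9, 2.4, 3.1, 5.6, 8.2])  # 0.5 = DL
censored = np.array([True, True, True, False, False, False,
                     False, False, False, False])

n = len(values)
order = np.argsort(values, kind="stable")
pp = np.arange(1, n + 1) / (n + 1)        # Weibull plotting positions i/(n+1)
z = stats.norm.ppf(pp)                    # corresponding normal quantiles

# Fit log10(concentration) vs z using the uncensored observations only.
unc_mask = ~censored[order]
slope, intercept, *_ = stats.linregress(z[unc_mask], np.log10(values[order][unc_mask]))

# Impute censored values from the fitted line at their plotting positions.
imputed = values[order].astype(float)
imputed[~unc_mask] = 10 ** (intercept + slope * z[~unc_mask])

print("ROS-style mean:", imputed.mean(), " median:", np.median(imputed))
```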

  1. Innocent Until Proven Guilty

    ERIC Educational Resources Information Center

    Case, Catherine; Whitaker, Douglas

    2016-01-01

    In the criminal justice system, defendants accused of a crime are presumed innocent until proven guilty. Statistical inference in any context is built on an analogous principle: The null hypothesis--often a hypothesis of "no difference" or "no effect"--is presumed true unless there is sufficient evidence against it. In this…

  2. Spatial statistical network models for stream and river temperature in New England, USA

    EPA Science Inventory

    Watershed managers are challenged by the need for predictive temperature models with sufficient accuracy and geographic breadth for practical use. We described thermal regimes of New England rivers and streams based on a reduced set of metrics for the May–September growing ...

  3. 77 FR 75196 - Proposed Collection, Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-19

    ... from the States' UI accounting files are sufficient for statistical purposes. However, such data are... BLS also has been working very closely with firms providing payroll and tax filing services for... [remainder is a flattened respondent-burden table fragment: respondents, responses per respondent, hours per response; BLS 3020 (MWR) 133,191; Non-Federal; 532,764; 22.2]

  4. Anomaly detection of turbopump vibration in Space Shuttle Main Engine using statistics and neural networks

    NASA Technical Reports Server (NTRS)

    Lo, C. F.; Wu, K.; Whitehead, B. A.

    1993-01-01

    Statistical and neural network methods have been applied to investigate the feasibility of detecting anomalies in SSME turbopump vibration. Anomalies are detected from the amplitudes of the peaks at fundamental and harmonic frequencies in the power spectral density, which are reduced to the proper format from sensor data measured by strain gauges and accelerometers. Both methods are feasible for detecting the vibration anomalies. The statistical method requires sufficient data points to establish a reasonable statistical distribution data bank and is applicable to on-line operation. The neural network method likewise needs enough data to train the networks. The testing procedure can be utilized at any time, so long as the characteristics of the components remain unchanged.
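
    A hedged sketch of the statistical side of such a scheme, with synthetic signals and an assumed tracked frequency standing in for the SSME strain-gauge and accelerometer data:

```python
# Minimal sketch: compare the PSD peak amplitude at a tracked frequency against
# a baseline distribution built from nominal runs, flagging large deviations.
import numpy as np
from scipy import signal

FS = 10_000          # sample rate, Hz (assumed)
F_TRACK = 600.0      # tracked fundamental frequency, Hz (assumed)

def peak_amplitude(x, fs=FS, f0=F_TRACK, band=20.0):
    """PSD peak amplitude in a narrow band around the tracked frequency."""
    f, pxx = signal.welch(x, fs=fs, nperseg=4096)
    sel = (f > f0 - band) & (f < f0 + band)
    return pxx[sel].max()

rng = np.random.default_rng(2)
t = np.arange(0, 2.0, 1 / FS)

def run(amp):  # synthetic "run": tone at F_TRACK plus broadband noise
    return amp * np.sin(2 * np.pi * F_TRACK * t) + rng.normal(scale=0.5, size=t.size)

baseline = np.array([peak_amplitude(run(1.0)) for _ in range(30)])  # nominal data bank
threshold = baseline.mean() + 3 * baseline.std()                    # simple 3-sigma rule

test_amp = peak_amplitude(run(1.6))   # candidate anomalous run
print("anomaly" if test_amp > threshold else "nominal")
```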

  5. Edge co-occurrences can account for rapid categorization of natural versus animal images

    NASA Astrophysics Data System (ADS)

    Perrinet, Laurent U.; Bednar, James A.

    2015-06-01

    Making a judgment about the semantic category of a visual scene, such as whether it contains an animal, is typically assumed to involve high-level associative brain areas. Previous explanations require progressively analyzing the scene hierarchically at increasing levels of abstraction, from edge extraction to mid-level object recognition and then object categorization. Here we show that the statistics of edge co-occurrences alone are sufficient to perform a rough yet robust (translation, scale, and rotation invariant) scene categorization. We first extracted the edges from images using a scale-space analysis coupled with a sparse coding algorithm. We then computed the “association field” for different categories (natural, man-made, or containing an animal) by computing the statistics of edge co-occurrences. These differed strongly, with animal images having more curved configurations. We show that this geometry alone is sufficient for categorization, and that the pattern of errors made by humans is consistent with this procedure. Because these statistics could be measured as early as the primary visual cortex, the results challenge widely held assumptions about the flow of computations in the visual system. The results also suggest new algorithms for image classification and signal processing that exploit correlations between low-level structure and the underlying semantic category.

  6. Weighting Statistical Inputs for Data Used to Support Effective Decision Making During Severe Emergency Weather and Environmental Events

    NASA Technical Reports Server (NTRS)

    Gardner, Adrian

    2010-01-01

    National Aeronautics and Space Administration (NASA) weather and atmospheric environmental organizations are insatiable consumers of geophysical, hydrometeorological and solar weather statistics. The expanding array of internet-worked sensors producing targeted physical measurements has generated an almost factorial explosion of near real-time inputs to topical statistical datasets. Normalizing and value-based parsing of such statistical datasets in support of time-constrained weather and environmental alerts and warnings is essential, even with dedicated high-performance computational capabilities. What are the optimal indicators for advanced decision making? How do we recognize the line between sufficient statistical sampling and excessive, mission-destructive sampling? How do we assure that the normalization and parsing process, when interpolated through numerical models, yields accurate and actionable alerts and warnings? This presentation will address the integrated means and methods to achieve desired outputs for NASA and consumers of its data.

  7. Distinguishing Positive Selection From Neutral Evolution: Boosting the Performance of Summary Statistics

    PubMed Central

    Lin, Kao; Li, Haipeng; Schlötterer, Christian; Futschik, Andreas

    2011-01-01

    Summary statistics are widely used in population genetics, but they suffer from the drawback that no simple sufficient summary statistic exists, which captures all information required to distinguish different evolutionary hypotheses. Here, we apply boosting, a recent statistical method that combines simple classification rules to maximize their joint predictive performance. We show that our implementation of boosting has a high power to detect selective sweeps. Demographic events, such as bottlenecks, do not result in a large excess of false positives. A comparison shows that our boosting implementation performs well relative to other neutrality tests. Furthermore, we evaluated the relative contribution of different summary statistics to the identification of selection and found that for recent sweeps integrated haplotype homozygosity is very informative whereas older sweeps are better detected by Tajima's π. Overall, Watterson's θ was found to contribute the most information for distinguishing between bottlenecks and selection. PMID:21041556

  8. Statistically rigorous calculations do not support common input and long-term synchronization of motor-unit firings.

    PubMed

    De Luca, Carlo J; Kline, Joshua C

    2014-12-01

    Over the past four decades, various methods have been implemented to measure synchronization of motor-unit firings. In this work, we provide evidence that prior reports of the existence of universal common inputs to all motoneurons and the presence of long-term synchronization are misleading, because they did not use sufficiently rigorous statistical tests to detect synchronization. We developed a statistically based method (SigMax) for computing synchronization and tested it with data from 17,736 motor-unit pairs containing 1,035,225 firing instances from the first dorsal interosseous and vastus lateralis muscles--a data set one order of magnitude greater than that reported in previous studies. Only firing data, obtained from surface electromyographic signal decomposition with >95% accuracy, were used in the study. The data were not subjectively selected in any manner. Because of the size of our data set and the statistical rigor inherent to SigMax, we have confidence that the synchronization values that we calculated provide an improved estimate of physiologically driven synchronization. Compared with three other commonly used techniques, ours revealed three types of discrepancies that result from failing to use sufficient statistical tests necessary to detect synchronization. 1) On average, the z-score method falsely detected synchronization at 16 separate latencies in each motor-unit pair. 2) The cumulative sum method missed one out of every four synchronization identifications found by SigMax. 3) The common input assumption method identified synchronization from 100% of motor-unit pairs studied. SigMax revealed that only 50% of motor-unit pairs actually manifested synchronization. Copyright © 2014 the American Physiological Society.

  9. Modeling Human-Computer Decision Making with Covariance Structure Analysis.

    ERIC Educational Resources Information Center

    Coovert, Michael D.; And Others

    Arguing that sufficient theory exists about the interplay between human information processing, computer systems, and the demands of various tasks to construct useful theories of human-computer interaction, this study presents a structural model of human-computer interaction and reports the results of various statistical analyses of this model.…

  10. Stochastic and Historical Resonances of the Unit in Physics and Psychometrics

    ERIC Educational Resources Information Center

    Fisher, William P., Jr.

    2011-01-01

    Humphry's article, "The Role of the Unit in Physics and Psychometrics," offers fundamental clarifications of measurement concepts that Fisher hopes will find a wide audience. In particular, parameterizing discrimination while preserving statistical sufficiency will indeed provide greater flexibility in accounting "for the effects of empirical…

  11. Is Neurofeedback an Efficacious Treatment for ADHD? A Randomised Controlled Clinical Trial

    ERIC Educational Resources Information Center

    Gevensleben, Holger; Holl, Birgit; Albrecht, Bjorn; Vogel, Claudia; Schlamp, Dieter; Kratz, Oliver; Studer, Petra; Rothenberger, Aribert; Moll, Gunther H.; Heinrich, Hartmut

    2009-01-01

    Background: For children with attention deficit/hyperactivity disorder (ADHD), a reduction of inattention, impulsivity and hyperactivity by neurofeedback (NF) has been reported in several studies. But so far, unspecific training effects have not been adequately controlled for and/or studies do not provide sufficient statistical power. To overcome…

  12. A Methodological Review of Structural Equation Modelling in Higher Education Research

    ERIC Educational Resources Information Center

    Green, Teegan

    2016-01-01

    Despite increases in the number of articles published in higher education journals using structural equation modelling (SEM), research addressing their statistical sufficiency, methodological appropriateness and quantitative rigour is sparse. In response, this article provides a census of all covariance-based SEM articles published up until 2013…

  13. Estimating Common Parameters of Lognormally Distributed Environmental and Biomonitoring Data: Harmonizing Disparate Statistics from Publications

    EPA Science Inventory

    The progression of science is driven by the accumulation of knowledge and builds upon published work of others. Another important feature is to place current results into the context of previous observations. The published literature, however, often does not provide sufficient di...

  14. Statistical error propagation in ab initio no-core full configuration calculations of light nuclei

    DOE PAGES

    Navarro Pérez, R.; Amaro, J. E.; Ruiz Arriola, E.; ...

    2015-12-28

    We propagate the statistical uncertainty of experimental NN scattering data into the binding energies of 3H and 4He. We also study the sensitivity of the magnetic moment and proton radius of 3H to changes in the NN interaction. The calculations are made with the no-core full configuration method in a sufficiently large harmonic oscillator basis. For these light nuclei we obtain ΔEstat(3H) = 0.015 MeV and ΔEstat(4He) = 0.055 MeV.

  15. Reciprocity in directed networks

    NASA Astrophysics Data System (ADS)

    Yin, Mei; Zhu, Lingjiong

    2016-04-01

    Reciprocity is an important characteristic of directed networks and has been widely used in the modeling of World Wide Web, email, social, and other complex networks. In this paper, we take a statistical physics point of view and study the limiting entropy and free energy densities from the microcanonical ensemble, the canonical ensemble, and the grand canonical ensemble whose sufficient statistics are given by edge and reciprocal densities. The sparse case is also studied for the grand canonical ensemble. Extensions to more general reciprocal models including reciprocal triangle and star densities will likewise be discussed.
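
    A minimal sketch of the two sufficient statistics named here, computed for a toy directed graph under one common normalization (ordered pairs excluding self-loops):

```python
# Minimal sketch: edge density and reciprocal density of a directed graph from
# its adjacency matrix. Toy graph; one common normalization is used.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 1, 0, 0]])        # hypothetical directed adjacency matrix
n = A.shape[0]

n_possible = n * (n - 1)                       # ordered pairs, no self-loops
edge_density = A.sum() / n_possible
# A reciprocated pair contributes A[i, j] * A[j, i] = 1 for both orderings.
reciprocal_density = (A * A.T).sum() / n_possible

print(f"edge density = {edge_density:.3f}, reciprocal density = {reciprocal_density:.3f}")
```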

  16. Entrepreneurship by any other name: self-sufficiency versus innovation.

    PubMed

    Parker Harris, Sarah; Caldwell, Kate; Renko, Maija

    2014-01-01

    Entrepreneurship has been promoted as an innovative strategy to address the employment of people with disabilities. Research has predominantly focused on the self-sufficiency aspect without fully integrating entrepreneurship literature in the areas of theory, systems change, and demonstration projects. Subsequently there are gaps in services, policies, and research in this field that, in turn, have limited our understanding of the support needs and barriers or facilitators of entrepreneurs with disabilities. A thorough analysis of the literature in these areas led to the development of two core concepts that need to be addressed in integrating entrepreneurship into disability employment research and policy: clarity in operational definitions and better disability statistics and outcome measures. This article interrogates existing research and policy efforts in this regard to argue for a necessary shift in the field from focusing on entrepreneurship as self-sufficiency to understanding entrepreneurship as innovation.

  17. Brain fingerprinting classification concealed information test detects US Navy military medical information with P300

    PubMed Central

    Farwell, Lawrence A.; Richardson, Drew C.; Richardson, Graham M.; Furedy, John J.

    2014-01-01

    A classification concealed information test (CIT) used the “brain fingerprinting” method of applying P300 event-related potential (ERP) in detecting information that is (1) acquired in real life and (2) unique to US Navy experts in military medicine. Military medicine experts and non-experts were asked to push buttons in response to three types of text stimuli. Targets contain known information relevant to military medicine, are identified to subjects as relevant, and require pushing one button. Subjects are told to push another button to all other stimuli. Probes contain concealed information relevant to military medicine, and are not identified to subjects. Irrelevants contain equally plausible, but incorrect/irrelevant information. Error rate was 0%. Median and mean statistical confidences for individual determinations were 99.9% with no indeterminates (results lacking sufficiently high statistical confidence to be classified). We compared error rate and statistical confidence for determinations of both information present and information absent produced by classification CIT (Is a probe ERP more similar to a target or to an irrelevant ERP?) vs. comparison CIT (Does a probe produce a larger ERP than an irrelevant?) using P300 plus the late negative component (LNP; together, P300-MERMER). Comparison CIT produced a significantly higher error rate (20%) and lower statistical confidences: mean 67%; information-absent mean was 28.9%, less than chance (50%). We compared analysis using P300 alone with the P300 + LNP. P300 alone produced the same 0% error rate but significantly lower statistical confidences. These findings add to the evidence that the brain fingerprinting methods as described here provide sufficient conditions to produce less than 1% error rate and greater than 95% median statistical confidence in a CIT on information obtained in the course of real life that is characteristic of individuals with specific training, expertise, or organizational affiliation. PMID:25565941

  18. Statistical significance test for transition matrices of atmospheric Markov chains

    NASA Technical Reports Server (NTRS)

    Vautard, Robert; Mo, Kingtse C.; Ghil, Michael

    1990-01-01

    Low-frequency variability of large-scale atmospheric dynamics can be represented schematically by a Markov chain of multiple flow regimes. This Markov chain contains useful information for the long-range forecaster, provided that the statistical significance of the associated transition matrix can be reliably tested. Monte Carlo simulation yields a very reliable significance test for the elements of this matrix. The results of this test agree with previously used empirical formulae when each cluster of maps identified as a distinct flow regime is sufficiently large and when they all contain a comparable number of maps. Monte Carlo simulation provides a more reliable way to test the statistical significance of transitions to and from small clusters. It can determine the most likely transitions, as well as the most unlikely ones, with a prescribed level of statistical significance.
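
    A hedged sketch of a Monte Carlo significance test for a single transition count (the regime labels and the shuffle-based null below are illustrative; this is not the paper's exact procedure):

```python
# Minimal sketch: shuffling a regime sequence destroys temporal structure while
# preserving regime frequencies, giving a null distribution for one transition
# count. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)
regimes = rng.integers(0, 3, size=500)        # hypothetical daily regime labels
regimes[100:200] = np.tile([0, 1], 50)        # inject some 0 -> 1 structure

def transition_count(seq, i, j):
    return int(np.sum((seq[:-1] == i) & (seq[1:] == j)))

observed = transition_count(regimes, 0, 1)

n_sim = 5000
null = np.array([transition_count(rng.permutation(regimes), 0, 1)
                 for _ in range(n_sim)])
p_value = (np.sum(null >= observed) + 1) / (n_sim + 1)   # one-sided, add-one rule
print(f"observed {observed} transitions 0->1, Monte Carlo p = {p_value:.4f}")
```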

  19. On the statistical assessment of classifiers using DNA microarray data

    PubMed Central

    Ancona, N; Maglietta, R; Piepoli, A; D'Addabbo, A; Cotugno, R; Savino, M; Liuni, S; Carella, M; Pesole, G; Perri, F

    2006-01-01

    Background In this paper we present a method for the statistical assessment of cancer predictors which make use of gene expression profiles. The methodology is applied to a new data set of microarray gene expression data collected in Casa Sollievo della Sofferenza Hospital, Foggia – Italy. The data set is made up of normal (22) and tumor (25) specimens extracted from 25 patients affected by colon cancer. We propose to give answers to some questions which are relevant for the automatic diagnosis of cancer such as: Is the size of the available data set sufficient to build accurate classifiers? What is the statistical significance of the associated error rates? In what ways can accuracy be considered dependent on the adopted classification scheme? How many genes are correlated with the pathology and how many are sufficient for an accurate colon cancer classification? The method we propose answers these questions whilst avoiding the potential pitfalls hidden in the analysis and interpretation of microarray data. Results We estimate the generalization error, evaluated through the Leave-K-Out Cross Validation error, for three different classification schemes by varying the number of training examples and the number of the genes used. The statistical significance of the error rate is measured by using a permutation test. We provide a statistical analysis in terms of the frequencies of the genes involved in the classification. Using the whole set of genes, we found that the Weighted Voting Algorithm (WVA) classifier learns the distinction between normal and tumor specimens with 25 training examples, providing e = 21% (p = 0.045) as an error rate. This remains constant even when the number of examples increases. Moreover, Regularized Least Squares (RLS) and Support Vector Machines (SVM) classifiers can learn with only 15 training examples, with an error rate of e = 19% (p = 0.035) and e = 18% (p = 0.037), respectively. Moreover, the error rate decreases as the training set size increases, reaching its best performance with 35 training examples. In this case, RLS and SVM have error rates of e = 14% (p = 0.027) and e = 11% (p = 0.019). Concerning the number of genes, we found about 6000 genes (p < 0.05) correlated with the pathology, resulting from the signal-to-noise statistic. Moreover, the performances of RLS and SVM classifiers do not change when 74% of the genes are used. They progressively decrease to e = 16% (p < 0.05) when only 2 genes are employed. The biological relevance of a set of genes determined by our statistical analysis and the major roles they play in colorectal tumorigenesis is discussed. Conclusions The method proposed provides statistically significant answers to precise questions relevant for the diagnosis and prognosis of cancer. We found that, with as few as 15 examples, it is possible to train statistically significant classifiers for colon cancer diagnosis. As for the definition of the number of genes sufficient for a reliable classification of colon cancer, our results suggest that it depends on the accuracy required. PMID:16919171
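
    A hedged sketch of the permutation-test component, with synthetic data standing in for the expression profiles (the classifier, sizes, and fold count are assumptions, not the paper's exact setup):

```python
# Minimal sketch of attaching a permutation p-value to a cross-validated error
# rate: permute the class labels, re-estimate the CV error, and compare.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_samples, n_genes = 47, 2000                  # sizes echo the study; data are synthetic
y = np.array([0] * 22 + [1] * 25)
X = rng.normal(size=(n_samples, n_genes))
X[y == 1, :20] += 0.8                          # 20 hypothetical informative "genes"

clf = SVC(kernel="linear")
observed_error = 1 - cross_val_score(clf, X, y, cv=5).mean()

n_perm = 200
perm_errors = np.empty(n_perm)
for b in range(n_perm):
    y_perm = rng.permutation(y)                # break the label-expression link
    perm_errors[b] = 1 - cross_val_score(clf, X, y_perm, cv=5).mean()

p_value = (np.sum(perm_errors <= observed_error) + 1) / (n_perm + 1)
print(f"CV error = {observed_error:.2f}, permutation p = {p_value:.3f}")
```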

  20. Methods for the evaluation of alternative disaster warning systems

    NASA Technical Reports Server (NTRS)

    Agnew, C. E.; Anderson, R. J., Jr.; Lanen, W. N.

    1977-01-01

    For each of the methods identified, a theoretical basis is provided and an illustrative example is described. The example includes sufficient realism and detail to enable an analyst to conduct an evaluation of other systems. The methods discussed in the study include equal capability cost analysis, consumers' surplus, and statistical decision theory.

  1. Implicit Language Learning: Adults' Ability to Segment Words in Norwegian

    ERIC Educational Resources Information Center

    Kittleson, Megan M.; Aguilar, Jessica M.; Tokerud, Gry Line; Plante, Elena; Asbjornsen, Arve E.

    2010-01-01

    Previous language learning research reveals that the statistical properties of the input offer sufficient information to allow listeners to segment words from fluent speech in an artificial language. The current pair of studies uses a natural language to test the ecological validity of these findings and to determine whether a listener's language…

  2. Extreme Vertical Gusts in the Atmospheric Boundary Layer

    DTIC Science & Technology

    2015-07-01

    significant effect on the statistics of the rare, extreme gusts. In the lowest 5,000 ft, boundary layer effects make small to moderate vertical... [table-of-contents fragment: 2.4 Effects of Gust Shape] ... Definitions: Adiabatic Lapse Rate, the rate of change of temperature with altitude that would occur if a parcel of air was transported sufficiently

  3. Lod score curves for phase-unknown matings.

    PubMed

    Hulbert-Shearon, T; Boehnke, M; Lange, K

    1996-01-01

    For a phase-unknown nuclear family, we show that the likelihood and lod score are unimodal, and we describe conditions under which the maximum occurs at recombination fraction theta = 0, theta = 1/2, and 0 < theta < 1/2. These simply stated necessary and sufficient conditions seem to have escaped the notice of previous statistical geneticists.

  4. 76 FR 17107 - Fisheries of the Exclusive Economic Zone Off Alaska; Application for an Exempted Fishing Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-28

    ... experimental design requires this quantity of salmon to ensure statistically valid results. The applicant also... encounters sufficient concentrations of salmon and pollock for meeting the experimental design. Groundfish... of the groundfish harvested is expected to be pollock. The experimental design requires this quantity...

  5. Multimedia Presentations in Educational Measurement and Statistics: Design Considerations and Instructional Approaches

    ERIC Educational Resources Information Center

    Sklar, Jeffrey C.; Zwick, Rebecca

    2009-01-01

    Proper interpretation of standardized test scores is a crucial skill for K-12 teachers and school personnel; however, many do not have sufficient knowledge of measurement concepts to appropriately interpret and communicate test results. In a recent four-year project funded by the National Science Foundation, three web-based instructional…

  6. Bootstrapping in a Language of Thought: A Formal Model of Numerical Concept Learning

    ERIC Educational Resources Information Center

    Piantadosi, Steven T.; Tenenbaum, Joshua B.; Goodman, Noah D.

    2012-01-01

    In acquiring number words, children exhibit a qualitative leap in which they transition from understanding a few number words, to possessing a rich system of interrelated numerical concepts. We present a computational framework for understanding this inductive leap as the consequence of statistical inference over a sufficiently powerful…

  7. Housing Survey. Campus Housing: Finding the Balance

    ERIC Educational Resources Information Center

    O'Connor, Shannon

    2016-01-01

    Depending on where you look for statistics, the number of students enrolling in colleges or universities is increasing, decreasing or remaining about the same. Regardless of those trends, campus housing is a marketing tool for institutions looking to draw students to and keep them on campus. Schools need to offer sufficient beds and…

  8. Telehealth Consultation in a Self-Contained Classroom for Behavior: A Pilot Study

    ERIC Educational Resources Information Center

    Knowles, Christen; Massar, Michelle; Raulston, Tracy Jane; Machalicek, Wendy

    2017-01-01

    Students with challenging behavior severe enough to warrant placement in a self-contained special education classroom statistically have poor school and post-school outcomes compared to typical peers. Teachers in these classrooms often lack sufficient training to meet student needs. This pilot study investigated the use of a telehealth…

  9. The Comic Book Project: Forging Alternative Pathways to Literacy

    ERIC Educational Resources Information Center

    Bitz, Michael

    2004-01-01

    Many deep-rooted problems in urban areas of the United States--including crime, poverty, and poor health--correlate with illiteracy. The statistics reported by organizations such as the National Alliance for Urban Literacy Coalitions are telling. Urban citizens who cannot read sufficiently are at a clear disadvantage in life. They are more likely…

  10. Chemical-agnostic hazard prediction: statistical inference of in vitro toxicity pathways from proteomics responses to chemical mixtures

    EPA Science Inventory

    Toxicity pathways have been defined as normal cellular pathways that, when sufficiently perturbed as a consequence of chemical exposure, lead to an adverse outcome. If an exposure alters one or more normal biological pathways to an extent that leads to an adverse toxicity outcome...

  11. The Importance of Physical Fitness versus Physical Activity for Coronary Artery Disease Risk Factors: A Cross-Sectional Analysis.

    ERIC Educational Resources Information Center

    Young, Deborah Rohm; Steinhardt, Mary A.

    1993-01-01

    This cross-sectional study examined relationships among physical fitness, physical activity, and risk factors for coronary artery disease (CAD) in male police officers. Data from screenings and physical fitness assessments indicated physical activity must be sufficient to influence fitness before obtaining statistically significant risk-reducing…

  12. Attention-Deficit/Hyperactivity Disorder Symptoms in Preschool Children: Examining Psychometric Properties Using Item Response Theory

    ERIC Educational Resources Information Center

    Purpura, David J.; Wilson, Shauna B.; Lonigan, Christopher J.

    2010-01-01

    Clear and empirically supported diagnostic symptoms are important for proper diagnosis and treatment of psychological disorders. Unfortunately, the symptoms of many disorders presented in the "Diagnostic and Statistical Manual of Mental Disorders" (4th ed., text rev.; DSM-IV-TR; American Psychiatric Association, 2000) lack sufficient psychometric…

  13. From innervation density to tactile acuity: 1. Spatial representation.

    PubMed

    Brown, Paul B; Koerber, H Richard; Millecchia, Ronald

    2004-06-11

    We tested the hypothesis that the population receptive field representation (a superposition of the excitatory receptive field areas of cells responding to a tactile stimulus) provides spatial information sufficient to mediate one measure of static tactile acuity. In psychophysical tests, two-point discrimination thresholds on the hindlimbs of adult cats varied as a function of stimulus location and orientation, as they do in humans. A statistical model of the excitatory low threshold mechanoreceptive fields of spinocervical, postsynaptic dorsal column and spinothalamic tract neurons was used to simulate the population receptive field representations in this neural population of the one- and two-point stimuli used in the psychophysical experiments. The simulated and observed thresholds were highly correlated. Simulated and observed thresholds' relations to physiological and anatomical variables such as stimulus location and orientation, receptive field size and shape, map scale, and innervation density were strikingly similar. Simulated and observed threshold variations with receptive field size and map scale obeyed simple relationships predicted by the signal detection model, and were statistically indistinguishable from each other. The population receptive field representation therefore contains information sufficient for this discrimination.

  14. Assessment of credit risk based on fuzzy relations

    NASA Astrophysics Data System (ADS)

    Tsabadze, Teimuraz

    2017-06-01

    The purpose of this paper is to develop a new approach for assessing the credit risk of corporate borrowers. Models for borrowers' risk assessment fall into two groups: statistical and theoretical. When assessing the credit risk of corporate borrowers, a statistical model is unacceptable where there is no sufficiently large history of defaults; at the same time, some theoretical models cannot be used in the absence of a stock exchange. In such cases, when a statistical base for a particular borrower does not exist, the decision-making process is always of an expert nature. The paper describes a new approach, based on fuzzy relations, that may be used in group decision-making, and gives an example of its application.

  15. The bag-of-frames approach: A not so sufficient model for urban soundscapes.

    PubMed

    Lagrange, Mathieu; Lafay, Grégoire; Défréville, Boris; Aucouturier, Jean-Julien

    2015-11-01

    The "bag-of-frames" (BOF) approach, which encodes audio signals as the long-term statistical distribution of short-term spectral features, is commonly regarded as an effective and sufficient way to represent environmental sound recordings (soundscapes). The present paper describes a conceptual replication of a use of the BOF approach in a seminal article using several other soundscape datasets, with results strongly questioning the adequacy of the BOF approach for the task. As demonstrated in this paper, the good accuracy originally reported with BOF likely resulted from a particularly permissive dataset with low within-class variability. Soundscape modeling, therefore, may not be the closed case it was once thought to be.

  16. An Algebraic Implicitization and Specialization of Minimum KL-Divergence Models

    NASA Astrophysics Data System (ADS)

    Dukkipati, Ambedkar; Manathara, Joel George

    In this paper we study the representation of KL-divergence minimization, in the case where integer sufficient statistics exist, using tools from polynomial algebra. We show that the estimation of parametric statistical models in this case can be transformed into solving a system of polynomial equations. In particular, we also study the Kullback-Csiszár iteration scheme. We present implicit descriptions of these models and show that implicitization preserves specialization of the prior distribution. This result leads us to a Gröbner bases method for computing an implicit representation of minimum KL-divergence models.

  17. Non-gaussian statistics of pencil beam surveys

    NASA Technical Reports Server (NTRS)

    Amendola, Luca

    1994-01-01

    We study the effect of the non-Gaussian clustering of galaxies on the statistics of pencil beam surveys. We derive the probability from the power spectrum peaks by means of Edgeworth expansion and find that the higher order moments of the galaxy distribution play a dominant role. The probability of obtaining the 128 Mpc/h periodicity found in pencil beam surveys is raised by more than one order of magnitude, up to 1%. Further data are needed to decide if non-Gaussian distribution alone is sufficient to explain the 128 Mpc/h periodicity, or if extra large-scale power is necessary.

  18. Six Guidelines for Interesting Research.

    PubMed

    Gray, Kurt; Wegner, Daniel M

    2013-09-01

    There are many guides on proper psychology, but far fewer on interesting psychology. This article presents six guidelines for interesting research. The first three (Phenomena First; Be Surprising; Grandmothers, Not Scientists) suggest how to choose your research question; the last three (Be The Participant; Simple Statistics; Powerful Beginnings) suggest how to answer your research question and offer perspectives on experimental design, statistical analysis, and effective communication. These guidelines serve as reminders that replicability is necessary but not sufficient for compelling psychological science. Interesting research considers subjective experience; it listens to the music of the human condition. © The Author(s) 2013.

  19. Whose statistical reasoning is facilitated by a causal structure intervention?

    PubMed

    McNair, Simon; Feeney, Aidan

    2015-02-01

    People often struggle when making Bayesian probabilistic estimates on the basis of competing sources of statistical evidence. Recently, Krynski and Tenenbaum (Journal of Experimental Psychology: General, 136, 430-450, 2007) proposed that a causal Bayesian framework accounts for peoples' errors in Bayesian reasoning and showed that, by clarifying the causal relations among the pieces of evidence, judgments on a classic statistical reasoning problem could be significantly improved. We aimed to understand whose statistical reasoning is facilitated by the causal structure intervention. In Experiment 1, although we observed causal facilitation effects overall, the effect was confined to participants high in numeracy. We did not find an overall facilitation effect in Experiment 2 but did replicate the earlier interaction between numerical ability and the presence or absence of causal content. This effect held when we controlled for general cognitive ability and thinking disposition. Our results suggest that clarifying causal structure facilitates Bayesian judgments, but only for participants with sufficient understanding of basic concepts in probability and statistics.
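
    The normative answer in such problems comes from Bayes' rule; a short worked computation with generic illustrative numbers (not the study's stimuli) makes the target of the reasoning explicit:

```python
# Worked Bayes computation for a generic base-rate problem of the kind used in
# this literature (numbers are illustrative):
# P(disease) = 0.01, P(positive | disease) = 0.8, P(positive | no disease) = 0.096.
p_d = 0.01
p_pos_given_d = 0.80
p_pos_given_not_d = 0.096

p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)      # total probability
p_d_given_pos = p_pos_given_d * p_d / p_pos                      # Bayes' rule
print(f"P(disease | positive) = {p_d_given_pos:.3f}")            # ~0.078
```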

  20. Hypothesis-Testing Demands Trustworthy Data—A Simulation Approach to Inferential Statistics Advocating the Research Program Strategy

    PubMed Central

    Krefeld-Schwalb, Antonia; Witte, Erich H.; Zenker, Frank

    2018-01-01

    In psychology as elsewhere, the main statistical inference strategy to establish empirical effects is null-hypothesis significance testing (NHST). The recent failure to replicate allegedly well-established NHST-results, however, implies that such results lack sufficient statistical power, and thus feature unacceptably high error-rates. Using data-simulation to estimate the error-rates of NHST-results, we advocate the research program strategy (RPS) as a superior methodology. RPS integrates Frequentist with Bayesian inference elements, and leads from a preliminary discovery against a (random) H0-hypothesis to a statistical H1-verification. Not only do RPS-results feature significantly lower error-rates than NHST-results, RPS also addresses key-deficits of a “pure” Frequentist and a standard Bayesian approach. In particular, RPS aggregates underpowered results safely. RPS therefore provides a tool to regain the trust the discipline had lost during the ongoing replicability-crisis. PMID:29740363
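
    A minimal sketch of the kind of data simulation described, estimating by brute force how often small-sample NHST detects a modest true effect (the effect size and sample size are assumptions):

```python
# Minimal sketch: simulate many two-group studies with a small true effect and a
# small n, and record how often a t-test reaches p < .05. Illustrative only; this
# is not the RPS machinery itself.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
true_d, n_per_group, n_studies = 0.3, 20, 10_000   # assumed effect size and sample size

p_values = np.empty(n_studies)
for k in range(n_studies):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(true_d, 1.0, n_per_group)
    p_values[k] = stats.ttest_ind(a, b).pvalue

power = np.mean(p_values < 0.05)
print(f"estimated power = {power:.2f}; "
      f"roughly {1 - power:.0%} of such studies miss a real d = {true_d} effect")
```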

  1. Hypothesis-Testing Demands Trustworthy Data-A Simulation Approach to Inferential Statistics Advocating the Research Program Strategy.

    PubMed

    Krefeld-Schwalb, Antonia; Witte, Erich H; Zenker, Frank

    2018-01-01

    In psychology as elsewhere, the main statistical inference strategy to establish empirical effects is null-hypothesis significance testing (NHST). The recent failure to replicate allegedly well-established NHST-results, however, implies that such results lack sufficient statistical power, and thus feature unacceptably high error-rates. Using data-simulation to estimate the error-rates of NHST-results, we advocate the research program strategy (RPS) as a superior methodology. RPS integrates Frequentist with Bayesian inference elements, and leads from a preliminary discovery against a (random) H0-hypothesis to a statistical H1-verification. Not only do RPS-results feature significantly lower error-rates than NHST-results, RPS also addresses key-deficits of a "pure" Frequentist and a standard Bayesian approach. In particular, RPS aggregates underpowered results safely. RPS therefore provides a tool to regain the trust the discipline had lost during the ongoing replicability-crisis.

  2. MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.

  3. Sampling methods for amphibians in streams in the Pacific Northwest.

    Treesearch

    R. Bruce Bury; Paul Stephen Corn

    1991-01-01

    Methods describing how to sample aquatic and semiaquatic amphibians in small streams and headwater habitats in the Pacific Northwest are presented. We developed a technique that samples 10-meter stretches of selected streams, which was adequate to detect presence or absence of amphibian species and provided sample sizes statistically sufficient to compare abundance of...

  4. Designing a Qualitative Data Collection Strategy (QDCS) for Africa - Phase 1: A Gap Analysis of Existing Models, Simulations, and Tools Relating to Africa

    DTIC Science & Technology

    2012-06-01

    generalized behavioral model characterized after the fictional Seldon equations (the one elaborated upon by Isaac Asimov in the 1951 novel, The...Foundation). Asimov described the Seldon equations as essentially statistical models with historical data of a sufficient size and variability that they

  5. A Statistical Portrait of Well-Being in Early Adulthood. CrossCurrents. Issue 2. Publication # 2004-18

    ERIC Educational Resources Information Center

    Brown, Brett V.; Moore, Kristin A.; Bzostek, Sharon

    2004-01-01

    In this data brief, key characteristics of young adults in the United States at or around age 25 are described. These characteristics include: (1) educational attainment and financial self-sufficiency; (2) health behaviors and family formation; and (3) civic involvement. In addition, separate descriptive portraits for the major racial groups and…

  6. STEM Attrition: College Students' Paths into and out of STEM Fields. Statistical Analysis Report. NCES 2014-001

    ERIC Educational Resources Information Center

    Chen, Xianglei

    2013-01-01

    Producing sufficient numbers of graduates who are prepared for science, technology, engineering, and mathematics (STEM) occupations has become a national priority in the United States. To attain this goal, some policymakers have targeted reducing STEM attrition in college, arguing that retaining more students in STEM fields in college is a…

  7. Phenotype profiling and multivariate statistical analysis of Spur-pruning type Grapevine in National Clonal Germplasm Repository (NCGR, Davis)

    USDA-ARS?s Scientific Manuscript database

    Most Korean vineyards employ a spur-pruning type modified-T trellis system. This production system is suitable for spur-pruning type cultivars, but most European table grape cultivars are not adaptable to it because their fruitfulness is sufficient only under a cane-pruning type system. A total of 20 fruit ch...

  8. A Meta-Analysis of Suggestopedia, Suggestology, Suggestive-accelerative Learning and Teaching (SALT), and Super-learning.

    ERIC Educational Resources Information Center

    Moon, Charles E.; And Others

    Forty studies using one or more components of Lozanov's method of suggestive-accelerative learning and teaching were identified from a search of all issues of the "Journal of Suggestive-Accelerative Learning and Teaching." Fourteen studies contained sufficient statistics to compute effect sizes. The studies were coded according to substantive and…

  9. Speed-Accuracy Response Models: Scoring Rules Based on Response Time and Accuracy

    ERIC Educational Resources Information Center

    Maris, Gunter; van der Maas, Han

    2012-01-01

    Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…

  10. Market structure in U.S. southern pine roundwood

    Treesearch

    Matthew F. Bingham; Jeffrey P. Prestemon; Douglas J. MacNair; Robert C. Abt

    2003-01-01

    Time series of commodity prices from multiple locations can behave as if responding to forces of spatial arbitrage, even while such prices may instead be responding similarly to common factors aside from spatial arbitrage. Hence, while the Law of One Price may hold as a statistical concept, its acceptance is not sufficient to conclude market integration. We tested...

  11. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 with each BioOptics, CSO, Konan, and Topcon corneal specular microscopes. All endothelial image data were analyzed by respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors based on Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
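
    The sample-size logic can be sketched with the textbook relation RE ≈ z * CV / sqrt(n), offered as an assumption; the exact method implemented in the Cells Analyzer software is not described in the abstract:

```python
# Sketch of a "customized sample size": if cell measurements have coefficient of
# variation CV, the relative error of the mean estimated from n cells at
# reliability 1 - alpha is roughly RE = z * CV / sqrt(n). Textbook formula only.
import math
from scipy import stats

def required_cells(cv, re_target=0.05, reliability=0.95):
    z = stats.norm.ppf(1 - (1 - reliability) / 2)   # two-sided critical value
    return math.ceil((z * cv / re_target) ** 2)

def relative_error(cv, n, reliability=0.95):
    z = stats.norm.ppf(1 - (1 - reliability) / 2)
    return z * cv / math.sqrt(n)

# Hypothetical example: CV of 30% across cells, 90 cells counted.
print(f"RE with 90 cells: {relative_error(0.30, 90):.3f}")      # ~0.062 > 0.05
print(f"cells needed for RE <= 0.05: {required_cells(0.30)}")   # ~139
```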

  12. The 90-day report for SL4 experiment S019: UV stellar astronomy

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The use of Experiment S019 to obtain moderate dispersion stellar spectra extending down to 1300A with sufficient spectral resolution to permit the study of ultraviolet (UV) line spectra and of spectral energy distributions of early-type stars is studied. Data obtained from this experiment should be of sufficient accuracy to permit detailed physical analysis of individual stars and nebulae, but an even more basic consideration is the expectation of obtaining spectra of a sufficient number of stars so that a statistically meaningful survey may be made of the UV spectra of a wide variety of star types. These should include all luminosity classes of spectral types O, B and A, as well as peculiar stars such as Wolf-Rayet stars and Ap or Am stars. An attempt was also made to obtain, in the no-prism mode, low dispersion UV spectra in a number of Milky Way star fields and in nearby galaxies.

  13. Football goal distributions and extremal statistics

    NASA Astrophysics Data System (ADS)

    Greenhough, J.; Birch, P. C.; Chapman, S. C.; Rowlands, G.

    2002-12-01

    We analyse the distributions of the number of goals scored by home teams, away teams, and the total scored in the match, in domestic football games from 169 countries between 1999 and 2001. The probability density functions (PDFs) of goals scored are too heavy-tailed to be fitted over their entire ranges by Poisson or negative binomial distributions which would be expected for uncorrelated processes. Log-normal distributions cannot include zero scores and here we find that the PDFs are consistent with those arising from extremal statistics. In addition, we show that it is sufficient to model English top division and FA Cup matches in the seasons of 1970/71-2000/01 on Poisson or negative binomial distributions, as reported in analyses of earlier seasons, and that these are not consistent with extremal statistics.
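
    A minimal sketch of the kind of comparison described above: fit a Poisson and a negative binomial distribution to a vector of goals-per-match counts and compare their upper tails with the empirical tail. The `goals` vector is invented example data, not the study's dataset.

```python
# Sketch under simple assumptions: fit Poisson and negative binomial distributions to a
# vector of goals-per-match counts and compare their upper tails; `goals` is hypothetical.
import numpy as np
from scipy import stats

goals = np.array([0, 1, 1, 2, 0, 3, 5, 1, 2, 0, 4, 1, 2, 2, 7])  # example counts

# Poisson: the MLE of the rate is the sample mean.
lam = goals.mean()

# Negative binomial via the method of moments (requires variance > mean).
mean, var = goals.mean(), goals.var(ddof=1)
p = mean / var                 # success probability
r = mean * p / (1.0 - p)       # number-of-failures parameter

threshold = 5
print("P(X >= 5), Poisson :", stats.poisson.sf(threshold - 1, lam))
print("P(X >= 5), Neg. bin.:", stats.nbinom.sf(threshold - 1, r, p))
print("Empirical frequency :", np.mean(goals >= threshold))
```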

  14. High Variability in Cellular Stoichiometry of Carbon, Nitrogen, and Phosphorus Within Classes of Marine Eukaryotic Phytoplankton Under Sufficient Nutrient Conditions.

    PubMed

    Garcia, Nathan S; Sexton, Julie; Riggins, Tracey; Brown, Jeff; Lomas, Michael W; Martiny, Adam C

    2018-01-01

    Current hypotheses suggest that cellular elemental stoichiometry of marine eukaryotic phytoplankton such as the ratios of cellular carbon:nitrogen:phosphorus (C:N:P) vary between phylogenetic groups. To investigate how phylogenetic structure, cell volume, growth rate, and temperature interact to affect the cellular elemental stoichiometry of marine eukaryotic phytoplankton, we examined the C:N:P composition in 30 isolates across 7 classes of marine phytoplankton that were grown with a sufficient supply of nutrients and nitrate as the nitrogen source. The isolates covered a wide range in cell volume (5 orders of magnitude), growth rate (<0.01-0.9 d⁻¹), and habitat temperature (2-24°C). Our analysis indicates that C:N:P is highly variable, with statistical model residuals accounting for over half of the total variance and no relationship between phylogeny and elemental stoichiometry. Furthermore, our data indicated that variability in C:P, N:P, and C:N within Bacillariophyceae (diatoms) was as high as that among all of the isolates that we examined. In addition, a linear statistical model identified a positive relationship between diatom cell volume and C:P and N:P. Among all of the isolates that we examined, the statistical model identified temperature as a significant factor, consistent with the temperature-dependent translation efficiency model, but temperature only explained 5% of the total statistical model variance. While some of our results support data from previous field studies, the high variability of elemental ratios within Bacillariophyceae contradicts previous work that suggests that this cosmopolitan group of microalgae has consistently low C:P and N:P ratios in comparison with other groups.

  15. Modeling Ka-band low elevation angle propagation statistics

    NASA Technical Reports Server (NTRS)

    Russell, Thomas A.; Weinfield, John; Pearson, Chris; Ippolito, Louis J.

    1995-01-01

    The statistical variability of the secondary atmospheric propagation effects on satellite communications cannot be ignored at frequencies of 20 GHz or higher, particularly if the propagation margin allocation is such that link availability falls below 99 percent. The secondary effects considered in this paper are gaseous absorption, cloud absorption, and tropospheric scintillation; rain attenuation is the primary effect. Techniques and example results are presented for estimation of the overall combined impact of the atmosphere on satellite communications reliability. Statistical methods are employed throughout and the most widely accepted models for the individual effects are used wherever possible. The degree of correlation between the effects is addressed and some bounds on the expected variability in the combined effects statistics are derived from the expected variability in correlation. Example estimates are presented of combined effects statistics in the Washington D.C. area at 20 GHz and a 5 deg elevation angle. The statistics of water vapor are shown to be sufficient for estimation of the statistics of gaseous absorption at 20 GHz. A computer model based on monthly surface weather is described and tested. Significant improvement in prediction of absorption extremes is demonstrated with the use of path weather data instead of surface data.
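
    The combination of secondary effects can be illustrated with a small Monte Carlo sketch (not the paper's model): draw correlated samples of the individual attenuation components, sum them with the rain term, and read off exceedance statistics. All distribution parameters and the correlation value below are hypothetical placeholders.

```python
# Illustrative sketch, not the paper's model: Monte Carlo combination of correlated
# atmospheric effects (gaseous and cloud absorption, scintillation) on top of rain
# attenuation, to show how inter-effect correlation shifts the combined exceedance curve.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
rho = 0.5                                   # assumed cloud-scintillation correlation

# Correlated standard normals via a Cholesky factor.
z = rng.standard_normal((n, 2)) @ np.linalg.cholesky([[1.0, rho], [rho, 1.0]]).T

gas = np.full(n, 0.5)                       # dB, nearly deterministic gaseous absorption
cloud = np.exp(-1.0 + 0.5 * z[:, 0])        # dB, lognormal cloud absorption
scint = 0.3 * np.abs(z[:, 1])               # dB, scintillation fade magnitude
rain = rng.exponential(0.8, n)              # dB, crude stand-in for rain attenuation

total = gas + cloud + scint + rain
for p in (0.1, 0.01):                       # exceedance probabilities of interest
    print(f"Attenuation exceeded {p:.0%} of the time: {np.quantile(total, 1 - p):.2f} dB")
```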

  16. Post-Disaster Food and Nutrition from Urban Agriculture: A Self-Sufficiency Analysis of Nerima Ward, Tokyo.

    PubMed

    Sioen, Giles Bruno; Sekiyama, Makiko; Terada, Toru; Yokohari, Makoto

    2017-07-10

    Background: Post-earthquake studies from around the world have reported that survivors relying on emergency food for prolonged periods of time experienced several dietary related health problems. The present study aimed to quantify the potential nutrient production of urban agricultural vegetables and the resulting nutritional self-sufficiency throughout the year for mitigating post-disaster situations. Methods: We estimated the vegetable production of urban agriculture throughout the year. Two methods were developed to capture the production from professional and hobby farms: Method I utilized secondary governmental data on agricultural production from professional farms, and Method II was based on a supplementary spatial analysis to estimate the production from hobby farms. Next, the weight of produced vegetables [t] was converted into nutrients [kg]. Furthermore, the self-sufficiency by nutrient and time of year was estimated by incorporating the reference consumption of vegetables [kg], recommended dietary allowance of nutrients per capita [mg], and population statistics. The research was conducted in Nerima, the second most populous ward of Tokyo's 23 special wards. Self-sufficiency rates were calculated with the registered residents. Results: The estimated total vegetable production of 5660 tons was equivalent to a weight-based self-sufficiency rate of 6.18%. The average nutritional self-sufficiencies of Methods I and II were 2.48% and 0.38%, respectively, resulting in an aggregated average of 2.86%. Fluctuations throughout the year were observed according to the harvest seasons of the available crops. Vitamin K (6.15%) had the highest self-sufficiency of selected nutrients, while calcium had the lowest (0.96%). Conclusions: This study suggests that depending on the time of year, urban agriculture has the potential to contribute nutrients to diets during post-disaster situations as disaster preparedness food. Emergency responses should be targeted according to the time of year the disaster takes place to meet nutrient requirements in periods of low self-sufficiency and prevent gastrointestinal symptoms and cardiovascular diseases among survivors.
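
    The per-nutrient self-sufficiency calculation implied by the Methods reduces to simple arithmetic: nutrients produced by urban agriculture divided by the population's annual requirement. The sketch below uses hypothetical crop, nutrient-content, and RDA values; only the population is of the ward's order of magnitude.

```python
# Back-of-the-envelope sketch of the self-sufficiency calculation described in the Methods.
# The vegetable list, nutrient contents, and RDA value are hypothetical placeholders.
production_t = {"cabbage": 2000, "spinach": 800, "carrot": 900}      # tonnes per year
vitamin_k_mg_per_kg = {"cabbage": 0.76, "spinach": 4.83, "carrot": 0.13}

population = 730_000            # registered residents (order of magnitude of Nerima Ward)
rda_vitamin_k_mg = 0.15         # assumed adult RDA, mg per capita per day

produced_mg = sum(production_t[v] * 1000 * vitamin_k_mg_per_kg[v] for v in production_t)
required_mg = rda_vitamin_k_mg * population * 365

print(f"Vitamin K self-sufficiency: {100 * produced_mg / required_mg:.1f}%")
```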

  17. Post-Disaster Food and Nutrition from Urban Agriculture: A Self-Sufficiency Analysis of Nerima Ward, Tokyo

    PubMed Central

    Sekiyama, Makiko; Terada, Toru; Yokohari, Makoto

    2017-01-01

    Background: Post-earthquake studies from around the world have reported that survivors relying on emergency food for prolonged periods of time experienced several dietary related health problems. The present study aimed to quantify the potential nutrient production of urban agricultural vegetables and the resulting nutritional self-sufficiency throughout the year for mitigating post-disaster situations. Methods: We estimated the vegetable production of urban agriculture throughout the year. Two methods were developed to capture the production from professional and hobby farms: Method I utilized secondary governmental data on agricultural production from professional farms, and Method II was based on a supplementary spatial analysis to estimate the production from hobby farms. Next, the weight of produced vegetables [t] was converted into nutrients [kg]. Furthermore, the self-sufficiency by nutrient and time of year was estimated by incorporating the reference consumption of vegetables [kg], recommended dietary allowance of nutrients per capita [mg], and population statistics. The research was conducted in Nerima, the second most populous ward of Tokyo’s 23 special wards. Self-sufficiency rates were calculated with the registered residents. Results: The estimated total vegetable production of 5660 tons was equivalent to a weight-based self-sufficiency rate of 6.18%. The average nutritional self-sufficiencies of Methods I and II were 2.48% and 0.38%, respectively, resulting in an aggregated average of 2.86%. Fluctuations throughout the year were observed according to the harvest seasons of the available crops. Vitamin K (6.15%) had the highest self-sufficiency of selected nutrients, while calcium had the lowest (0.96%). Conclusions: This study suggests that depending on the time of year, urban agriculture has the potential to contribute nutrients to diets during post-disaster situations as disaster preparedness food. Emergency responses should be targeted according to the time of year the disaster takes place to meet nutrient requirements in periods of low self-sufficiency and prevent gastrointestinal symptoms and cardiovascular diseases among survivors. PMID:28698515

  18. Classicality condition on a system observable in a quantum measurement and a relative-entropy conservation law

    NASA Astrophysics Data System (ADS)

    Kuramochi, Yui; Ueda, Masahito

    2015-03-01

    We consider the information flow on a system observable X corresponding to a positive-operator-valued measure under a quantum measurement process Y described by a completely positive instrument from the viewpoint of the relative entropy. We establish a sufficient condition for the relative-entropy conservation law which states that the average decrease in the relative entropy of the system observable X equals the relative entropy of the measurement outcome of Y, i.e., the information gain due to measurement. This sufficient condition is interpreted as an assumption of classicality in the sense that there exists a sufficient statistic in a joint successive measurement of Y followed by X such that the probability distribution of the statistic coincides with that of a single measurement of X for the premeasurement state. We show that in the case when X is a discrete projection-valued measure and Y is discrete, the classicality condition is equivalent to the relative-entropy conservation for arbitrary states. The general theory on the relative-entropy conservation is applied to typical quantum measurement models, namely, quantum nondemolition measurement, destructive sharp measurements on two-level systems, photon counting, quantum counting, and homodyne and heterodyne measurements. These examples, except for the nondemolition and photon-counting measurements, do not satisfy the known Shannon-entropy conservation law proposed by Ban [M. Ban, J. Phys. A: Math. Gen. 32, 1643 (1999), 10.1088/0305-4470/32/9/012], implying that our approach based on the relative entropy is applicable to a wider class of quantum measurements.
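
    Schematically, and with notation introduced here rather than taken from the paper, the conservation law stated above can be written as:

```latex
% Schematic rendering of the relative-entropy conservation law described in the abstract.
% Notation (assumed here): \rho is the premeasurement state, \sigma a reference state,
% P_X^\rho the outcome distribution of X, and \rho_y the post-measurement state for
% outcome y of the instrument Y with probability p_Y^\rho(y).
\[
  D\!\left(P_X^{\rho}\,\middle\|\,P_X^{\sigma}\right)
  \;-\; \sum_{y} p_Y^{\rho}(y)\,
  D\!\left(P_X^{\rho_y}\,\middle\|\,P_X^{\sigma_y}\right)
  \;=\;
  D\!\left(P_Y^{\rho}\,\middle\|\,P_Y^{\sigma}\right),
\]
% i.e., the average decrease in the relative entropy of the system observable X equals the
% relative entropy (information gain) of the measurement outcome of Y.
```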

  19. Annual modulation of seismicity along the San Andreas Fault near Parkfield, CA

    USGS Publications Warehouse

    Christiansen, L.B.; Hurwitz, S.; Ingebritsen, S.E.

    2007-01-01

    We analyze seismic data from the San Andreas Fault (SAF) near Parkfield, California, to test for annual modulation in seismicity rates. We use statistical analyses to show that seismicity is modulated with an annual period in the creeping section of the fault and a semiannual period in the locked section of the fault. Although the exact mechanism for seasonal triggering is undetermined, it appears that stresses associated with the hydrologic cycle are sufficient to fracture critically stressed rocks either through pore-pressure diffusion or crustal loading/unloading. These results shed additional light on the state of stress along the SAF, indicating that hydrologically induced stress perturbations of ~2 kPa may be sufficient to trigger earthquakes.

  20. The Content of Statistical Requirements for Authors in Biomedical Research Journals

    PubMed Central

    Liu, Tian-Yi; Cai, Si-Yu; Nie, Xiao-Lu; Lyu, Ya-Qi; Peng, Xiao-Xia; Feng, Guo-Shuang

    2016-01-01

    Background: Robust statistical design, sound statistical analysis, and standardized presentation are important to enhance the quality and transparency of biomedical research. This systematic review was conducted to summarize the statistical reporting requirements introduced by biomedical research journals with an impact factor of 10 or above, so that researchers give serious consideration to statistical issues not only at the stage of data analysis but also at the stage of methodological design. Methods: Detailed statistical instructions for authors were downloaded from the homepage of each of the included journals or obtained directly from the editors via email. Then, we described the types and numbers of statistical guidelines introduced by different press groups. Items of statistical reporting guidelines as well as particular requirements were summarized by frequency and grouped into design, method of analysis, and presentation, respectively. Finally, updated statistical guidelines and particular requirements for improvement were summed up. Results: In total, 21 of 23 press groups introduced at least one statistical guideline. More than half of the press groups gradually update their statistical instructions for authors as new statistical reporting guidelines are issued. In addition, 16 press groups, covering 44 journals, address particular statistical requirements. Most of the particular requirements focused on the performance of statistical analysis and transparency in statistical reporting, including “address issues relevant to research design, including participant flow diagram, eligibility criteria, and sample size estimation,” and “statistical methods and the reasons.” Conclusions: Statistical requirements for authors are becoming increasingly refined. They remind researchers to give sufficient consideration not only to statistical methods during research design but also to standardized statistical reporting, which would be beneficial in providing stronger evidence and making critical appraisal of that evidence more accessible. PMID:27748343

  1. The Content of Statistical Requirements for Authors in Biomedical Research Journals.

    PubMed

    Liu, Tian-Yi; Cai, Si-Yu; Nie, Xiao-Lu; Lyu, Ya-Qi; Peng, Xiao-Xia; Feng, Guo-Shuang

    2016-10-20

    Robust statistical design, sound statistical analysis, and standardized presentation are important to enhance the quality and transparency of biomedical research. This systematic review was conducted to summarize the statistical reporting requirements introduced by biomedical research journals with an impact factor of 10 or above, so that researchers give serious consideration to statistical issues not only at the stage of data analysis but also at the stage of methodological design. Detailed statistical instructions for authors were downloaded from the homepage of each of the included journals or obtained directly from the editors via email. Then, we described the types and numbers of statistical guidelines introduced by different press groups. Items of statistical reporting guidelines as well as particular requirements were summarized by frequency and grouped into design, method of analysis, and presentation, respectively. Finally, updated statistical guidelines and particular requirements for improvement were summed up. In total, 21 of 23 press groups introduced at least one statistical guideline. More than half of the press groups gradually update their statistical instructions for authors as new statistical reporting guidelines are issued. In addition, 16 press groups, covering 44 journals, address particular statistical requirements. Most of the particular requirements focused on the performance of statistical analysis and transparency in statistical reporting, including "address issues relevant to research design, including participant flow diagram, eligibility criteria, and sample size estimation," and "statistical methods and the reasons." Statistical requirements for authors are becoming increasingly refined. They remind researchers to give sufficient consideration not only to statistical methods during research design but also to standardized statistical reporting, which would be beneficial in providing stronger evidence and making critical appraisal of that evidence more accessible.

  2. Accuracy assessment of maps of forest condition: Statistical design and methodological considerations [Chapter 5

    Treesearch

    Raymond L. Czaplewski

    2003-01-01

    No thematic map is perfect. Some pixels or polygons are not accurately classified, no matter how well the map is crafted. Therefore, thematic maps need metadata that sufficiently characterize the nature and degree of these imperfections. To decision-makers, an accuracy assessment helps judge the risks of using imperfect geospatial data. To analysts, an accuracy...

  3. Machine Learning in the Presence of an Adversary: Attacking and Defending the SpamBayes Spam Filter

    DTIC Science & Technology

    2008-05-20

    Machine learning techniques are often used for decision making in security-critical applications such as intrusion detection and spam filtering...filter. The defenses shown in this thesis are able to work against the attacks developed against SpamBayes and are sufficiently generic to be easily extended to other statistical machine learning algorithms.

  4. Integration of Advanced Statistical Analysis Tools and Geophysical Modeling

    DTIC Science & Technology

    2012-08-01

    Carin Duke University Douglas Oldenburg University of British Columbia Stephen Billings Leonard Pasion Laurens Beran Sky Research...data processing for UXO discrimination is the time (or frequency) dependent dipole model (Bell and Barrow (2001), Pasion and Oldenburg (2001), Zhang...described by a bimodal distribution (i.e. two Gaussians, see Pasion (2007)). Data features are nonetheless useful when data quality is not sufficient

  5. Technological challenges for hydrocarbon production in the Barents Sea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gudmestad, O.T.; Strass, P.

    1995-02-01

    Technological challenges for hydrocarbon production in the Barents Sea relate mainly to the climatic conditions (ice and icebergs), to the relatively deep water of the area, and to the distance to the market for transportation of gas. It is suggested that environmental conditions must be carefully mapped over a sufficiently long period to get reliable statistics for the area.

  6. Simultaneous Use of Multiple Answer Copying Indexes to Improve Detection Rates

    ERIC Educational Resources Information Center

    Wollack, James A.

    2006-01-01

    Many of the currently available statistical indexes to detect answer copying lack sufficient power at small α levels or when the amount of copying is relatively small. Furthermore, there is no one index that is uniformly best. Depending on the type or amount of copying, certain indexes are better than others. The purpose of this article was…

  7. Publication Bias in "Red, Rank, and Romance in Women Viewing Men," by Elliot et al. (2010)

    ERIC Educational Resources Information Center

    Francis, Gregory

    2013-01-01

    Elliot et al. (2010) reported multiple experimental findings that the color red modified women's ratings of attractiveness, sexual desirability, and status of a photographed man. An analysis of the reported statistics of these studies indicates that the experiments lack sufficient power to support these claims. Given the power of the experiments,…

  8. Invariant target detection by a correlation radiometer

    NASA Astrophysics Data System (ADS)

    Murza, L. P.

    1986-12-01

    The paper is concerned with the problem of the optimal detection of a heat-emitting target by a two-channel radiometer with an unstable amplification circuit. An expression is obtained for an asymptotically sufficient detection statistic which is invariant to changes in the amplification coefficients of the channels. The algorithm proposed here can be implemented numerically using a relatively simple program.

  9. Experimental research on mathematical modelling and unconventional control of clinker kiln in cement plants

    NASA Astrophysics Data System (ADS)

    Rusu-Anghel, S.

    2017-01-01

    Analytical modeling of the cement manufacturing process is difficult because of its complexity and has not resulted in sufficiently precise mathematical models. In this paper, based on a statistical model of the process and using the knowledge of human experts, a fuzzy system was designed for automatic control of the clinkering process.

  10. Statistical machine translation for biomedical text: are we there yet?

    PubMed

    Wu, Cuijun; Xia, Fei; Deleger, Louise; Solti, Imre

    2011-01-01

    In our paper we addressed the research question: "Has machine translation achieved sufficiently high quality to translate PubMed titles for patients?". We analyzed statistical machine translation output for six foreign language-English translation pairs (bi-directionally). We built a high-performing in-house system and evaluated its output for each translation pair on a large scale, both with automated BLEU scores and human judgment. In addition to the in-house system, we also evaluated Google Translate's performance specifically within the biomedical domain. We report high performance for the German-English, French-English and Spanish-English bi-directional translation pairs for both Google Translate and our system.

  11. Risk-based Methodology for Validation of Pharmaceutical Batch Processes.

    PubMed

    Wiles, Frederick

    2013-01-01

    In January 2011, the U.S. Food and Drug Administration published new process validation guidance for pharmaceutical processes. The new guidance debunks the long-held industry notion that three consecutive validation batches or runs are all that are required to demonstrate that a process is operating in a validated state. Instead, the new guidance now emphasizes that the level of monitoring and testing performed during process performance qualification (PPQ) studies must be sufficient to demonstrate statistical confidence both within and between batches. In some cases, three qualification runs may not be enough. Nearly two years after the guidance was first published, little has been written defining a statistical methodology for determining the number of samples and qualification runs required to satisfy Stage 2 requirements of the new guidance. This article proposes using a combination of risk assessment, control charting, and capability statistics to define the monitoring and testing scheme required to show that a pharmaceutical batch process is operating in a validated state. In this methodology, an assessment of process risk is performed through application of a process failure mode, effects, and criticality analysis (PFMECA). The output of PFMECA is used to select appropriate levels of statistical confidence and coverage which, in turn, are used in capability calculations to determine when significant Stage 2 (PPQ) milestones have been met. The achievement of Stage 2 milestones signals the release of batches for commercial distribution and the reduction of monitoring and testing to commercial production levels. Individuals, moving range, and range/sigma charts are used in conjunction with capability statistics to demonstrate that the commercial process is operating in a state of statistical control. The new process validation guidance published by the U.S. Food and Drug Administration in January of 2011 indicates that the number of process validation batches or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to augment this shortcoming. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology for determining when a statistically valid number of validation runs has been acquired based on risk assessment and calculation of process capability.
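
    As one concrete ingredient of such a scheme, the sketch below computes the standard one-sided normal tolerance factor (for a chosen confidence and coverage) from the noncentral t distribution; a PPQ acceptance criterion of the kind discussed could require, for example, that mean − k·s stays above the lower specification limit. The sample sizes and the 99% coverage / 95% confidence choices are hypothetical, and this is not the article's specific acceptance procedure.

```python
# Sketch, not the article's specific procedure: the standard one-sided normal tolerance
# factor k (confidence gamma, coverage p) computed via the noncentral t distribution.
# Inputs below are hypothetical.
from math import sqrt
from scipy.stats import norm, nct

def one_sided_tolerance_factor(n: int, coverage: float = 0.99, confidence: float = 0.95) -> float:
    """k such that x_bar - k*s bounds the lower `coverage` fraction with `confidence`."""
    delta = norm.ppf(coverage) * sqrt(n)            # noncentrality parameter
    return nct.ppf(confidence, n - 1, delta) / sqrt(n)

if __name__ == "__main__":
    for n in (10, 30, 60):                          # candidate Stage 2 (PPQ) sample sizes
        print(n, round(one_sided_tolerance_factor(n), 3))
```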

  12. Educational quality and the crisis of educational research

    NASA Astrophysics Data System (ADS)

    Heyneman, Stephen

    1993-11-01

    This paper was designed not as a research product but as a speech to comparative education colleagues. It argues that there is a crisis of educational quality in many parts of the world, and that there is a parallel crisis in the quality of educational research and statistics. Compared to other major public responsibilities in health, agriculture, population and family planning, educational statistics are poor and often getting worse. Our international and national statistical institutions are impoverished, and we as a profession have been part of the problem. We have been so busy arguing over differing research paradigms that we have not paid sufficient attention to our common professional responsibilities and common professional goals. The paper suggests that we, as professionals interested in comparative education issues, begin to act together more on these common and important issues.

  13. Does size matter? Statistical limits of paleomagnetic field reconstruction from small rock specimens

    NASA Astrophysics Data System (ADS)

    Berndt, Thomas; Muxworthy, Adrian R.; Fabian, Karl

    2016-01-01

    As samples of ever-decreasing sizes are being studied paleomagnetically, care has to be taken that the underlying assumptions of statistical thermodynamics (Maxwell-Boltzmann statistics) are being met. Here we determine how many grains and how large a magnetic moment a sample needs to have to be able to accurately record an ambient field. It is found that for samples with a thermoremanent magnetic moment larger than 10⁻¹¹ Am² the assumption of a sufficiently large number of grains is usually satisfied. Standard 25 mm diameter paleomagnetic samples usually contain enough magnetic grains such that statistical errors are negligible, but "single silicate crystal" works on, for example, zircon, plagioclase, and olivine crystals are approaching the limits of what is physically possible, leading to statistical errors in both the angular deviation and paleointensity that are comparable to other sources of error. The reliability of nanopaleomagnetic imaging techniques capable of resolving individual grains (used, for example, to study the cloudy zone in meteorites), however, is questionable due to the limited area of the material covered.

  14. Lagrangian statistics of turbulent dispersion from 8192³ direct numerical simulation of isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Buaria, Dhawal; Yeung, P. K.; Sawford, B. L.

    2016-11-01

    An efficient massively parallel algorithm has allowed us to obtain the trajectories of 300 million fluid particles in an 8192³ simulation of isotropic turbulence at Taylor-scale Reynolds number 1300. Conditional single-particle statistics are used to investigate the effect of extreme events in dissipation and enstrophy on turbulent dispersion. The statistics of pairs and tetrads, both forward and backward in time, are obtained via post-processing of single-particle trajectories. For tetrads, since memory of shape is known to be short, we focus, for convenience, on samples which are initially regular, with all sides of comparable length. The statistics of tetrad size show similar behavior as the two-particle relative dispersion, i.e., stronger backward dispersion at intermediate times with larger backward Richardson constant. In contrast, the statistics of tetrad shape show more robust inertial range scaling, in both forward and backward frames. However, the distortion of shape is stronger for backward dispersion. Our results suggest that the Reynolds number reached in this work is sufficient to settle some long-standing questions concerning Lagrangian scale similarity. Supported by NSF Grants CBET-1235906 and ACI-1036170.

  15. Conformity and statistical tolerancing

    NASA Astrophysics Data System (ADS)

    Leblond, Laurent; Pillet, Maurice

    2018-02-01

    Statistical tolerancing was first proposed by Shewhart (Economic Control of Quality of Manufactured Product, (1931) reprinted 1980 by ASQC). In spite of this long history, its use remains moderate. One of the probable reasons for this low utilization is undoubtedly the difficulty for designers to anticipate the risks of this approach. The arithmetic tolerance (worst case) allows a simple interpretation: conformity is defined by the presence of the characteristic in an interval. Statistical tolerancing is more complex in its definition. An interval is not sufficient to define conformity. To justify the statistical tolerancing formula used by designers, a tolerance interval should be interpreted as the interval where most of the parts produced should probably be located. This tolerance is justified by considering a conformity criterion of the parts guaranteeing low offsets on the latter characteristics. Unlike traditional arithmetic tolerancing, statistical tolerancing requires a sustained exchange of information between design and manufacture to be used safely. This paper proposes a formal definition of conformity, which we apply successively to quadratic and arithmetic tolerancing. We introduce a concept of concavity, which helps us to demonstrate the link between the tolerancing approach and conformity. We use this concept to demonstrate the various acceptable propositions of statistical tolerancing (in the decentring-dispersion space).
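
    The contrast between the two approaches can be made concrete with a small sketch, assuming independent and centred contributions; the tolerance values are hypothetical.

```python
# Minimal sketch of the contrast the paper formalizes: worst-case (arithmetic) stacking of
# component tolerances versus quadratic (statistical) stacking, assuming independent,
# centred contributions. Tolerance values are hypothetical.
from math import sqrt

tolerances = [0.10, 0.05, 0.08, 0.12]                # +/- tolerances of the chain components, mm

arithmetic = sum(tolerances)                         # worst case: every part at its limit
quadratic = sqrt(sum(t * t for t in tolerances))     # statistical (root-sum-square) stack

print(f"worst-case stack : +/- {arithmetic:.3f} mm")
print(f"statistical stack: +/- {quadratic:.3f} mm")
```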

  16. The large sample size fallacy.

    PubMed

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
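
    A toy illustration of the fallacy, with synthetic data: a mean difference of 0.02 standard deviations is practically negligible, yet with 200,000 observations per group it produces an extreme p-value.

```python
# Toy illustration of the large sample size fallacy: a trivially small mean difference
# becomes "extremely statistically significant" once the sample is large enough,
# while the effect size (Cohen's d) stays negligible. All numbers are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200_000
a = rng.normal(loc=0.00, scale=1.0, size=n)
b = rng.normal(loc=0.02, scale=1.0, size=n)    # true difference of 0.02 SD

t, p = stats.ttest_ind(a, b)
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = (b.mean() - a.mean()) / pooled_sd

print(f"p-value   = {p:.2e}")                  # tiny p-value despite a trivial effect
print(f"Cohen's d = {cohens_d:.3f}")           # ~0.02, i.e., practically negligible
```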

  17. Statistical analysis of NaOH pretreatment effects on sweet sorghum bagasse characteristics

    NASA Astrophysics Data System (ADS)

    Putri, Ary Mauliva Hada; Wahyuni, Eka Tri; Sudiyani, Yanni

    2017-01-01

    We analyze the behavior of sweet sorghum bagasse characteristics before and after NaOH pretreatments by statistical analysis. These characteristics include the percentages of lignocellulosic materials and the degree of crystallinity. We use the chi-square method to obtain the values of fitted parameters, and then apply Student's t-test to check whether they are significantly different from zero at the 99.73% confidence level (C.L.). We find that the percentages of hemicellulose and lignin decrease statistically after pretreatment. On the other hand, crystallinity does not show similar behavior, as the data indicate that all fitted parameters in this case might be consistent with zero. Our statistical result is then cross-checked against the observations from X-ray diffraction (XRD) and Fourier Transform Infrared (FTIR) spectroscopy, showing good agreement. This result may indicate that the 10% NaOH pretreatment might not be sufficient to change the crystallinity index of the sweet sorghum bagasse.

  18. Prevalence and determinants of sufficient fruit and vegetable consumption among primary school children in Nakhon Pathom, Thailand

    PubMed Central

    Piaseu, Noppawan

    2017-01-01

    BACKGROUND/OBJECTIVES Low consumption of fruit and vegetables is frequently viewed as an important contributor to obesity risk. With increasing childhood obesity and relatively low fruit and vegetable consumption among Thai children, there is a need to identify the determinants of the intake to promote fruit and vegetable consumption effectively. SUBJECTS/METHODS This cross-sectional study was conducted at two conveniently selected primary schools in Nakhon Pathom. A total of 609 students (grades 4-6) completed questionnaires on personal and environmental factors. Adequate fruit and vegetable intakes were defined as a minimum of three servings of fruit or vegetables daily, and adequate total intake as at least six servings of fruit and vegetables daily. Data were analyzed using descriptive statistics, the chi-square test, and multiple logistic regression. RESULTS The proportion of children with sufficient fruit and/or vegetable intakes was low. Covariates of the child's personal and environmental factors showed significant associations with sufficient intakes of fruit and/or vegetables (P < 0.05). Logistic regression analyses showed that the following factors were positively related to sufficient intake of vegetables: lower grade, a positive attitude toward vegetables, and fruit availability at home; and that greater maternal education, a positive attitude of the child toward vegetables, and fruit availability at home were significantly associated with sufficient consumption of fruits and total fruit and vegetable intake. CONCLUSIONS The present study showed that personal factors, such as attitude toward vegetables, and socio-environmental factors, such as greater availability of fruits, were significantly associated with sufficient fruit and vegetable consumption. The importance of environmental and personal factors to successful nutrition highlights the importance of involving parents and schools. PMID:28386386

  19. Prevalence and determinants of sufficient fruit and vegetable consumption among primary school children in Nakhon Pathom, Thailand.

    PubMed

    Hong, Seo Ah; Piaseu, Noppawan

    2017-04-01

    Low consumption of fruit and vegetables is frequently viewed as an important contributor to obesity risk. With increasing childhood obesity and relatively low fruit and vegetable consumption among Thai children, there is a need to identify the determinants of the intake to promote fruit and vegetable consumption effectively. This cross-sectional study was conducted at two conveniently selected primary schools in Nakhon Pathom. A total of 609 students (grades 4-6) completed questionnaires on personal and environmental factors. Adequate fruit and vegetable intakes were defined as a minimum of three servings of fruit or vegetables daily, and adequate total intake as at least six servings of fruit and vegetables daily. Data were analyzed using descriptive statistics, the chi-square test, and multiple logistic regression. The proportion of children with sufficient fruit and/or vegetable intakes was low. Covariates of the child's personal and environmental factors showed significant associations with sufficient intakes of fruit and/or vegetables (P < 0.05). Logistic regression analyses showed that the following factors were positively related to sufficient intake of vegetables: lower grade, a positive attitude toward vegetables, and fruit availability at home; and that greater maternal education, a positive attitude of the child toward vegetables, and fruit availability at home were significantly associated with sufficient consumption of fruits and total fruit and vegetable intake. The present study showed that personal factors, such as attitude toward vegetables, and socio-environmental factors, such as greater availability of fruits, were significantly associated with sufficient fruit and vegetable consumption. The importance of environmental and personal factors to successful nutrition highlights the importance of involving parents and schools.

  20. Tibial Bowing and Pseudarthrosis in Neurofibromatosis Type 1

    DTIC Science & Technology

    2015-01-01

    controlling for age and sex was used. However, there were no statistically significant differences between NF1 individuals with and without tibial...Dinorah Friedmann-Morvinski (The Salk Institute) presented a different model of glioblastoma in which tumors were induced from fully differentiated...a driver of Schwann cell tumorigenesis. Induction of Wnt signaling was sufficient to induce a transformed phenotype in human Schwann cells, while

  1. The study of natural reproduction on burned forest areas

    Treesearch

    J. A. Larsen

    1928-01-01

    It is not necessary herein to quote statistics on the areas and values of timberland destroyed each year in the United States. The losses are sufficiently large to attract attention and to present problems in forest management as well as in forest research. The situation is here and every forester must meet it, be he manager or investigator. This paper is an attempt to...

  2. Microstructure-Sensitive HCF and VHCF Simulations (Preprint)

    DTIC Science & Technology

    2012-08-01

    microplasticity) on driving formation of cracks, either transgranular along slip bands or intergranular due to progressive slip impingement, as shown in Figure...cycles, such as shafts, bearings, and gears, for example, should focus on extreme value statistics of potential sites for microplastic strain...is the absence of microplasticity within grains sufficient to nucleate cracks or to drive growth of micron-scale embryonic cracks within individual

  3. Maximum likelihood estimation of signal-to-noise ratio and combiner weight

    NASA Technical Reports Server (NTRS)

    Kalson, S.; Dolinar, S. J.

    1986-01-01

    An algorithm for estimating signal to noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
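
    A rough sketch of the setting (a common moment-based approximation, not necessarily the memorandum's exact estimator): from matched-filter outputs y_k = ±m + n_k, the amplitude can be estimated from the mean absolute value and the noise power from the second moment. All parameter values below are hypothetical.

```python
# Rough sketch, not the memorandum's exact estimator: a common moment-based approximation
# of signal amplitude and noise power from matched-filter outputs of a biphase (BPSK)
# signal in additive white Gaussian noise, y_k = +/- m + n_k.
import numpy as np

rng = np.random.default_rng(2)
m_true, sigma_true, n = 1.0, 0.4, 50_000
symbols = rng.choice([-1.0, 1.0], size=n)
y = m_true * symbols + rng.normal(0.0, sigma_true, size=n)   # the sufficient statistics

m_hat = np.abs(y).mean()                     # amplitude estimate (biased at low SNR)
noise_var_hat = np.mean(y**2) - m_hat**2     # noise power from the second moment
snr_hat = m_hat**2 / noise_var_hat           # symbol SNR estimate

print(f"estimated SNR = {snr_hat:.2f}, true SNR = {m_true**2 / sigma_true**2:.2f}")
```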

  4. Sustaining food self-sufficiency of a nation: The case of Sri Lankan rice production and related water and fertilizer demands.

    PubMed

    Davis, Kyle Frankel; Gephart, Jessica A; Gunda, Thushara

    2016-04-01

    Rising human demand and climatic variability have created greater uncertainty regarding global food trade and its effects on the food security of nations. To reduce reliance on imported food, many countries have focused on increasing their domestic food production in recent years. With clear goals for the complete self-sufficiency of rice production, Sri Lanka provides an ideal case study for examining the projected growth in domestic rice supply, how this compares to future national demand, and what the associated impacts from water and fertilizer demands may be. Using national rice statistics and estimates of intensification, this study finds that improvements in rice production can feed 25.3 million Sri Lankans (compared to a projected population of 23.8 million people) by 2050. However, to achieve this growth, consumptive water use and nitrogen fertilizer application may need to increase by as much as 69% and 23%, respectively. This assessment demonstrates that targets for maintaining self-sufficiency should better incorporate avenues for improving resource use efficiency.

  5. Statistics of work performed on a forced quantum oscillator.

    PubMed

    Talkner, Peter; Burada, P Sekhar; Hänggi, Peter

    2008-07-01

    Various aspects of the statistics of work performed by an external classical force on a quantum mechanical system are elucidated for a driven harmonic oscillator. In this special case two parameters are introduced that are sufficient to completely characterize the force protocol. Explicit results for the characteristic function of work and the corresponding probability distribution are provided and discussed for three different types of initial states of the oscillator: microcanonical, canonical, and coherent states. Depending on the choice of the initial state the probability distributions of the performed work may greatly differ. This result in particular also holds true for identical force protocols. General fluctuation and work theorems holding for microcanonical and canonical initial states are confirmed.
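
    For reference, the standard two-point-measurement characteristic function of work, consistent with the framework referred to above (notation introduced here), reads:

```latex
% Standard two-point-measurement definition of the work characteristic function
% (notation introduced here for illustration): for a driving protocol H(t), 0 <= t <= tau,
% with time-evolution operator U_tau and initial state rho(0) dephased in the eigenbasis
% of H(0), denoted \bar{\rho}(0),
\[
  G(u) \;=\; \left\langle e^{\,i u W} \right\rangle
        \;=\; \operatorname{Tr}\!\left[
              U_\tau^{\dagger}\, e^{\,i u H(\tau)}\, U_\tau\,
              e^{-\,i u H(0)}\, \bar{\rho}(0)
              \right],
\]
% whose inverse Fourier transform gives the probability distribution p(W) of the performed work.
```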

  6. Statistical foundations of liquid-crystal theory

    PubMed Central

    Seguin, Brian; Fried, Eliot

    2013-01-01

    We develop a mechanical theory for systems of rod-like particles. Central to our approach is the assumption that the external power expenditure for any subsystem of rods is independent of the underlying frame of reference. This assumption is used to derive the basic balance laws for forces and torques. By considering inertial forces on par with other forces, these laws hold relative to any frame of reference, inertial or noninertial. Finally, we introduce a simple set of constitutive relations to govern the interactions between rods and find restrictions necessary and sufficient for these laws to be consistent with thermodynamics. Our framework provides a foundation for a statistical mechanical derivation of the macroscopic balance laws governing liquid crystals. PMID:23772091

  7. Six Sigma Quality Management System and Design of Risk-based Statistical Quality Control.

    PubMed

    Westgard, James O; Westgard, Sten A

    2017-03-01

    Six sigma concepts provide a quality management system (QMS) with many useful tools for managing quality in medical laboratories. This Six Sigma QMS is driven by the quality required for the intended use of a test. The most useful form for this quality requirement is the allowable total error. Calculation of a sigma-metric provides the best predictor of risk for an analytical examination process, as well as a design parameter for selecting the statistical quality control (SQC) procedure necessary to detect medically important errors. Simple point estimates of sigma at medical decision concentrations are sufficient for laboratory applications. Copyright © 2016 Elsevier Inc. All rights reserved.
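
    The sigma-metric calculation referred to above is commonly written as (allowable total error − |bias|)/CV, with all terms in percent at a medical decision concentration; the sketch below uses hypothetical values.

```python
# Illustration of the widely used sigma-metric calculation the abstract refers to:
# sigma = (allowable total error - |bias|) / CV, all expressed in percent at a
# medical decision concentration. The example values are hypothetical.
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Point estimate of process sigma for an analytical examination procedure."""
    return (tea_pct - abs(bias_pct)) / cv_pct

if __name__ == "__main__":
    sigma = sigma_metric(tea_pct=10.0, bias_pct=1.5, cv_pct=2.0)
    print(f"sigma-metric = {sigma:.1f}")   # ~4.3 here; >= 6 would indicate world-class quality
```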

  8. [How reliable is the monitoring for doping?].

    PubMed

    Hüsler, J

    1990-12-01

    The reliability of dope control, i.e., of the chemical analysis of urine samples in the accredited laboratories and the resulting decisions, is discussed using probabilistic and statistical methods. Basically, we evaluated and estimated the positive predictive value, which is the probability that a urine sample contains prohibited dope substances given a positive test decision. Since there are no statistical data and evidence for some important quantities related to the predictive value, an exact evaluation is not possible; only conservative lower bounds can be given. We found that the predictive value is at least 90% or 95% with respect to the analysis and decision based on the A sample only, and at least 99% with respect to both A and B samples. A more realistic assessment, though without sufficient statistical confidence, suggests that the true predictive value is significantly larger than these lower estimates.
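
    The quantity being bounded is the usual Bayesian positive predictive value; the sketch below computes it from sensitivity, specificity and prevalence, all of which are hypothetical placeholders rather than values from the article.

```python
# Sketch of the quantity the abstract estimates: the positive predictive value (PPV) of a
# dope test via Bayes' rule. Sensitivity, specificity, and prevalence values are hypothetical.
def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(prohibited substance present | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

if __name__ == "__main__":
    # e.g., a highly specific A-sample analysis applied to a population with 5% dope use
    print(round(positive_predictive_value(0.95, 0.999, 0.05), 3))
```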

  9. Statistical inference involving binomial and negative binomial parameters.

    PubMed

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2009-05-01

    Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.

  10. Statistics of optimal information flow in ensembles of regulatory motifs

    NASA Astrophysics Data System (ADS)

    Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan

    2018-02-01

    Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.

  11. Sampling and counting genome rearrangement scenarios

    PubMed Central

    2015-01-01

    Background Even for moderate size inputs, there are a tremendous number of optimal rearrangement scenarios, regardless of what the model is and which specific question is to be answered. Therefore giving one optimal solution might be misleading and cannot be used for statistical inference. Statistically well-founded methods are necessary to sample uniformly from the solution space, and then a small number of samples is sufficient for statistical inference. Contribution In this paper, we give a mini-review about the state of the art of sampling and counting rearrangement scenarios, focusing on the reversal, DCJ and SCJ models. Beyond that, we also give a Gibbs sampler for sampling most parsimonious labelings of evolutionary trees under the SCJ model. The method has been implemented and tested on real-life data. The software package together with example data can be downloaded from http://www.renyi.hu/~miklosi/SCJ-Gibbs/ PMID:26452124

  12. Application of the Socio-Ecological Model to predict physical activity behaviour among Nigerian University students.

    PubMed

    Essiet, Inimfon Aniema; Baharom, Anisah; Shahar, Hayati Kadir; Uzochukwu, Benjamin

    2017-01-01

    Physical activity among university students is a catalyst for habitual physical activity in adulthood. Physical activity has many health benefits besides improving academic performance. The present study assessed the predictors of physical activity among Nigerian university students using the Social Ecological Model (SEM). This cross-sectional study recruited first-year undergraduate students at the University of Uyo, Nigeria, by multistage sampling. The International Physical Activity Questionnaire (IPAQ) short version was used to assess physical activity in the study. Factors were categorised according to the Socio-Ecological Model, which consisted of the individual, social environment, physical environment and policy levels. Data were analysed using the IBM SPSS statistical software, version 22. Simple and multiple logistic regression were used to determine the predictors of sufficient physical activity. A total of 342 respondents completed the study questionnaire. The majority of the respondents (93.6%) reported sufficient physical activity at 7-day recall. Multivariate analysis revealed that respondents belonging to the Ibibio ethnic group were about four times more likely to be sufficiently active compared to those who belonged to the other ethnic groups (AOR = 3.725, 95% CI = 1.383 to 10.032). Also, participants who had a normal weight were about four times more likely to be physically active compared to those who were underweight (AOR = 4.268, 95% CI = 1.323 to 13.772). This study concluded that physical activity levels among respondents were sufficient. It is suggested that emphasis be given to implementing interventions aimed at sustaining sufficient levels of physical activity among students.
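
    The analysis pattern described (multiple logistic regression yielding adjusted odds ratios with 95% confidence intervals) can be sketched as follows; the variable names and data are synthetic, not the study's dataset.

```python
# Hedged sketch of the kind of analysis described: multiple logistic regression producing
# adjusted odds ratios with 95% CIs. Variables and data here are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 342
df = pd.DataFrame({
    "active": rng.binomial(1, 0.9, n),                          # sufficient activity (IPAQ)
    "ethnic_group": rng.choice(["Ibibio", "Other"], n),
    "bmi_category": rng.choice(["underweight", "normal", "overweight"], n),
})

model = smf.logit("active ~ C(ethnic_group) + C(bmi_category)", data=df).fit(disp=False)
odds_ratios = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
odds_ratios.columns = ["AOR", "2.5%", "97.5%"]
print(odds_ratios)
```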

  13. Properties of different selection signature statistics and a new strategy for combining them.

    PubMed

    Ma, Y; Ding, X; Qanbari, S; Weigend, S; Zhang, Q; Simianer, H

    2015-11-01

    Identifying signatures of recent or ongoing selection is of high relevance in livestock population genomics. From a statistical perspective, determining a proper testing procedure and combining various test statistics is challenging. On the basis of extensive simulations in this study, we discuss the statistical properties of eight different established selection signature statistics. In the considered scenario, we show that a reasonable power to detect selection signatures is achieved with high marker density (>1 SNP/kb) as obtained from sequencing, while rather small sample sizes (~15 diploid individuals) appear to be sufficient. Most selection signature statistics, such as the composite likelihood ratio and cross-population extended haplotype homozygosity, have the highest power when fixation of the selected allele is reached, while the integrated haplotype score has the highest power when selection is ongoing. We suggest a novel strategy, called de-correlated composite of multiple signals (DCMS), to combine different statistics for detecting selection signatures while accounting for the correlation between the different selection signature statistics. When examined with simulated data, DCMS consistently has a higher power than most of the single statistics and shows a reliable positional resolution. We illustrate the new statistic on the established selective sweep around the lactase gene in human HapMap data, providing further evidence of the reliability of this new statistic. Then, we apply it to scan selection signatures in two chicken samples with diverse skin color. Our analysis suggests that a set of well-known genes such as BCO2, MC1R, ASIP and TYR were involved in the divergent selection for this trait.

  14. Information-Based Approach to Unsupervised Machine Learning

    DTIC Science & Technology

    2013-06-19

    Leibler, R. A. (1951). On information and sufficiency. Annals of Mathematical Statistics, 22, 79–86. Minka, T. P. (2000). Old and new matrix algebra use ...and Arabie, P. Comparing partitions. Journal of Classification, 2(1):193–218, 1985. Kullback, S. and Leibler, R. A. On information and sufficiency...the test input density to a linear combination of class-wise input distributions under the Kullback-Leibler (KL) divergence (Kullback

  15. Orbit-Attitude Changes of Objects in Near Earth Space Induced by Natural Charging

    DTIC Science & Technology

    2017-05-02

    depends upon Earth’s magnetosphere. Typically, magnetosphere models can be grouped under two classes: statistical and physics-based. The Physics ...models were primarily physics-based due to unavailability of sufficient space-data, but over the last three decades, with the availability of huge...Attitude Determination and Control,” Astrophysics and Space Science Library, Vol. 73, D. Reidel Publishing Company, London, 1978 [17] Fairfield

  16. Ignoring the Innocent: Non-combatants in Urban Operations and in Military Models and Simulations

    DTIC Science & Technology

    2006-01-01

    such a model yields is a sufficiency theorem, a single run does not provide any information on the robustness of such theorems. That is, given that...often formally resolvable via inspection, simple differentiation, the implicit function theorem, comparative statistics, and so on. The only way to... Pythagoras, and Bactowars. For each, Grieger discusses model parameters, data collection, terrain, and other features. Grieger also discusses

  17. Box-Counting Dimension Revisited: Presenting an Efficient Method of Minimizing Quantization Error and an Assessment of the Self-Similarity of Structural Root Systems

    PubMed Central

    Bouda, Martin; Caplan, Joshua S.; Saiers, James E.

    2016-01-01

    Fractal dimension (FD), estimated by box-counting, is a metric used to characterize plant anatomical complexity or space-filling characteristic for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantization error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterize the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitized in 3D and subjected to box-counts. A pattern search algorithm was used to minimize QE by optimizing grid placement and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE, due to both grid position and orientation, was a significant source of error in FD estimates, but pattern search provided an efficient means of minimizing it. Pattern search had higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitizations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did not characterize the scaling of our digitizations well: the scaling exponent was a function of scale. Our findings serve as a caution against applying FD under the assumption of statistical self-similarity without rigorously evaluating it first. PMID:26925073
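
    A minimal box-counting sketch (using simple random grid offsets rather than the authors' pattern-search optimization) illustrates the procedure: count occupied boxes over several scales, keep the minimum count per scale to reduce quantization error, and take FD as the negative slope of log(count) against log(box size). The point cloud below is a toy placeholder, not a digitized root system.

```python
# Minimal box-counting sketch, not the authors' pattern-search implementation.
import numpy as np

def box_count(points: np.ndarray, eps: float, offsets: int = 16, seed: int = 0) -> int:
    """Minimum number of eps-sized boxes covering `points` over random grid offsets."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(offsets):
        shift = rng.uniform(0.0, eps, size=points.shape[1])
        occupied = np.unique(np.floor((points + shift) / eps), axis=0)
        best = min(best, len(occupied))
    return int(best)

def fractal_dimension(points: np.ndarray, scales: np.ndarray) -> float:
    """Negative slope of log(box count) versus log(box size)."""
    counts = np.array([box_count(points, eps) for eps in scales])
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return -slope

if __name__ == "__main__":
    pts = np.random.default_rng(4).uniform(size=(20_000, 3))   # toy 3D point cloud
    scales = np.array([0.5, 0.25, 0.125, 0.0625, 0.03125])
    print(f"estimated box-counting dimension: {fractal_dimension(pts, scales):.2f}")  # close to 3 for a filled cube
```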

  18. Alignment-free sequence comparison (II): theoretical power of comparison statistics.

    PubMed

    Wan, Lin; Reinert, Gesine; Sun, Fengzhu; Waterman, Michael S

    2010-11-01

    Rapid methods for alignment-free sequence comparison make large-scale comparisons between sequences increasingly feasible. Here we study the power of the statistic D2, which counts the number of matching k-tuples between two sequences, as well as D2*, which uses centralized counts, and D2S, which is a self-standardized version, both from a theoretical viewpoint and numerically, providing an easy-to-use program. The power is assessed under two alternative hidden Markov models; the first one assumes that the two sequences share a common motif, whereas the second model is a pattern transfer model; the null model is that the two sequences are composed of independent and identically distributed letters and they are independent. Under the first alternative model, the means of the tuple counts in the individual sequences change, whereas under the second alternative model, the marginal means are the same as under the null model. Using the limit distributions of the count statistics under the null and the alternative models, we find that generally, asymptotically D2S has the largest power, followed by D2*, whereas the power of D2 can even be zero in some cases. In contrast, even for sequences of length 140,000 bp, in simulations D2* generally has the largest power. Under the first alternative model of a shared motif, the power of D2* approaches 100% when sufficiently many motifs are shared, and we recommend the use of D2* for such practical applications. Under the second alternative model of pattern transfer, the power for all three count statistics does not increase with sequence length when the sequence is sufficiently long, and hence none of the three statistics under consideration can be recommended in such a situation. We illustrate the approach on 323 transcription factor binding motifs with length at most 10 from JASPAR CORE (October 12, 2009 version), verifying that D2* is generally more powerful than D2. The program to calculate the power of D2, D2* and D2S can be downloaded from http://meta.cmb.usc.edu/d2. Supplementary Material is available at www.liebertonline.com/cmb.
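
    A small sketch of two of the statistics discussed, written from scratch for illustration: D2 as the raw matching k-tuple count and D2S as the self-standardized version, with counts centred under an i.i.d. letter model estimated from the sequences themselves (this pooling choice is an assumption made here for simplicity).

```python
# Hedged sketch of the count statistics discussed: D2 (raw matching k-tuple count) and the
# self-standardized D2S, computed from k-mer count vectors of two sequences.
from collections import Counter
from math import sqrt

def kmer_counts(seq: str, k: int) -> Counter:
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2_and_d2s(seq_x: str, seq_y: str, k: int = 4):
    x, y = kmer_counts(seq_x, k), kmer_counts(seq_y, k)
    nx, ny = len(seq_x) - k + 1, len(seq_y) - k + 1

    # Null model: letters i.i.d. with frequencies pooled from both sequences (assumption).
    letters = Counter(seq_x) + Counter(seq_y)
    total = sum(letters.values())
    freq = {a: c / total for a, c in letters.items()}

    d2, d2s = 0.0, 0.0
    for w in set(x) | set(y):
        pw = 1.0
        for a in w:
            pw *= freq[a]                            # word probability under the null model
        xt, yt = x[w] - nx * pw, y[w] - ny * pw      # centred counts
        d2 += x[w] * y[w]
        denom = sqrt(xt * xt + yt * yt)
        if denom > 0:
            d2s += xt * yt / denom
    return d2, d2s

if __name__ == "__main__":
    print(d2_and_d2s("ACGTACGTACGGTACCGTA", "ACGTTGCAACGTACGTAGC", k=3))
```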

  19. A survey and evaluations of histogram-based statistics in alignment-free sequence comparison.

    PubMed

    Luczak, Brian B; James, Benjamin T; Girgis, Hani Z

    2017-12-06

    Since the dawn of the bioinformatics field, sequence alignment scores have been the main method for comparing sequences. However, alignment algorithms are quadratic, requiring long execution time. As alternatives, scientists have developed tens of alignment-free statistics for measuring the similarity between two sequences. We surveyed tens of alignment-free k-mer statistics. Additionally, we evaluated 33 statistics and multiplicative combinations between the statistics and/or their squares. These statistics are calculated on two k-mer histograms representing two sequences. Our evaluations using global alignment scores revealed that the majority of the statistics are sensitive and capable of finding similar sequences to a query sequence. Therefore, any of these statistics can filter out dissimilar sequences quickly. Further, we observed that multiplicative combinations of the statistics are highly correlated with the identity score. Furthermore, combinations involving sequence length difference or Earth Mover's distance, which takes the length difference into account, are always among the highest correlated paired statistics with identity scores. Similarly, paired statistics including length difference or Earth Mover's distance are among the best performers in finding the K-closest sequences. Interestingly, similar performance can be obtained using histograms of shorter words, resulting in reducing the memory requirement and increasing the speed remarkably. Moreover, we found that simple single statistics are sufficient for processing next-generation sequencing reads and for applications relying on local alignment. Finally, we measured the time requirement of each statistic. The survey and the evaluations will help scientists with identifying efficient alternatives to the costly alignment algorithm, saving thousands of computational hours. The source code of the benchmarking tool is available as Supplementary Materials. © The Author 2017. Published by Oxford University Press.
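
    As a hedged illustration of the histogram-based approach and the "paired statistic" idea, the sketch below builds k-mer histograms and multiplicatively combines a simple histogram distance with the sequence length difference; the particular distance and pairing are ad hoc choices, not the specific statistics benchmarked in the survey.

```python
# Illustrative sketch (not the benchmarked tool): build k-mer histograms for two
# sequences and pair a histogram distance with the length difference.
from collections import Counter

def histogram(seq, k=4):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def manhattan(h1, h2):
    keys = set(h1) | set(h2)
    return sum(abs(h1[w] - h2[w]) for w in keys)

def paired_statistic(seq_a, seq_b, k=4):
    h1, h2 = histogram(seq_a, k), histogram(seq_b, k)
    length_diff = abs(len(seq_a) - len(seq_b))
    # Multiplicative combination of a histogram distance with length difference
    # (smaller values -> more similar sequences under this ad hoc pairing).
    return manhattan(h1, h2) * (1 + length_diff)

print(paired_statistic("ACGTACGTAC", "ACGTACGTACGT"))
```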

  20. Rescaled earthquake recurrence time statistics: application to microrepeaters

    NASA Astrophysics Data System (ADS)

    Goltz, Christian; Turcotte, Donald L.; Abaimov, Sergey G.; Nadeau, Robert M.; Uchida, Naoki; Matsuzawa, Toru

    2009-01-01

    Slip on major faults primarily occurs during `characteristic' earthquakes. The recurrence statistics of characteristic earthquakes play an important role in seismic hazard assessment. A major problem in determining applicable statistics is the short sequences of characteristic earthquakes that are available worldwide. In this paper, we introduce a rescaling technique in which sequences can be superimposed to establish larger numbers of data points. We consider the Weibull and log-normal distributions, in both cases we rescale the data using means and standard deviations. We test our approach utilizing sequences of microrepeaters, micro-earthquakes which recur in the same location on a fault. It seems plausible to regard these earthquakes as a miniature version of the classic characteristic earthquakes. Microrepeaters are much more frequent than major earthquakes, leading to longer sequences for analysis. In this paper, we present results for the analysis of recurrence times for several microrepeater sequences from Parkfield, CA as well as NE Japan. We find that, once the respective sequence can be considered to be of sufficient stationarity, the statistics can be well fitted by either a Weibull or a log-normal distribution. We clearly demonstrate this fact by our technique of rescaled combination. We conclude that the recurrence statistics of the microrepeater sequences we consider are similar to the recurrence statistics of characteristic earthquakes on major faults.
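
    A minimal sketch of the rescaling-and-pooling idea follows, under assumed details (each sequence is rescaled here by its own mean, whereas the paper rescales using means and standard deviations), with Weibull and log-normal fits to the pooled sample.

```python
# Sketch of the rescaling-and-pooling idea (details assumed): normalize each
# recurrence-time sequence, pool the rescaled values into one longer sample,
# and fit Weibull and log-normal models to the pooled data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Stand-ins for recurrence-time sequences of individual microrepeaters.
sequences = [rng.weibull(1.5, size=n) * s for n, s in [(30, 2.0), (45, 0.7), (25, 5.0)]]

pooled = np.concatenate([t / t.mean() for t in sequences])   # rescale by the mean

wb = stats.weibull_min.fit(pooled, floc=0)   # (shape, loc, scale)
ln = stats.lognorm.fit(pooled, floc=0)       # (sigma, loc, scale)
print("Weibull fit:", np.round(wb, 3))
print("Log-normal fit:", np.round(ln, 3))
# Goodness of fit via Kolmogorov-Smirnov against each fitted distribution.
print(stats.kstest(pooled, "weibull_min", args=wb))
print(stats.kstest(pooled, "lognorm", args=ln))
```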

  1. Spatial scan statistics for detection of multiple clusters with arbitrary shapes.

    PubMed

    Lin, Pei-Sheng; Kung, Yi-Hung; Clayton, Murray

    2016-12-01

    In applying scan statistics for public health research, it would be valuable to develop a detection method for multiple clusters that accommodates spatial correlation and covariate effects in an integrated model. In this article, we connect the concepts of the likelihood ratio (LR) scan statistic and the quasi-likelihood (QL) scan statistic to provide a series of detection procedures sufficiently flexible to apply to clusters of arbitrary shape. First, we use an independent scan model for detection of clusters and then a variogram tool to examine the existence of spatial correlation and regional variation based on residuals of the independent scan model. When the estimate of regional variation is significantly different from zero, a mixed QL estimating equation is developed to estimate coefficients of geographic clusters and covariates. We use the Benjamini-Hochberg procedure (1995) to find a threshold for p-values to address the multiple testing problem. A quasi-deviance criterion is used to regroup the estimated clusters to find geographic clusters with arbitrary shapes. We conduct simulations to compare the performance of the proposed method with other scan statistics. For illustration, the method is applied to enterovirus data from Taiwan. © 2016, The International Biometric Society.

  2. Using the bootstrap to establish statistical significance for relative validity comparisons among patient-reported outcome measures

    PubMed Central

    2013-01-01

    Background Relative validity (RV), a ratio of ANOVA F-statistics, is often used to compare the validity of patient-reported outcome (PRO) measures. We used the bootstrap to establish the statistical significance of the RV and to identify key factors affecting its significance. Methods Based on responses from 453 chronic kidney disease (CKD) patients to 16 CKD-specific and generic PRO measures, RVs were computed to determine how well each measure discriminated across clinically-defined groups of patients compared to the most discriminating (reference) measure. Statistical significance of RV was quantified by the 95% bootstrap confidence interval. Simulations examined the effects of sample size, denominator F-statistic, correlation between comparator and reference measures, and number of bootstrap replicates. Results The statistical significance of the RV increased as the magnitude of denominator F-statistic increased or as the correlation between comparator and reference measures increased. A denominator F-statistic of 57 conveyed sufficient power (80%) to detect an RV of 0.6 for two measures correlated at r = 0.7. Larger denominator F-statistics or higher correlations provided greater power. Larger sample size with a fixed denominator F-statistic or more bootstrap replicates (beyond 500) had minimal impact. Conclusions The bootstrap is valuable for establishing the statistical significance of RV estimates. A reasonably large denominator F-statistic (F > 57) is required for adequate power when using the RV to compare the validity of measures with small or moderate correlations (r < 0.7). Substantially greater power can be achieved when comparing measures of a very high correlation (r > 0.9). PMID:23721463
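
    The bootstrap procedure for the RV can be sketched as follows (toy data and group structure; variable names are illustrative, not taken from the study).

```python
# Minimal sketch of a bootstrap confidence interval for the relative validity
# (ratio of ANOVA F-statistics); data and group sizes are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 300
group = rng.integers(0, 3, size=n)                 # clinically defined severity groups
reference = group + rng.normal(0, 1.0, size=n)     # most discriminating (reference) measure
comparator = group + rng.normal(0, 1.5, size=n)    # less discriminating comparator measure

def f_stat(y, g):
    return stats.f_oneway(*(y[g == k] for k in np.unique(g))).statistic

rv_observed = f_stat(comparator, group) / f_stat(reference, group)

boot = []
for _ in range(2000):                              # bootstrap replicates
    idx = rng.integers(0, n, size=n)               # resample patients with replacement
    boot.append(f_stat(comparator[idx], group[idx]) / f_stat(reference[idx], group[idx]))

ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"RV = {rv_observed:.2f}, 95% bootstrap CI = ({ci_low:.2f}, {ci_high:.2f})")
```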

  3. SDGs and Geospatial Frameworks: Data Integration in the United States

    NASA Astrophysics Data System (ADS)

    Trainor, T.

    2016-12-01

    Responding to the need to monitor a nation's progress towards meeting the Sustainable Development Goals (SDG) outlined in the 2030 U.N. Agenda requires the integration of earth observations with statistical information. The urban agenda proposed in SDG 11 challenges the global community to find a geospatial approach to monitor and measure inclusive, safe, resilient, and sustainable cities and communities. Target 11.7 identifies public safety, accessibility to green and public spaces, and the most vulnerable populations (i.e., women and children, older persons, and persons with disabilities) as the most important priorities of this goal. A challenge for both national statistical organizations and earth observation agencies in addressing SDG 11 is the requirement for detailed statistics at a sufficient spatial resolution to provide the basis for meaningful analysis of the urban population and city environments. Using an example for the city of Pittsburgh, this presentation proposes data and methods to illustrate how earth science and statistical data can be integrated to respond to Target 11.7. Finally, a preliminary series of data initiatives are proposed for extending this method to other global cities.

  4. Automatic Classification of Medical Text: The Influence of Publication Form

    PubMed Central

    Cole, William G.; Michael, Patricia A.; Stewart, James G.; Blois, Marsden S.

    1988-01-01

    Previous research has shown that within the domain of medical journal abstracts the statistical distribution of words is neither random nor uniform, but is highly characteristic. Many words are used mainly or solely by one medical specialty or when writing about one particular level of description. Due to this regularity of usage, automatic classification within journal abstracts has proved quite successful. The present research asks two further questions. It investigates whether this statistical regularity and automatic classification success can also be achieved in medical textbook chapters. It then goes on to see whether the statistical distribution found in textbooks is sufficiently similar to that found in abstracts to permit accurate classification of abstracts based solely on previous knowledge of textbooks. 14 textbook chapters and 45 MEDLINE abstracts were submitted to an automatic classification program that had been trained only on chapters drawn from a standard textbook series. Statistical analysis of the properties of abstracts vs. chapters revealed important differences in word use. Automatic classification performance was good for chapters, but poor for abstracts.

  5. Using protein-protein interactions for refining gene networks estimated from microarray data by Bayesian networks.

    PubMed

    Nariai, N; Kim, S; Imoto, S; Miyano, S

    2004-01-01

    We propose a statistical method to estimate gene networks from DNA microarray data and protein-protein interactions. Because physical interactions between proteins or multiprotein complexes are likely to regulate biological processes, using only mRNA expression data is not sufficient for estimating a gene network accurately. Our method adds knowledge about protein-protein interactions to the estimation method of gene networks under a Bayesian statistical framework. In the estimated gene network, a protein complex is modeled as a virtual node based on principal component analysis. We show the effectiveness of the proposed method through the analysis of Saccharomyces cerevisiae cell cycle data. The proposed method improves the accuracy of the estimated gene networks, and successfully identifies some biological facts.

  6. An Integrative Account of Constraints on Cross-Situational Learning

    PubMed Central

    Yurovsky, Daniel; Frank, Michael C.

    2015-01-01

    Word-object co-occurrence statistics are a powerful information source for vocabulary learning, but there is considerable debate about how learners actually use them. While some theories hold that learners accumulate graded, statistical evidence about multiple referents for each word, others suggest that they track only a single candidate referent. In two large-scale experiments, we show that neither account is sufficient: Cross-situational learning involves elements of both. Further, the empirical data are captured by a computational model that formalizes how memory and attention interact with co-occurrence tracking. Together, the data and model unify opposing positions in a complex debate and underscore the value of understanding the interaction between computational and algorithmic levels of explanation. PMID:26302052

  7. Statistical foundations of liquid-crystal theory: I. Discrete systems of rod-like molecules.

    PubMed

    Seguin, Brian; Fried, Eliot

    2012-12-01

    We develop a mechanical theory for systems of rod-like particles. Central to our approach is the assumption that the external power expenditure for any subsystem of rods is independent of the underlying frame of reference. This assumption is used to derive the basic balance laws for forces and torques. By considering inertial forces on par with other forces, these laws hold relative to any frame of reference, inertial or noninertial. Finally, we introduce a simple set of constitutive relations to govern the interactions between rods and find restrictions necessary and sufficient for these laws to be consistent with thermodynamics. Our framework provides a foundation for a statistical mechanical derivation of the macroscopic balance laws governing liquid crystals.

  8. Prediction of biomechanical parameters of the proximal femur using statistical appearance models and support vector regression.

    PubMed

    Fritscher, Karl; Schuler, Benedikt; Link, Thomas; Eckstein, Felix; Suhm, Norbert; Hänni, Markus; Hengg, Clemens; Schubert, Rainer

    2008-01-01

    Fractures of the proximal femur are one of the principal causes of mortality among elderly persons. Traditional approaches for determining femoral fracture risk rely on measuring bone mineral density (BMD). However, BMD alone is not sufficient to predict bone failure load for an individual patient, and additional parameters have to be determined for this purpose. In this work, an approach that uses statistical appearance models to identify relevant regions and parameters for the prediction of biomechanical properties of the proximal femur is presented. Using Support Vector Regression, the proposed model-based approach is capable of predicting two different biomechanical parameters accurately and fully automatically in two different testing scenarios.

  9. Quantum gas-liquid condensation in an attractive Bose gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koh, Shun-ichiro

    Gas-liquid condensation (GLC) in an attractive Bose gas is studied on the basis of statistical mechanics. Using some results in combinatorial mathematics, the following are derived. (1) With decreasing temperature, the Bose-statistical coherence grows in the many-body wave function, which gives rise to the divergence of the grand partition function prior to Bose-Einstein condensation. It is a quantum-mechanical analogue to the GLC in a classical gas (quantum GLC). (2) This GLC is triggered by the bosons with zero momentum. Compared with the classical GLC, an incomparably weaker attractive force creates it. For the system showing the quantum GLC, we discuss a cold helium 4 gas at sufficiently low pressure.

  10. Mathematics of Sensing, Exploitation, and Execution (MSEE) Hierarchical Representations for the Evaluation of Sensed Data

    DTIC Science & Technology

    2016-06-01

    theories of the mammalian visual system, and exploiting descriptive text that may accompany a still image for improved inference. The focus of the Brown team was on single images. Keywords: computer vision, semantic description, street scenes, belief propagation, generative models, nonlinear filtering, sufficient statistics

  11. Alternative Matching Scores to Control Type I Error of the Mantel-Haenszel Procedure for DIF in Dichotomously Scored Items Conforming to 3PL IRT and Nonparametric 4PBCB Models

    ERIC Educational Resources Information Center

    Monahan, Patrick O.; Ankenmann, Robert D.

    2010-01-01

    When the matching score is either less than perfectly reliable or not a sufficient statistic for determining latent proficiency in data conforming to item response theory (IRT) models, Type I error (TIE) inflation may occur for the Mantel-Haenszel (MH) procedure or any differential item functioning (DIF) procedure that matches on summed-item…

  12. Unsupervised Topic Discovery by Anomaly Detection

    DTIC Science & Technology

    2013-09-01

    S. Kullback and R. A. Leibler, "On information and sufficiency," Annals of Mathematical Statistics, vol. 22, no. 1, pp. 79–86, 1951. [14] S. Basu, A...read known publicly. There is a strong interest in the analysis of these opinions and comments as they provide useful information about the sentiments...them as topics. The difficulty in this approach is finding a good set of keywords that accurately represents the documents. The method used to

  13. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…

  14. Wind shear measuring on board an airliner

    NASA Technical Reports Server (NTRS)

    Krauspe, P.

    1984-01-01

    A measurement technique which continuously determines the wind vector on board an airliner during takeoff and landing is introduced. Its implementation is intended to deliver sufficient statistical background concerning low frequency wind changes in the atmospheric boundary layer and extended knowledge about deterministic wind shear modeling. The wind measurement scheme is described and the adaptation of apparatus onboard an A300 airbus is shown. Preliminary measurements made during level flight demonstrate the validity of the method.

  15. Is Inferior Alveolar Nerve Block Sufficient for Routine Dental Treatment in 4- to 6-year-old Children?

    PubMed

    Pourkazemi, Maryam; Erfanparast, Leila; Sheykhgermchi, Sanaz; Ghanizadeh, Milad

    2017-01-01

    Pain control is one of the most important aspects of behavior management in children. The most common way to achieve pain control is by using local anesthetics (LA). Many studies describe that the buccal nerve innervates the buccal gingiva and mucosa of the mandible for a variable extent from the vicinity of the lower third molar to the lower canine. Regarding the importance of appropriate and complete LA in child-behavior control, in this study, we examined the frequency of buccal gingiva anesthesia of the primary mandibular molars and canine after inferior alveolar nerve block injection in 4- to 6-year-old children. In this descriptive cross-sectional study, 220 4- to 6-year-old children were randomly selected and entered into the study. The inferior alveolar nerve block was administered with the same method and standards for all children, and after ensuring the success of the block injection, anesthesia of the buccal mucosa of the primary molars and canine was examined by stick test and the child's reaction using the sound, eye, motor (SEM) scale. The data were analyzed using descriptive statistics and the Statistical Package for the Social Sciences (SPSS) version 21. The area with the highest rate of non-anesthesia was the distobuccal gingiva of the second primary molars, while the lowest rate was observed in the gingiva of the primary canine. According to this study, in 15 to 30% of cases, after inferior alveolar nerve block injection, the primary mandibular molars' buccal mucosa is not anesthetized. How to cite this article: Pourkazemi M, Erfanparast L, Sheykhgermchi S, Ghanizadeh M. Is Inferior Alveolar Nerve Block Sufficient for Routine Dental Treatment in 4- to 6-year-old Children? Int J Clin Pediatr Dent 2017;10(4):369-372.

  16. Therapeutic whole-body hypothermia reduces mortality in severe traumatic brain injury if the cooling index is sufficiently high: meta-analyses of the effect of single cooling parameters and their integrated measure.

    PubMed

    Olah, Emoke; Poto, Laszlo; Hegyi, Peter; Szabo, Imre; Hartmann, Petra; Solymar, Margit; Petervari, Erika; Balasko, Marta; Habon, Tamas; Rumbus, Zoltan; Tenk, Judit; Rostas, Ildiko; Weinberg, Jordan; Romanovsky, Andrej A; Garami, Andras

    2018-04-21

    Therapeutic hypothermia has been investigated repeatedly as a tool to improve the outcome of severe traumatic brain injury (TBI), but previous clinical trials and meta-analyses found contradictory results. We aimed to determine the effectiveness of therapeutic whole-body hypothermia on the mortality of adult patients with severe TBI by using a novel approach of meta-analysis. We searched the PubMed, EMBASE, and Cochrane Library databases from inception to February 2017. The identified human studies were evaluated regarding statistical, clinical, and methodological designs to ensure inter-study homogeneity. We extracted data on TBI severity, body temperature, mortality, and cooling parameters; then we calculated the cooling index, an integrated measure of therapeutic hypothermia. A forest plot of all identified studies showed no difference in the outcome of TBI between cooled and non-cooled patients, but inter-study heterogeneity was high. In contrast, a meta-analysis of RCTs that were homogeneous with regard to statistical and clinical design and that precisely reported the cooling protocol showed a decreased odds ratio for mortality with therapeutic hypothermia compared to no cooling. As independent factors, milder and longer cooling, and rewarming at < 0.25°C/h were associated with better outcome. Therapeutic hypothermia was beneficial only if the cooling index (a measure combining the cooling parameters) was sufficiently high. We conclude that high methodological and statistical inter-study heterogeneity could underlie the contradictory results obtained in previous studies. By analyzing methodologically homogenous studies, we show that cooling improves the outcome of severe TBI and this beneficial effect depends on certain cooling parameters and on their integrated measure, the cooling index.

  17. Comparative morphometric analysis of 5 interpositional arterial autograft options for adult living donor liver transplantation.

    PubMed

    Imakuma, E S; Bordini, A L; Millan, L S; Massarollo, P C B; Caldini, E T E G

    2014-01-01

    In living donor liver transplantation, the right-sided graft presents thin and short vessels, bringing forward a more difficult anastomosis. In these cases, an interpositional arterial autograft can be used to favor the performance of the arterial anastomosis, making the procedure easier and avoiding surgical complications. We compared the inferior mesenteric artery (IMA), the splenic artery (SA), the inferior epigastric artery (IEA), the descending branch of the lateral circumflex femoral artery (LCFA), and the proper hepatic artery (PHA) as options for interpositional autograft in living donor liver transplantation. Segments of at least 3 cm of all 5 arteries were harvested from 16 fresh adult cadavers of both sexes through standardized dissection. The analyzed measures were proximal and distal diameter and length. The proximal diameter of the right hepatic artery (RHA) and the distal diameters of the SA, IMA, IEA, and LCFA were compared to the distal diameter of the RHA. The proximal and distal diameters of the SA, IEA, and LCFA were compared to study the caliber gain of each artery. All arteries except the IMA showed statistically significant differences in relation to the RHA in terms of diameter. Regarding caliber gain, the arteries demonstrated statistically significant differences. All the harvested arteries except the PHA were at least 3 cm in length. The IMA demonstrated the best compatibility with the RHA in terms of diameter and showed sufficient length to be employed as an interpositional graft. The PHA, the SA, the IEA, and the LCFA presented significantly different diameters when compared to the RHA. Among these vessels, only the PHA did not show sufficient mean length. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Statistical analyses of the relative risk.

    PubMed Central

    Gart, J J

    1979-01-01

    Let P1 be the probability of a disease in one population and P2 be the probability of a disease in a second population. The ratio of these quantities, R = P1/P2, is termed the relative risk. We consider first the analyses of the relative risk from retrospective studies. The relation between the relative risk and the odds ratio (or cross-product ratio) is developed. The odds ratio can be considered a parameter of an exponential model possessing sufficient statistics. This permits the development of exact significance tests and confidence intervals in the conditional space. Unconditional tests and intervals are also considered briefly. The consequences of misclassification errors and ignoring matching or stratifying are also considered. The various methods are extended to combination of results over the strata. Examples of case-control studies testing the association between HL-A frequencies and cancer illustrate the techniques. The parallel analyses of prospective studies are given. If P1 and P2 are small with large sample sizes, the appropriate model is a Poisson distribution. This yields an exponential model with sufficient statistics. Exact conditional tests and confidence intervals can then be developed. Here we consider the case where two populations are compared adjusting for sex differences as well as for the strata (or covariate) differences such as age. The methods are applied to two examples: (1) testing in the two sexes the ratio of relative risks of skin cancer in people living in different latitudes, and (2) testing over time the ratio of the relative risks of cancer in two cities, one of which fluoridated its drinking water and one which did not. PMID:540589
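
    For the retrospective case, the odds-ratio calculation and an exact test can be sketched as below (invented counts; Fisher's exact test and Woolf's approximate confidence interval stand in for the conditional procedures discussed in the paper).

```python
# Sketch of a basic case-control analysis: the sample odds ratio as an estimate
# of relative risk for a rare disease, with Fisher's exact test. Counts are
# made up for illustration.
import numpy as np
from scipy import stats

#                 exposed  unexposed
table = np.array([[30,       20],     # cases
                  [45,      105]])    # controls

odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
_, p_value = stats.fisher_exact(table)

# Approximate 95% CI on the log odds ratio (Woolf's method).
se_log_or = np.sqrt((1.0 / table).sum())
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"odds ratio = {odds_ratio:.2f}, Fisher exact p = {p_value:.4f}")
print(f"approx. 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```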

  19. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.

  20. A robust signalling system for land mobile satellite services

    NASA Technical Reports Server (NTRS)

    Irish, Dale; Shmith, Gary; Hart, Nick; Wines, Marie

    1989-01-01

    Presented here is a signalling system optimized to ensure expedient call set-up for satellite telephony services in a land mobile environment. In a land mobile environment, the satellite to mobile link is subject to impairments from multipath and shadowing phenomena, which result in signal amplitude and phase variations. Multipath, caused by signal scattering and reflections, produces signal variations that can be compensated for by providing sufficient link margin. Direct signal attenuation caused by shadowing due to buildings and vegetation may result in attenuation values in excess of 10 dB and commonly up to 20 dB. It is not practical to provide a link with sufficient margin to enable communication when the signal is blocked. When a moving vehicle passes these obstacles, the link will experience rapid changes in signal strength due to shadowing. Using statistical models of attenuation as a function of distance travelled, a communication strategy has been defined for the land mobile environment.

  1. Direct measurement of the biphoton Wigner function through two-photon interference

    PubMed Central

    Douce, T.; Eckstein, A.; Walborn, S. P.; Khoury, A. Z.; Ducci, S.; Keller, A.; Coudreau, T.; Milman, P.

    2013-01-01

    The Hong-Ou-Mandel (HOM) experiment was a benchmark in quantum optics, evidencing the non–classical nature of photon pairs, later generalized to quantum systems with either bosonic or fermionic statistics. We show that a simple modification in the well-known and widely used HOM experiment provides the direct measurement of the Wigner function. We apply our results to one of the most reliable quantum systems, consisting of biphotons generated by parametric down conversion. A consequence of our results is that a negative value of the Wigner function is a sufficient condition for non-gaussian entanglement between two photons. In the general case, the Wigner function provides all the required information to infer entanglement using well known necessary and sufficient criteria. The present work offers a new vision of the HOM experiment that further develops its possibilities to realize fundamental tests of quantum mechanics using simple optical set-ups. PMID:24346262

  2. AOP: An R Package For Sufficient Causal Analysis in Pathway ...

    EPA Pesticide Factsheets

    Summary: How can I quickly find the key events in a pathway that I need to monitor to predict that a/an beneficial/adverse event/outcome will occur? This is a key question when using signaling pathways for drug/chemical screening in pharmacology, toxicology and risk assessment. By identifying these sufficient causal key events, we have fewer events to monitor for a pathway, thereby decreasing assay costs and time, while maximizing the value of the information. I have developed the "aop" package which uses backdoor analysis of causal networks to identify these minimal sets of key events that are sufficient for making causal predictions. Availability and Implementation: The source and binary are available online through the Bioconductor project (http://www.bioconductor.org/) as an R package titled "aop". The R/Bioconductor package runs within the R statistical environment. The package has functions that can take pathways (as directed graphs) formatted as a Cytoscape JSON file as input, or pathways can be represented as directed graphs using the R/Bioconductor "graph" package. The "aop" package has functions that can perform backdoor analysis to identify the minimal set of key events for making causal predictions. Contact: burgoon.lyle@epa.gov This paper describes an R/Bioconductor package that was developed to facilitate the identification of key events within an AOP that are the minimal set of sufficient key events that need to be tested/monitored

  3. Impact of sequencing depth in ChIP-seq experiments

    PubMed Central

    Jung, Youngsook L.; Luquette, Lovelace J.; Ho, Joshua W.K.; Ferrari, Francesco; Tolstorukov, Michael; Minoda, Aki; Issner, Robbyn; Epstein, Charles B.; Karpen, Gary H.; Kuroda, Mitzi I.; Park, Peter J.

    2014-01-01

    In a chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) experiment, an important consideration in experimental design is the minimum number of sequenced reads required to obtain statistically significant results. We present an extensive evaluation of the impact of sequencing depth on identification of enriched regions for key histone modifications (H3K4me3, H3K36me3, H3K27me3 and H3K9me2/me3) using deep-sequenced datasets in human and fly. We propose to define sufficient sequencing depth as the number of reads at which detected enrichment regions increase <1% for an additional million reads. Although the required depth depends on the nature of the mark and the state of the cell in each experiment, we observe that sufficient depth is often reached at <20 million reads for fly. For human, there are no clear saturation points for the examined datasets, but our analysis suggests 40–50 million reads as a practical minimum for most marks. We also devise a mathematical model to estimate the sufficient depth and total genomic coverage of a mark. Lastly, we find that the five algorithms tested do not agree well for broad enrichment profiles, especially at lower depths. Our findings suggest that sufficient sequencing depth and an appropriate peak-calling algorithm are essential for ensuring robustness of conclusions derived from ChIP-seq data. PMID:24598259
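
    The proposed saturation rule is easy to state in code. The sketch below (with invented depth/region counts and peak calling abstracted away) returns the first sampled depth at which detected regions grow by less than 1% per additional million reads.

```python
# Sketch of the saturation criterion described above: sufficient depth is the
# point where an additional million reads increases the number of detected
# enriched regions by less than 1%. The (depth, n_regions) pairs are invented.
def sufficient_depth(depths_millions, n_regions, threshold=0.01):
    """First depth (millions of reads) at which the relative increase in
    detected regions per additional million reads falls below `threshold`."""
    for i in range(1, len(depths_millions)):
        d_prev, d_curr = depths_millions[i - 1], depths_millions[i]
        n_prev, n_curr = n_regions[i - 1], n_regions[i]
        rel_increase_per_million = (n_curr - n_prev) / n_prev / (d_curr - d_prev)
        if rel_increase_per_million < threshold:
            return d_curr
    return None  # saturation not reached within the sampled depths

depths = [5, 10, 15, 20, 25, 30]                 # millions of mapped reads
regions = [8000, 11000, 12500, 13200, 13350, 13400]
print(sufficient_depth(depths, regions))         # 25 for these illustrative numbers
```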

  4. Software Reliability, Measurement, and Testing. Volume 2. Guidebook for Software Reliability Measurement and Testing

    DTIC Science & Technology

    1992-04-01

    contractor's existing data collection, analysis and corrective action system shall be utilized, with modification only as necessary to meet the...either from test or from analysis of field data. The procedures of MIL-STD-756B assume that the reliability of a...to generate sufficient data to report a statistically valid reliability figure for a class of software. Casual data gathering accumulates data more

  5. A Methodology for Conducting Space Utilization Studies within Department of Defense Medical Facilities

    DTIC Science & Technology

    1992-07-01

    database programs, such as dBase or Microsoft Excel, to yield statistical reports that can profile the health care facility. Ladeen (1989) feels that the...service specific space status report would be beneficial to the specific service(s) under study, it would not provide sufficient data for facility-wide...change in the Master Space Plan. The revised methodology also provides a mechanism and forum for space management education within the facility. The

  6. An Evaluation of a Modified Simulated Annealing Algorithm for Various Formulations

    DTIC Science & Technology

    1990-08-01

    trials of the k-th Markov chain, is sufficiently close to q(c_k), the stationary distribution at c_k: |a(L_k, c_k) - q(c_k)| < epsilon. Requiring the final...

  7. What are the Real Risks of Knowing and Not Knowing - Leading Knowledge on Cyber

    DTIC Science & Technology

    2014-06-01

    Kullback, S., & Leibler, R. A. (1951). On information and sufficiency. The Annals of Mathematical Statistics, 22(1), pp. 79-86. [73] Schreiber T. (2000...'The active gathering and capture of information (and data) for testing (abducting, inducting and deducting) through social exchange' [4, 15-20...Further research by the first author led to considerations of Fitness and Finessing: On Fitness: 'As a function of a system's ability to test its

  8. The annoyance caused by noise around airports

    NASA Technical Reports Server (NTRS)

    JOSSE

    1980-01-01

    A comprehensive study of noise around selected airports in France was performed. By use of questionnaires, the degree of annoyance caused by aircraft noise was determined. Three approaches used in the study were: (1) analytical study on the influence of noise on sleep; (2) sociological study on the satisfaction of occupants of buildings which conform to laws which are supposed to guarantee sufficient comfort; and (3) statistical study of correlations between external noises and psychological and pathological disturbances in residences.

  9. Temporal Variability and Statistics of the Strehl Ratio in Adaptive-Optics Images

    DTIC Science & Technology

    2010-01-01

    with the appropriate models and the residuals were extracted. This was done using the ARIMA modelling (Box & Jenkins 1970). ARIMA stands for...It was used here for the opposite goal – to obtain the values of the i.i.d. "noise" and test its distribution. Mixed ARIMA models of order 2 were...often sufficient to ensure non-significant autocorrelation of the residuals. Table 2 lists the stationary sequences with their respective ARIMA models

  10. Science Fairs for Science Literacy

    NASA Astrophysics Data System (ADS)

    Mackey, Katherine; Culbertson, Timothy

    2014-03-01

    Scientific discovery, technological revolutions, and complex global challenges are commonplace in the modern era. People are bombarded with news about climate change, pandemics, and genetically modified organisms, and scientific literacy has never been more important than in the present day. Yet only 29% of American adults have sufficient understanding to be able to read science stories reported in the popular press [Miller, 2010], and American students consistently rank below other nations in math and science [National Center for Education Statistics, 2012].

  11. Metabolic profiling of human lung cancer blood plasma using 1H NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Kokova, Daria; Dementeva, Natalia; Kotelnikov, Oleg; Ponomaryova, Anastasia; Cherdyntseva, Nadezhda; Kzhyshkowska, Juliya

    2017-11-01

    Lung cancer (both small cell and non-small cell) is the second most common cancer in both men and women. The article presents results of evaluating the plasma metabolic profiles of 100 lung cancer patients and 100 controls to investigate significant metabolites using a 400 MHz 1H NMR spectrometer. The results of multivariate statistical analysis show that a medium-field NMR spectrometer can obtain data that are already sufficient for clinical metabolomics.

  12. Factor structure and dimensionality of the two depression scales in STAR*D using level 1 datasets.

    PubMed

    Bech, P; Fava, M; Trivedi, M H; Wisniewski, S R; Rush, A J

    2011-08-01

    The factor structure and dimensionality of the HAM-D(17) and the IDS-C(30) are as yet uncertain, because psychometric analyses of these scales have been performed without a clear separation between factor structure profile and dimensionality (total scores being a sufficient statistic). The first treatment step (Level 1) in the STAR*D study provided a dataset of 4041 outpatients with DSM-IV nonpsychotic major depression. The HAM-D(17) and IDS-C(30) were evaluated by principal component analysis (PCA) without rotation. Mokken analysis tested the unidimensionality of the IDS-C(6), which corresponds to the unidimensional HAM-D(6). For both the HAM-D(17) and IDS-C(30), PCA identified a bi-directional factor contrasting the depressive symptoms versus the neurovegetative symptoms. The HAM-D(6) and the corresponding IDS-C(6) symptoms all emerged in the depression factor. Both the HAM-D(6) and IDS-C(6) were found to be unidimensional scales, i.e., their total scores are each a sufficient statistic for the measurement of depressive states. STAR*D used only one medication in Level 1. The unidimensional HAM-D(6) and IDS-C(6) should be used when evaluating the pure clinical effect of antidepressive treatment, whereas the multidimensional HAM-D(17) and IDS-C(30) should be considered when selecting antidepressant treatment. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Report on the search for atmospheric holes using airs image data

    NASA Technical Reports Server (NTRS)

    Reinleitner, Lee A.

    1991-01-01

    Frank et al (1986) presented a very controversial hypothesis which states that the Earth is being bombarded by water-vapor clouds resulting from the disruption and vaporization of small comets. This hypothesis was based on single-pixel intensity decreases in the images of the earth's dayglow emissions at vacuum-ultraviolet (VUV) wavelengths using the DE-1 imager. These dark spots, or atmospheric holes, are hypothesized to be the result of VUV absorption by a water-vapor cloud between the imager and the dayglow-emitting region. Examined here is the VUV data set from the Auroral Ionospheric Remote Sensor (AIRS) instrument that was flown on the Polar BEAR satellite. AIRS was uniquely situated to test this hypothesis. Due to the altitude of the sensor, the holes should show multi-pixel intensity decreases in a scan line. A statistical estimate indicated that sufficient 130.4-nm data from AIRS existed to detect eight to nine such holes, but none was detected. The probability of this occurring is less than 1.0 x 10(exp -4). A statistical estimate indicated that sufficient 135.6-nm data from AIRS existed to detect approx. 2 holes, and two ambiguous cases are shown. In spite of the two ambiguous cases, the 135.6-nm data did not show clear support for the small-comet hypothesis. The 130.4-nm data clearly do not support the small-comet hypothesis.

  14. An asymptotic theory for cross-correlation between auto-correlated sequences and its application on neuroimaging data.

    PubMed

    Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng

    2018-04-20

    Functional connectivity is among the most important tools to study the brain. The correlation coefficient between time series of different brain areas is the most popular method to quantify functional connectivity. In practical use, the correlation coefficient assumes the data to be temporally independent. However, the time series data of the brain can manifest significant temporal auto-correlation. A widely applicable method is proposed here for correcting for temporal auto-correlation. We considered two types of time series models: (1) the auto-regressive moving-average model and (2) the nonlinear dynamical system model with noisy fluctuations, and derived their respective asymptotic distributions of the correlation coefficient. These two types of models are most commonly used in neuroscience studies. We show that the respective asymptotic distributions share a unified expression. We verified the validity of our method and showed that it exhibits sufficient statistical power for detecting true correlations in numerical experiments. Employing our method on a real dataset yields more robust functional networks and higher classification accuracy than conventional methods. Our method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlation in numerical experiments, where existing methods measuring association (linear and nonlinear) fail. In this work, we proposed a widely applicable approach for correcting the effect of temporal auto-correlation on functional connectivity. Empirical results favor the use of our method in functional network analysis. Copyright © 2018. Published by Elsevier B.V.
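
    The general idea of correcting a correlation test for temporal auto-correlation can be sketched as below; this uses a Bartlett-type variance-inflation correction as a stand-in and is not the paper's exact derivation.

```python
# Sketch (not the paper's derivation): under the null of no cross-correlation,
# the variance of the sample correlation between two autocorrelated series is
# inflated roughly by sum_k rho_x(k)*rho_y(k) (a Bartlett-type correction), so
# the test statistic is rescaled accordingly.
import numpy as np
from scipy import stats

def autocorr(x, max_lag):
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom for k in range(max_lag + 1)])

def corrected_correlation_test(x, y, max_lag=20):
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    rx, ry = autocorr(x, max_lag), autocorr(y, max_lag)
    # Variance inflation factor: lag 0 contributes 1, other lags twice (symmetry).
    vif = 1 + 2 * np.sum(rx[1:] * ry[1:])
    z = r * np.sqrt(n / vif)               # approximately standard normal under H0
    p = 2 * stats.norm.sf(abs(z))
    return r, z, p

rng = np.random.default_rng(3)

def ar1(phi, n):
    """Generate an AR(1) series; two independent such series make naive tests over-reject."""
    e = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

x, y = ar1(0.8, 500), ar1(0.8, 500)
print(corrected_correlation_test(x, y))
```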

  15. Issues in the classification of disease instances with ontologies.

    PubMed

    Burgun, Anita; Bodenreider, Olivier; Jacquelinet, Christian

    2005-01-01

    Ontologies define classes of entities and their interrelations. They are used to organize data according to a theory of the domain. Towards that end, ontologies provide class definitions (i.e., the necessary and sufficient conditions for defining class membership). In medical ontologies, it is often difficult to establish such definitions for diseases. We use three examples (anemia, leukemia and schizophrenia) to illustrate the limitations of ontologies as classification resources. We show that eligibility criteria are often more useful than the Aristotelian definitions traditionally used in ontologies. Examples of eligibility criteria for diseases include complex predicates such as ' x is an instance of the class C when at least n criteria among m are verified' and 'symptoms must last at least one month if not treated, but less than one month, if effectively treated'. References to normality and abnormality are often found in disease definitions, but the operational definition of these references (i.e., the statistical and contextual information necessary to define them) is rarely provided. We conclude that knowledge bases that include probabilistic and statistical knowledge as well as rule-based criteria are more useful than Aristotelian definitions for representing the predicates defined by necessary and sufficient conditions. Rich knowledge bases are needed to clarify the relations between individuals and classes in various studies and applications. However, as ontologies represent relations among classes, they can play a supporting role in disease classification services built primarily on knowledge bases.

  16. Statistical mechanics and thermodynamic limit of self-gravitating fermions in D dimensions.

    PubMed

    Chavanis, Pierre-Henri

    2004-06-01

    We discuss the statistical mechanics of a system of self-gravitating fermions in a space of dimension D. We plot the caloric curves of the self-gravitating Fermi gas giving the temperature as a function of energy and investigate the nature of phase transitions as a function of the dimension of space. We consider stable states (global entropy maxima) as well as metastable states (local entropy maxima). We show that for D ≥ 4, there exists a critical temperature (for sufficiently large systems) and a critical energy below which the system cannot be found in statistical equilibrium. Therefore, for D ≥ 4, quantum mechanics cannot stabilize matter against gravitational collapse. This is similar to a result found by Ehrenfest (1917) at the atomic level for Coulomb forces. This makes the dimension D=3 of our Universe very particular with possible implications regarding the anthropic principle. Our study joins a long tradition of scientific and philosophical papers that examined how the dimension of space affects the laws of physics.

  17. Reversibility in Quantum Models of Stochastic Processes

    NASA Astrophysics Data System (ADS)

    Gier, David; Crutchfield, James; Mahoney, John; James, Ryan

    Natural phenomena such as time series of neural firing, orientation of layers in crystal stacking and successive measurements in spin-systems are inherently probabilistic. The provably minimal classical models of such stochastic processes are ɛ-machines, which consist of internal states, transition probabilities between states and output values. The topological properties of the ɛ-machine for a given process characterize the structure, memory and patterns of that process. However, ɛ-machines are often not ideal because their statistical complexity (Cμ) is demonstrably greater than the excess entropy (E) of the processes they represent. Quantum models (q-machines) of the same processes can do better in that their statistical complexity (Cq) obeys the relation Cμ ≥ Cq ≥ E. q-machines can be constructed to consider longer lengths of strings, resulting in greater compression. With code-words of sufficiently long length, the statistical complexity becomes time-symmetric - a feature apparently novel to this quantum representation. This result has ramifications for compression of classical information in quantum computing and quantum communication technology.

  18. A methodology using in-chair movements as an objective measure of discomfort for the purpose of statistically distinguishing between similar seat surfaces.

    PubMed

    Cascioli, Vincenzo; Liu, Zhuofu; Heusch, Andrew; McCarthy, Peter W

    2016-05-01

    This study presents a method for objectively measuring in-chair movement (ICM) that shows correlation with subjective ratings of comfort and discomfort. Employing a cross-over controlled, single blind design, healthy young subjects (n = 21) sat for 18 min on each of the following surfaces: contoured foam, straight foam and wood. Force sensitive resistors attached to the sitting interface measured the relative movements of the subjects during sitting. The purpose of this study was to determine whether ICM could statistically distinguish between each seat material, including two with subtle design differences. In addition, this study investigated methodological considerations, in particular appropriate threshold selection and sitting duration, when analysing objective movement data. ICM appears to be able to statistically distinguish between similar foam surfaces, as long as appropriate ICM thresholds and sufficient sitting durations are present. A relationship between greater ICM and increased discomfort, and lesser ICM and increased comfort was also found. Copyright © 2016. Published by Elsevier Ltd.
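
    One plausible way to turn force-sensor traces into an ICM count is sketched below; the threshold, sampling rate, and event-merging rule are illustrative assumptions rather than the study's calibration.

```python
# Sketch of quantifying in-chair movement (ICM) from force-sensor traces:
# count excursions of the total-force change that exceed a threshold during
# the sitting period. Threshold, sampling rate and data are assumptions.
import numpy as np

def count_icm(force_channels, threshold, fs_hz, min_separation_s=1.0):
    """Count movement events: samples where the summed absolute change across
    all sensors exceeds `threshold`, merging events closer than the minimum
    separation so one movement is not counted repeatedly."""
    total = force_channels.sum(axis=1)              # combine force-sensitive resistors
    delta = np.abs(np.diff(total))
    above = np.flatnonzero(delta > threshold)
    if above.size == 0:
        return 0
    gaps = np.diff(above) > min_separation_s * fs_hz
    return 1 + int(gaps.sum())

rng = np.random.default_rng(11)
fs = 10                                             # samples per second (assumed)
signal = rng.normal(0, 0.5, size=(18 * 60 * fs, 4)) # 18 min of sitting, 4 sensors (toy data)
print(count_icm(signal, threshold=6.0, fs_hz=fs))
```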

  19. Statistical parity-time-symmetric lasing in an optical fibre network.

    PubMed

    Jahromi, Ali K; Hassan, Absar U; Christodoulides, Demetrios N; Abouraddy, Ayman F

    2017-11-07

    Parity-time (PT)-symmetry in optics is a condition whereby the real and imaginary parts of the refractive index across a photonic structure are deliberately balanced. This balance can lead to interesting optical phenomena, such as unidirectional invisibility, loss-induced lasing, single-mode lasing from multimode resonators, and non-reciprocal effects in conjunction with nonlinearities. Because PT-symmetry has been thought of as fragile, experimental realisations to date have been usually restricted to on-chip micro-devices. Here, we demonstrate that certain features of PT-symmetry are sufficiently robust to survive the statistical fluctuations associated with a macroscopic optical cavity. We examine the lasing dynamics in optical fibre-based coupled cavities more than a kilometre in length with balanced gain and loss. Although fluctuations can detune the cavity by more than the free spectral range, the behaviour of the lasing threshold and the laser power is that expected from a PT-stable system. Furthermore, we observe a statistical symmetry breaking upon varying the cavity loss.

  20. Statistical Analysis of NAS Parallel Benchmarks and LINPACK Results

    NASA Technical Reports Server (NTRS)

    Meuer, Hans-Werner; Simon, Horst D.; Strohmeier, Erich; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    In the last three years extensive performance data have been reported for parallel machines both based on the NAS Parallel Benchmarks, and on LINPACK. In this study we have used the reported benchmark results and performed a number of statistical experiments using factor, cluster, and regression analyses. In addition to the performance results of LINPACK and the eight NAS parallel benchmarks, we have also included peak performance of the machine, and the LINPACK n and n_1/2 values. Some of the results and observations can be summarized as follows: 1) All benchmarks are strongly correlated with peak performance. 2) LINPACK and EP each have a unique signature. 3) The remaining NPB can be grouped into three groups as follows: (CG and IS), (LU and SP), and (MG, FT, and BT). Hence three (or four with EP) benchmarks are sufficient to characterize the overall NPB performance. Our poster presentation will follow a standard poster format, and will present the data of our statistical analysis in detail.

  1. Predicting driving performance in older adults: we are not there yet!

    PubMed

    Bédard, Michel; Weaver, Bruce; Darzins, Peteris; Porter, Michelle M

    2008-08-01

    We set up this study to determine the predictive value of approaches for which a statistical association with driving performance has been documented. We determined the statistical association (magnitude of association and probability of occurrence by chance alone) between four different predictors (the Mini-Mental State Examination, Trails A test, Useful Field of View [UFOV], and a composite measure of past driving incidents) and driving performance. We then explored the predictive value of these measures with receiver operating characteristic (ROC) curves and various cutoff values. We identified associations between the predictors and driving performance well beyond the play of chance (p < .01). Nonetheless, the predictors had limited predictive value with areas under the curve ranging from .51 to .82. Statistical associations are not sufficient to infer adequate predictive value, especially when crucial decisions such as whether one can continue driving are at stake. The predictors we examined have limited predictive value if used as stand-alone screening tests.
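
    The contrast between statistical association and predictive value can be illustrated with a toy ROC analysis (invented data; the UFOV-like predictor values and the pass/fail outcome are simulated, not taken from the study).

```python
# Sketch: a predictor can be strongly associated with the outcome (small
# p-value) yet show only modest individual-level discrimination (AUC well
# below 1). Data below are simulated for illustration.
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(7)
n = 200
failed_road_test = rng.integers(0, 2, size=n)                     # outcome (0/1)
ufov = 400 + 150 * failed_road_test + rng.normal(0, 200, size=n)  # weakly related predictor

# Association: significant group difference...
t, p = stats.ttest_ind(ufov[failed_road_test == 1], ufov[failed_road_test == 0])
# ...yet only moderate discrimination at the individual level.
auc = roc_auc_score(failed_road_test, ufov)
fpr, tpr, thresholds = roc_curve(failed_road_test, ufov)
print(f"t-test p = {p:.3g}, AUC = {auc:.2f}")
```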

  2. Modeling the subfilter scalar variance for large eddy simulation in forced isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Cheminet, Adam; Blanquart, Guillaume

    2011-11-01

    Static and dynamic models for the subfilter scalar variance in homogeneous isotropic turbulence are investigated using direct numerical simulations (DNS) of a linearly forced passive scalar field. First, we introduce a new scalar forcing technique conditioned only on the scalar field which allows the fluctuating scalar field to reach a statistically stationary state. Statistical properties, including 2nd and 3rd statistical moments, spectra, and probability density functions of the scalar field have been analyzed. Using this technique, we performed constant-density and variable-density DNS of scalar mixing in isotropic turbulence. The results are used in an a priori study of scalar variance models. Emphasis is placed on further studying the dynamic model introduced by G. Balarac, H. Pitsch and V. Raman [Phys. Fluids 20, (2008)]. Scalar variance models based on Bedford and Yeo's expansion are accurate for small filter width but errors arise in the inertial subrange. Results suggest that a constant coefficient computed from an assumed Kolmogorov spectrum is often sufficient to predict the subfilter scalar variance.

  3. Statistical description of non-Gaussian samples in the F2 layer of the ionosphere during heliogeophysical disturbances

    NASA Astrophysics Data System (ADS)

    Sergeenko, N. P.

    2017-11-01

    An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is addressed in this paper. Time series of the F2-layer critical frequency, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances: at sufficiently small probability levels, there are arbitrarily large deviations from the normal-process model. We therefore attempt to describe the statistical samples {δfoF2} with a Poisson-based model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of a deterministic and a random process. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic excessive-asymmetric probability density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to Kolmogorov's criterion, the probabilities of agreement between the a posteriori distributions and the theoretical ones are P ≈ 0.7-0.9. The analysis supports the applicability of a Poisson-random-process model for the statistical description of the variations {δfoF2} and for probabilistic estimates during heliogeophysical disturbances.
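
    The distribution-comparison step via Kolmogorov's criterion can be sketched as follows (the paper's Poisson-based model density is not reproduced; a fitted skewed distribution stands in for it, and the sample is simulated).

```python
# Sketch of comparing an empirical sample of deviations against a candidate
# (non-Gaussian, asymmetric) model with Kolmogorov's test. The Gumbel law is
# only a stand-in for the paper's Poisson-based model density.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
delta_fof2 = rng.gumbel(loc=0.0, scale=0.4, size=500)   # stand-in for {delta foF2}

params = stats.gumbel_r.fit(delta_fof2)
ks_stat, p_value = stats.kstest(delta_fof2, "gumbel_r", args=params)
print(f"KS statistic = {ks_stat:.3f}, P = {p_value:.2f}")

# For comparison: agreement with a Gaussian fitted to the same data.
print(stats.kstest(delta_fof2, "norm", args=stats.norm.fit(delta_fof2)))
```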

  4. High Agreement and High Prevalence: The Paradox of Cohen's Kappa.

    PubMed

    Zec, Slavica; Soriani, Nicola; Comoretto, Rosanna; Baldi, Ileana

    2017-01-01

    Cohen's Kappa is the most used agreement statistic in literature. However, under certain conditions, it is affected by a paradox which returns biased estimates of the statistic itself. The aim of the study is to provide sufficient information which allows the reader to make an informed choice of the correct agreement measure, by underlining some optimal properties of Gwet's AC1 in comparison to Cohen's Kappa, using a real data example. During the process of literature review, we have asked a panel of three evaluators to come up with a judgment on the quality of 57 randomized controlled trials assigning a score to each trial using the Jadad scale. The quality was evaluated according to the following dimensions: adopted design, randomization unit, type of primary endpoint. With respect to each of the above described features, the agreement between the three evaluators has been calculated using Cohen's Kappa statistic and Gwet's AC1 statistic and, finally, the values have been compared with the observed agreement. The values of the Cohen's Kappa statistic would lead to believe that the agreement levels for the variables Unit, Design and Primary Endpoints are totally unsatisfactory. The AC1 statistic, on the contrary, shows plausible values which are in line with the respective values of the observed concordance. We conclude that it would always be appropriate to adopt the AC1 statistic, thus bypassing any risk of incurring the paradox and drawing wrong conclusions about the results of agreement analysis.
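
    For two raters, both coefficients reduce to short formulas. The sketch below (formulas as commonly stated for Cohen's Kappa and Gwet's AC1; the ratings are invented, not the study's data) reproduces the paradox: near-perfect observed agreement with a highly prevalent category yields a low Kappa but a high AC1.

```python
# Sketch of the two agreement coefficients for a pair of raters on categorical
# ratings. The example illustrates the high-prevalence paradox.
import numpy as np

def kappa_and_ac1(r1, r2, categories):
    r1, r2 = np.asarray(r1), np.asarray(r2)
    k = len(categories)
    po = np.mean(r1 == r2)                                  # observed agreement
    p1 = np.array([np.mean(r1 == c) for c in categories])   # rater 1 marginals
    p2 = np.array([np.mean(r2 == c) for c in categories])   # rater 2 marginals
    pe_kappa = np.sum(p1 * p2)                               # chance agreement (Kappa)
    pi = (p1 + p2) / 2
    pe_ac1 = np.sum(pi * (1 - pi)) / (k - 1)                 # chance agreement (AC1)
    kappa = (po - pe_kappa) / (1 - pe_kappa)
    ac1 = (po - pe_ac1) / (1 - pe_ac1)
    return kappa, ac1

# 57 trials, almost all judged "parallel" design by both evaluators (invented).
r1 = ["parallel"] * 54 + ["crossover"] * 3
r2 = ["parallel"] * 53 + ["crossover"] * 1 + ["parallel"] * 2 + ["crossover"] * 1
print(kappa_and_ac1(r1, r2, ["parallel", "crossover"]))   # Kappa ~0.37, AC1 ~0.94
```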

  5. Rock Statistics at the Mars Pathfinder Landing Site, Roughness and Roving on Mars

    NASA Technical Reports Server (NTRS)

    Haldemann, A. F. C.; Bridges, N. T.; Anderson, R. C.; Golombek, M. P.

    1999-01-01

    Several rock counts have been carried out at the Mars Pathfinder landing site, producing consistent statistics of rock coverage and size-frequency distributions. These rock statistics provide a primary element of "ground truth" for anchoring the remote sensing information used to pick the Pathfinder, and future, landing sites. The observed rock population statistics should also be consistent with the emplacement and alteration processes postulated to govern the landing site landscape. The rock population databases can, however, be used in ways that go beyond the calculation of cumulative number and cumulative area distributions versus rock diameter and height. Since the spatial parameters measured to characterize each rock are determined with stereo image pairs, the rock database serves as a subset of the full landing site digital terrain model (DTM). Insofar as a rock count can be carried out in a speedier, albeit coarser, manner than the full DTM analysis, rock counting offers several operational and scientific products in the near term. Quantitative rock mapping adds further information to the geomorphic study of the landing site, and can also be used for rover traverse planning. Statistical analysis of surface roughness using the rock-count proxy DTM is sufficiently accurate, when checked against the full DTM, to be compared with radar remote sensing roughness measures and with rover traverse profiles.

  6. Noise exposure-response relationships established from repeated binary observations: Modeling approaches and applications.

    PubMed

    Schäffer, Beat; Pieren, Reto; Mendolia, Franco; Basner, Mathias; Brink, Mark

    2017-05-01

    Noise exposure-response relationships are used to estimate the effects of noise on individuals or a population. Such relationships may be derived from independent or repeated binary observations, and modeled by different statistical methods. Depending on the method by which they were established, their application in population risk assessment or in the estimation of individual responses may yield different results, i.e., predict "weaker" or "stronger" effects. In the present body of literature on noise effect studies, however, the underlying statistical methodology used to establish exposure-response relationships has not always received sufficient attention. This paper gives an overview of two statistical approaches (subject-specific and population-averaged logistic regression analysis) for establishing noise exposure-response relationships from repeated binary observations, and of their appropriate applications. The considerations are illustrated with data from three noise effect studies, also estimating the magnitude of the differences in results when applying exposure-response relationships derived from the two statistical approaches. Depending on the underlying data set and the probability range of the binary variable it covers, the two approaches yield similar to very different results. The adequate choice of a specific statistical approach and its application in subsequent studies, both depending on the research question, are therefore crucial.
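
    One well-known way to see why the two approaches differ is the attenuation of subject-specific (SS) logit coefficients when they are interpreted as population-averaged (PA) effects in a random-intercept logistic model. The approximation below is often attributed to Zeger, Liang, and Albert (1988) and is quoted here as background, not as part of this paper's methodology.

    ```latex
    % Approximate relation between population-averaged and subject-specific logit
    % coefficients for a logistic model with random-intercept variance \sigma_b^2:
    \[
    \beta_{\mathrm{PA}} \;\approx\; \bigl(c^{2}\sigma_b^{2}+1\bigr)^{-1/2}\,\beta_{\mathrm{SS}},
    \qquad c=\frac{16\sqrt{3}}{15\pi}\approx 0.588 .
    \]
    ```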

  7. A practical approach for the scale-up of roller compaction process.

    PubMed

    Shi, Weixian; Sprockel, Omar L

    2016-09-01

    An alternative approach for the scale-up of ribbon formation during roller compaction was investigated, which required only one batch at the commercial scale to set the operational conditions. The scale-up of ribbon formation was based on a probability method, which was sufficient to describe the mechanism of ribbon formation at both scales. In this method, a statistical relationship between roller compaction parameters and ribbon attributes (thickness and density) was first defined with a DoE using a pilot Alexanderwerk WP120 roller compactor. While the milling speed was included in the design, it had no practical effect on granule properties within the study range despite its statistical significance. The statistical relationship was then adapted to a commercial Alexanderwerk WP200 roller compactor with one experimental run, which served as a calibration of the statistical model parameters. The proposed transfer method was then confirmed by conducting a mapping study on the Alexanderwerk WP200 using a factorial DoE, which showed a match between the predictions and the verification experiments. The study demonstrates the applicability of the roller compaction transfer method using the statistical model from the development scale calibrated with one experimental point at the commercial scale. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. EvolQG - An R package for evolutionary quantitative genetics

    PubMed Central

    Melo, Diogo; Garcia, Guilherme; Hubbe, Alex; Assis, Ana Paula; Marroig, Gabriel

    2016-01-01

    We present an open source package for performing evolutionary quantitative genetics analyses in the R environment for statistical computing. Evolutionary theory shows that evolution depends critically on the available variation in a given population. When dealing with many quantitative traits this variation is expressed in the form of a covariance matrix, particularly the additive genetic covariance matrix or sometimes the phenotypic matrix, when the genetic matrix is unavailable and there is evidence the phenotypic matrix is sufficiently similar to the genetic matrix. Given this mathematical representation of available variation, the EvolQG package provides functions for calculation of relevant evolutionary statistics; estimation of sampling error; corrections for this error; matrix comparison via correlations, distances and matrix decomposition; analysis of modularity patterns; and functions for testing evolutionary hypotheses on taxa diversification. PMID:27785352

  9. Velocity statistics of the Nagel-Schreckenberg model

    NASA Astrophysics Data System (ADS)

    Bain, Nicolas; Emig, Thorsten; Ulm, Franz-Josef; Schreckenberg, Michael

    2016-02-01

    The statistics of velocities in the cellular automaton model of Nagel and Schreckenberg for traffic are studied. From numerical simulations, we obtain the probability distribution function (PDF) for vehicle velocities and the velocity-velocity (vv) covariance function. We identify the probability to find a standing vehicle as a potential order parameter that nicely signals the transition between free and congested flow for a sufficiently large number of velocity states. Our results for the vv covariance function resemble features of a second-order phase transition. We develop a 3-body approximation that allows us to relate the PDFs for velocities and headways. Using this relation, an approximation to the velocity PDF is obtained from the headway PDF observed in simulations. We find a remarkable agreement between this approximation and the velocity PDF obtained from simulations.

  10. Velocity statistics of the Nagel-Schreckenberg model.

    PubMed

    Bain, Nicolas; Emig, Thorsten; Ulm, Franz-Josef; Schreckenberg, Michael

    2016-02-01

    The statistics of velocities in the cellular automaton model of Nagel and Schreckenberg for traffic are studied. From numerical simulations, we obtain the probability distribution function (PDF) for vehicle velocities and the velocity-velocity (vv) covariance function. We identify the probability to find a standing vehicle as a potential order parameter that nicely signals the transition between free and congested flow for a sufficiently large number of velocity states. Our results for the vv covariance function resemble features of a second-order phase transition. We develop a 3-body approximation that allows us to relate the PDFs for velocities and headways. Using this relation, an approximation to the velocity PDF is obtained from the headway PDF observed in simulations. We find a remarkable agreement between this approximation and the velocity PDF obtained from simulations.
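
    For readers who want to reproduce the basic velocity statistics, the sketch below implements the standard Nagel-Schreckenberg update rules and estimates the velocity PDF and the probability of a standing vehicle. The parameters (vmax, braking probability, density, ring length) are illustrative choices, not the values used in the paper.

    ```python
    import numpy as np

    # Minimal Nagel-Schreckenberg cellular automaton (illustrative sketch, not the authors' code).
    def nasch(length=1000, density=0.15, vmax=5, p=0.25, steps=2000, seed=0):
        rng = np.random.default_rng(seed)
        n_cars = int(density * length)
        pos = np.sort(rng.choice(length, n_cars, replace=False))
        vel = np.zeros(n_cars, dtype=int)
        velocities = []
        for t in range(steps):
            gaps = (np.roll(pos, -1) - pos - 1) % length      # headway to the car ahead
            vel = np.minimum(vel + 1, vmax)                   # 1. accelerate
            vel = np.minimum(vel, gaps)                       # 2. brake to avoid collision
            vel = np.where(rng.random(n_cars) < p,            # 3. random slowdown
                           np.maximum(vel - 1, 0), vel)
            pos = (pos + vel) % length                        # 4. move (periodic boundary)
            if t > steps // 2:                                # discard the transient
                velocities.append(vel.copy())
        return np.concatenate(velocities)

    v = nasch()
    pdf = np.bincount(v, minlength=6) / v.size
    print("velocity PDF:", np.round(pdf, 3))
    print("P(standing vehicle):", round(pdf[0], 3))           # candidate order parameter
    ```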

  11. Design and Feasibility Assessment of a Retrospective Epidemiological Study of Coal-Fired Power Plant Emissions in the Pittsburgh Pennsylvania Region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard A. Bilonick; Daniel Connell; Evelyn Talbott

    2006-12-20

    Eighty-nine (89) percent of the electricity supplied in the 35-county Pittsburgh region (comprising parts of the states of Pennsylvania, Ohio, West Virginia, and Maryland) is generated by coal-fired power plants making this an ideal region in which to study the effects of the fine airborne particulates designated as PM{sub 2.5} emitted by the combustion of coal. This report demonstrates that during the period from 1999-2006 (1) sufficient and extensive exposure data, in particular samples of speciated PM{sub 2.5} components from 1999 to 2003, and including gaseous co-pollutants and weather have been collected, (2) sufficient and extensive mortality, morbidity, and related health outcomes data are readily available, and (3) the relationship between health effects and fine particulates can most likely be satisfactorily characterized using a combination of sophisticated statistical methodologies including latent variable modeling (LVM) and generalized linear autoregressive moving average (GLARMA) time series analysis. This report provides detailed information on the available exposure data and the available health outcomes data for the construction of a comprehensive database suitable for analysis, illustrates the application of various statistical methods to characterize the relationship between health effects and exposure, and provides a road map for conducting the proposed study. In addition, a detailed work plan for conducting the study is provided and includes a list of tasks and an estimated budget. A substantial portion of the total study cost is attributed to the cost of analyzing a large number of archived PM{sub 2.5} filters. Analysis of a representative sample of the filters supports the reliability of this invaluable but as-yet untapped resource. These filters hold the key to having sufficient data on the components of PM{sub 2.5} but have a limited shelf life. If the archived filters are not analyzed promptly the important and costly information they contain will be lost.

  12. Statistical aspects of modeling the labor curve.

    PubMed

    Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M

    2015-06-01

    In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that asking data to fit a preconceived model or letting a sufficiently flexible model fit observed data is the main difference in principles of statistical modeling between the original Friedman curve and our average labor curve. An evidence-based approach to construct a labor curve and establish normal values should allow the statistical model to fit observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor pattern and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and contemporary obstetric population into account. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Statistical variances of diffusional properties from ab initio molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    He, Xingfeng; Zhu, Yizhou; Epstein, Alexander; Mo, Yifei

    2018-12-01

    Ab initio molecular dynamics (AIMD) simulation is widely employed in studying diffusion mechanisms and in quantifying diffusional properties of materials. However, AIMD simulations are often limited to a few hundred atoms and a short, sub-nanosecond physical timescale, which leads to models that include only a limited number of diffusion events. As a result, the diffusional properties obtained from AIMD simulations are often plagued by poor statistics. In this paper, we re-examine the process to estimate diffusivity and ionic conductivity from the AIMD simulations and establish the procedure to minimize the fitting errors. In addition, we propose methods for quantifying the statistical variance of the diffusivity and ionic conductivity from the number of diffusion events observed during the AIMD simulation. Since an adequate number of diffusion events must be sampled, AIMD simulations should be sufficiently long and can only be performed on materials with reasonably fast diffusion. We chart the ranges of materials and physical conditions that can be accessible by AIMD simulations in studying diffusional properties. Our work provides the foundation for quantifying the statistical confidence levels of diffusion results from AIMD simulations and for correctly employing this powerful technique.
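
    For reference, the quantities whose statistical variance is at issue are typically computed from the relations below (the Einstein relation for the tracer diffusivity in d dimensions and a Nernst-Einstein-type conversion to ionic conductivity). The final 1/sqrt(N) scaling is only a rough guide to how the relative uncertainty shrinks with the number of observed diffusion events, not the paper's fitted error model.

    ```latex
    % Standard relations used when extracting transport properties from AIMD trajectories:
    \[
    D \;=\; \lim_{\Delta t \to \infty}
        \frac{\bigl\langle \lvert \mathbf{r}(t+\Delta t)-\mathbf{r}(t)\rvert^{2} \bigr\rangle}{2\,d\,\Delta t},
    \qquad
    \sigma_{\mathrm{ion}} \;=\; \frac{N q^{2}}{V k_{B} T}\, D,
    \qquad
    \frac{\operatorname{sd}(D)}{D} \;\sim\; \frac{1}{\sqrt{N_{\mathrm{events}}}} .
    \]
    ```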

  14. Management of complex dynamical systems

    NASA Astrophysics Data System (ADS)

    MacKay, R. S.

    2018-02-01

    Complex dynamical systems are systems with many interdependent components which evolve in time. One might wish to control their trajectories, but a more practical alternative is to control just their statistical behaviour. In many contexts this would be both sufficient and a more realistic goal, e.g. climate and socio-economic systems. I refer to it as ‘management’ of complex dynamical systems. In this paper, some mathematics for management of complex dynamical systems is developed in the weakly dependent regime, and questions are posed for the strongly dependent regime.

  15. Distinguishing Internet-facing ICS Devices Using PLC Programming Information

    DTIC Science & Technology

    2014-06-19

    [Excerpt includes thesis acknowledgments and a flattened table of scanned ports/services: 21 FTP, 22 SSH, 23 Telnet, 25 SMTP, 143 IMAP, 161 SNMP, 443 HTTPS, 445 SMB, 1900 UPnP, 2323 Telnet, 3306 MySQL, 3389 RDP, 6379 Redis, 7777 Oracle, 8000 Qconn, 8080 ...] Sampling at 250 ms over the course of 10,000 samples provided sufficient data to test for statistically significant changes to ladder logic execution times with a

  16. Towards a Comprehensive Approach to the Analysis of Categorical Data.

    DTIC Science & Technology

    1983-06-01

    have made methodological contributions to the topic include R.L. Anderson, K. Hinkelman, O. Kempthorne, and K. Koehler. The next section of this paper ... their sufficient statistics that originated in the 1930s with Fisher (e.g., see Dempster, 1971, and the discussion in Andersen, 1980). From the ... ones from the original density for which h_1, ..., h_m are MSS's, and h_1', ..., h_m' are the corresponding MSS's in the conditional density (see Andersen, 1980

  17. The Infeasibility of Experimental Quantification of Life-Critical Software Reliability

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Finelli, George B.

    1991-01-01

    This paper affirms that quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultra-reliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multi-version software experiments support this affirmation.

  18. Quantifying the added value of BNP in suspected heart failure in general practice: an individual patient data meta-analysis.

    PubMed

    Kelder, Johannes C; Cowie, Martin R; McDonagh, Theresa A; Hardman, Suzanna M C; Grobbee, Diederick E; Cost, Bernard; Hoes, Arno W

    2011-06-01

    Diagnosing early stages of heart failure with mild symptoms is difficult. B-type natriuretic peptide (BNP) has promising biochemical test characteristics, but its diagnostic yield on top of readily available diagnostic knowledge has not been sufficiently quantified in early stages of heart failure. To quantify the added diagnostic value of BNP for the diagnosis of heart failure in a population relevant to GPs and validate the findings in an independent primary care patient population. Individual patient data meta-analysis followed by external validation. The additional diagnostic yield of BNP above standard clinical information was compared with ECG and chest x-ray results. Derivation was performed on two existing datasets from Hillingdon (n=127) and Rotterdam (n=149), while the UK Natriuretic Peptide Study (n=306) served as the validation dataset. Included were patients with suspected heart failure referred to a rapid-access diagnostic outpatient clinic. Case definition was according to the ESC guideline. Logistic regression was used to assess discrimination (with the c-statistic) and calibration. Of the 276 patients in the derivation set, 30.8% had heart failure. The clinical model (encompassing age, gender, known coronary artery disease, diabetes, orthopnoea, elevated jugular venous pressure, crackles, pitting oedema and S3 gallop) had a c-statistic of 0.79. Adding, respectively, chest x-ray results, ECG results or BNP to the clinical model increased the c-statistic to 0.84, 0.85 and 0.92. Neither ECG nor chest x-ray added significantly to the 'clinical plus BNP' model. All models had adequate calibration. The 'clinical plus BNP' diagnostic model performed well in an independent cohort with comparable inclusion criteria (c-statistic=0.91 and adequate calibration). Using separate cut-off values for 'ruling in' (typically implying referral for echocardiography) and for 'ruling out' heart failure (creating a grey zone) resulted in insufficient proportions of patients with a correct diagnosis. BNP has considerable diagnostic value in addition to signs and symptoms in patients suspected of heart failure in primary care. However, using BNP alone with the currently recommended cut-off levels is not sufficient to make a reliable diagnosis of heart failure.
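
    The "added value" comparison in this record is essentially a comparison of c-statistics (ROC AUCs) between nested logistic models. The sketch below shows that workflow on simulated data; the predictors, effect sizes, and sample size are assumptions for illustration and are unrelated to the study's actual data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_predict

    # Simulated "clinical variables" plus one strong biomarker (log BNP stand-in).
    rng = np.random.default_rng(1)
    n = 400
    clinical = rng.normal(size=(n, 5))                 # stand-ins for age, sex, orthopnoea, ...
    log_bnp = rng.normal(size=n)
    y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * clinical[:, 0] + 2.0 * log_bnp - 1.0))))

    def cstat(X):
        # Cross-validated predicted probabilities, then the c-statistic (ROC AUC).
        p = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                              cv=5, method="predict_proba")[:, 1]
        return roc_auc_score(y, p)

    print("clinical model c-statistic:      ", round(cstat(clinical), 3))
    print("clinical + BNP model c-statistic:",
          round(cstat(np.column_stack([clinical, log_bnp])), 3))
    ```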

  19. Acute, subchronic, and developmental toxicological properties of lubricating oil base stocks.

    PubMed

    Dalbey, Walden E; McKee, Richard H; Goyak, Katy Olsavsky; Biles, Robert W; Murray, Jay; White, Russell

    2014-01-01

    Lubricating oil base stocks (LOBs) are substances used in the manufacture of finished lubricants and greases. They are produced from residue remaining after atmospheric distillation of crude oil that is subsequently fractionated by vacuum distillation and additional refining steps. Initial LOB streams that have been produced by vacuum distillation but not further refined may contain polycyclic aromatic compounds (PACs) and may present carcinogenic hazards. In modern refineries, LOBs are further refined by multistep processes including solvent extraction and/or hydrogen treatment to reduce the levels of PACs and other undesirable constituents. Thus, mildly (insufficiently) refined LOBs are potentially more hazardous than more severely (sufficiently) refined LOBs. This article discusses the evaluation of LOBs using statistical models based on content of PACs; these models indicate that insufficiently refined LOBs (potentially carcinogenic LOBs) can also produce systemic and developmental effects with repeated dermal exposure. Experimental data were also obtained in ten 13-week dermal studies in rats, eight 4-week dermal studies in rabbits, and seven dermal developmental toxicity studies with sufficiently refined LOBs (noncarcinogenic and commonly marketed) in which no observed adverse effect levels for systemic toxicity and developmental toxicity were 1000 to 2000 mg/kg/d with dermal exposures, typically the highest dose tested. Results in both oral and inhalation developmental toxicity studies were similar. This absence of toxicologically relevant findings was consistent with lower PAC content of sufficiently refined LOBs. Based on data on reproductive organs with repeated dosing and parameters in developmental toxicity studies, sufficiently refined LOBs are likely to have little, if any, effect on reproductive parameters.

  20. “Plateau”-related summary statistics are uninformative for comparing working memory models

    PubMed Central

    van den Berg, Ronald; Ma, Wei Ji

    2014-01-01

    Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon. Zhang and Luck (2008) and Anderson, Vogel, and Awh (2011) noticed that as more items need to be remembered, “memory noise” seems to first increase and then reach a “stable plateau.” They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided, at most, 0.15% of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials were required to achieve 99% correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. At realistic numbers of trials, plateau-related summary statistics are completely unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (2011), we found that the evidence in the summary statistics was, at most, 0.12% of the evidence in the raw data and far too weak to warrant any conclusions. These findings call into question claims about working memory that are based on summary statistics. PMID:24719235

  1. Towards Accurate Modelling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-04-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter halos. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the "accurate" regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard ΛCDM + halo model against the clustering of SDSS DR7 galaxies. Specifically, we use the projected correlation function, group multiplicity function and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir halos) matches the clustering of low luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the "standard" halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  2. Statistical testing of association between menstruation and migraine.

    PubMed

    Barra, Mathias; Dahl, Fredrik A; Vetvik, Kjersti G

    2015-02-01

    To repair and refine a previously proposed method for statistical analysis of association between migraine and menstruation. Menstrually related migraine (MRM) affects about 20% of female migraineurs in the general population. The exact pathophysiological link from menstruation to migraine is hypothesized to be through fluctuations in female reproductive hormones, but the exact mechanisms remain unknown. Therefore, the main diagnostic criterion today is concurrency of migraine attacks with menstruation. Methods aiming to exclude spurious associations are needed, so that further research into these mechanisms can be performed on a population with a true association. The statistical method is based on a simple two-parameter null model of MRM (which allows for simulation modeling), and Fisher's exact test (with mid-p correction) applied to standard 2 × 2 contingency tables derived from the patients' headache diaries. Our method is a corrected version of a previously published flawed framework. To the best of our knowledge, no other published methods for establishing a menstruation-migraine association by statistical means exist today. The probabilistic methodology shows good performance when subjected to receiver operating characteristic curve analysis. Quick-reference cutoff values for the clinical setting were tabulated for assessing association given a patient's headache history. In this paper, we correct a proposed method for establishing association between menstruation and migraine by statistical methods. We conclude that the proposed standard of 3-cycle observations prior to setting an MRM diagnosis should be extended with at least one perimenstrual window to obtain sufficient information for statistical processing. © 2014 American Headache Society.
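
    As a concrete illustration of the core computation, the sketch below applies Fisher's exact test with a one-sided mid-p correction to a single 2 × 2 diary table. The counts and the table layout are invented for illustration, not the paper's exact tabulation.

    ```python
    from scipy.stats import hypergeom, fisher_exact

    # One-sided mid-p Fisher test for a 2x2 table of attacks vs. perimenstrual windows.
    #                 attack   no attack
    # perimenstrual      a=9        b=3
    # other days        c=11       d=37
    a, b, c, d = 9, 3, 11, 37
    M, n, N = a + b + c + d, a + b, a + c          # grand total, row-1 total, column-1 total

    p_exact = fisher_exact([[a, b], [c, d]], alternative="greater")[1]   # P(X >= a)
    p_mid = hypergeom.sf(a, M, n, N) + 0.5 * hypergeom.pmf(a, M, n, N)   # P(X > a) + 0.5 P(X = a)

    print(f"one-sided Fisher exact p = {p_exact:.4f}")
    print(f"one-sided mid-p          = {p_mid:.4f}")   # less conservative than the exact p
    ```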

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Xin, E-mail: xinshih86029@gmail.com; Zhao, Xiangmo, E-mail: xinshih86029@gmail.com; Hui, Fei, E-mail: xinshih86029@gmail.com

    Clock synchronization in wireless sensor networks (WSNs) has been studied extensively in recent years, and many protocols have been put forward from the viewpoint of statistical signal processing, which is an effective way to optimize accuracy. However, the accuracy derived from the statistical data can be improved mainly by sufficient packet exchange, which greatly consumes the limited power resources. In this paper, a reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization is proposed to optimize sync accuracy without expending additional sync packets. As a contribution, a linear weighted fusion scheme for multiple clock deviations is constructed with the collaborative sensing of clock timestamps, and the fusion weight is defined by the covariance of sync errors for the different clock deviations. Extensive simulation results show that the proposed approach can achieve better performance in terms of sync overhead and sync accuracy.
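
    A minimal sketch of the fusion idea is given below: several clock-offset estimates are combined with weights inversely proportional to their error variances. This is the simplest diagonal-covariance special case, written for illustration; it is not the authors' exact weighting scheme, and the numbers are invented.

    ```python
    import numpy as np

    def fuse_offsets(offsets, error_vars):
        """Inverse-variance weighted fusion of clock-offset estimates (illustrative)."""
        offsets = np.asarray(offsets, dtype=float)       # candidate clock deviations (seconds)
        error_vars = np.asarray(error_vars, dtype=float) # estimated error variances
        w = 1.0 / error_vars                             # inverse-variance weights
        w /= w.sum()                                     # normalize so the weights sum to 1
        fused = np.dot(w, offsets)                       # fused offset estimate
        fused_var = 1.0 / np.sum(1.0 / error_vars)       # variance of the fused estimate
        return fused, fused_var

    offsets = [12.3e-6, 11.7e-6, 12.9e-6]                # three candidate clock deviations
    error_vars = [4e-12, 1e-12, 9e-12]
    print(fuse_offsets(offsets, error_vars))             # fused offset has lower variance than any input
    ```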

  4. Nowcasting of Low-Visibility Procedure States with Ordered Logistic Regression at Vienna International Airport

    NASA Astrophysics Data System (ADS)

    Kneringer, Philipp; Dietz, Sebastian; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Low-visibility conditions have a large impact on aviation safety and economic efficiency of airports and airlines. To support decision makers, we develop a statistical probabilistic nowcasting tool for the occurrence of capacity-reducing operations related to low visibility. The probabilities of four different low visibility classes are predicted with an ordered logistic regression model based on time series of meteorological point measurements. Potential predictor variables for the statistical models are visibility, humidity, temperature and wind measurements at several measurement sites. A stepwise variable selection method indicates that visibility and humidity measurements are the most important model inputs. The forecasts are tested with a 30 minute forecast interval up to two hours, which is a sufficient time span for tactical planning at Vienna Airport. The ordered logistic regression models outperform persistence and are competitive with human forecasters.
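
    For readers unfamiliar with the model class, the sketch below fits an ordered logistic regression to simulated low-visibility classes with statsmodels' OrderedModel (available in statsmodels 0.12 and later). The predictors, class boundaries, and data are assumptions for illustration, not the Vienna Airport measurements.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel  # statsmodels >= 0.12

    # Simulated data: four ordered low-visibility classes (0 = no LVP, 3 = most restrictive),
    # driven by visibility and relative humidity through a latent logistic variable.
    rng = np.random.default_rng(2)
    n = 2000
    visibility = rng.uniform(100, 5000, n)          # current visibility (m)
    rel_hum = rng.uniform(40, 100, n)               # relative humidity (%)
    latent = -0.002 * visibility + 0.05 * rel_hum + rng.logistic(size=n)
    lvp_class = pd.cut(latent, bins=[-np.inf, -2, 0, 2, np.inf], labels=False)

    X = pd.DataFrame({"visibility": visibility, "rel_hum": rel_hum})
    res = OrderedModel(lvp_class, X, distr="logit").fit(method="bfgs", disp=False)
    print(res.params)                                # slopes plus threshold parameters

    # Probabilities of the four classes for a foggy, humid observation:
    new = pd.DataFrame({"visibility": [300.0], "rel_hum": [98.0]})
    print(res.predict(new))                          # one probability per LVP class
    ```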

  5. VISSR Atmospheric Sounder (VAS) simulation experiment for a severe storm environment

    NASA Technical Reports Server (NTRS)

    Chesters, D.; Uccellini, L. W.; Mostek, A.

    1981-01-01

    Radiance fields were simulated for prethunderstorm environments in Oklahoma to demonstrate three points: (1) significant moisture gradients can be seen directly in images of the VISSIR Atmospheric Sounder (VAS) channels; (2) temperature and moisture profiles can be retrieved from VAS radiances with sufficient accuracy to be useful for mesoscale analysis of a severe storm environment; and (3) the quality of VAS mesoscale soundings improves with conditioning by local weather statistics. The results represent the optimum retrievability of mesoscale information from VAS radiance without the use of ancillary data. The simulations suggest that VAS data will yield the best soundings when a human being classifies the scene, picks relatively clear areas for retrieval, and applies a "local" statistical data base to resolve the ambiguities of satellite observations in favor of the most probable atmospheric structure.

  6. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
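
    The report's estimation strategy, repeated Newton-Raphson iterations on the likelihood equations, can be illustrated with a generic example. The sketch below applies Newton-Raphson to a Poisson log-linear model as a stand-in; the NDMMF's own likelihood and parameters are not reproduced here.

    ```python
    import numpy as np

    def newton_raphson_mle(X, y, tol=1e-10, max_iter=50):
        """Newton-Raphson MLE for a Poisson log-linear model, log E[y] = X b."""
        b = np.zeros(X.shape[1])
        for _ in range(max_iter):
            mu = np.exp(X @ b)
            grad = X.T @ (y - mu)                # score vector
            hess = X.T @ (X * mu[:, None])       # Fisher information (negative Hessian)
            step = np.linalg.solve(hess, grad)   # Newton step
            b += step
            if np.max(np.abs(step)) < tol:       # converged
                break
        return b

    # Simulated check: true coefficients are [0.5, 0.8].
    rng = np.random.default_rng(3)
    X = np.column_stack([np.ones(500), rng.normal(size=500)])
    y = rng.poisson(np.exp(0.5 + 0.8 * X[:, 1]))
    print(newton_raphson_mle(X, y))              # should be close to [0.5, 0.8]
    ```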

  7. The role of ensemble-based statistics in variational assimilation of cloud-affected observations from infrared imagers

    NASA Astrophysics Data System (ADS)

    Hacker, Joshua; Vandenberghe, Francois; Jung, Byoung-Jo; Snyder, Chris

    2017-04-01

    Effective assimilation of cloud-affected radiance observations from space-borne imagers, with the aim of improving cloud analysis and forecasting, has proven to be difficult. Large observation biases, nonlinear observation operators, and non-Gaussian innovation statistics present many challenges. Ensemble-variational data assimilation (EnVar) systems offer the benefits of flow-dependent background error statistics from an ensemble, and the ability of variational minimization to handle nonlinearity. The specific benefits of ensemble statistics, relative to static background errors more commonly used in variational systems, have not been quantified for the problem of assimilating cloudy radiances. A simple experiment framework is constructed with a regional NWP model and an operational variational data assimilation system, to provide a basis for understanding the importance of ensemble statistics in cloudy radiance assimilation. Restricting the observations to those corresponding to clouds in the background forecast leads to innovations that are more Gaussian. The number of large innovations is reduced compared to the more general case of all observations, but not eliminated. The Huber norm is investigated to handle the fat tails of the distributions, and to allow more observations to be assimilated without the need for strict background checks that eliminate them. Comparing assimilation using only ensemble background error statistics with assimilation using only static background error statistics elucidates the importance of the ensemble statistics. Although the cost functions in both experiments converge to similar values after sufficient outer-loop iterations, the resulting cloud water, ice, and snow content are greater in the ensemble-based analysis. The subsequent forecasts from the ensemble-based analysis also retain more condensed water species, indicating that the local environment is more supportive of clouds. In this presentation we provide details that explain the apparent benefit of using ensembles for cloudy radiance assimilation in an EnVar context.
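
    For context, the Huber treatment mentioned above replaces the usual quadratic observation penalty with a function that grows only linearly for large normalized innovations. The form below is the standard componentwise Huber function, written here as background; the transition point c and the exact formulation used in the system described are not specified in this abstract.

    ```latex
    % Componentwise Huber penalty applied to normalized innovations r_i = (y - H(x))_i / \sigma_i,
    % with transition point c (a tuning choice):
    \[
    \rho_c(r_i) \;=\;
    \begin{cases}
    \tfrac{1}{2}\,r_i^{2}, & |r_i| \le c,\\[4pt]
    c\,|r_i| - \tfrac{1}{2}\,c^{2}, & |r_i| > c,
    \end{cases}
    \qquad
    J_o \;=\; \sum_i \rho_c(r_i).
    \]
    ```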

  8. Seven ways to increase power without increasing N.

    PubMed

    Hansen, W B; Collins, L M

    1994-01-01

    Many readers of this monograph may wonder why a chapter on statistical power was included. After all, by now the issue of statistical power is in many respects mundane. Everyone knows that statistical power is a central research consideration, and certainly most National Institute on Drug Abuse grantees or prospective grantees understand the importance of including a power analysis in research proposals. However, there is ample evidence that, in practice, prevention researchers are not paying sufficient attention to statistical power. If they were, the findings observed by Hansen (1992) in a recent review of the prevention literature would not have emerged. Hansen (1992) examined statistical power based on 46 cohorts followed longitudinally, using nonparametric assumptions given the subjects' age at posttest and the numbers of subjects. Results of this analysis indicated that, in order for a study to attain 80-percent power for detecting differences between treatment and control groups, the difference between groups at posttest would need to be at least 8 percent (in the best studies) and as much as 16 percent (in the weakest studies). In order for a study to attain 80-percent power for detecting group differences in pre-post change, 22 of the 46 cohorts would have needed relative pre-post reductions of greater than 100 percent. Thirty-three of the 46 cohorts had less than 50-percent power to detect a 50-percent relative reduction in substance use. These results are consistent with other review findings (e.g., Lipsey 1990) that have shown a similar lack of power in a broad range of research topics. Thus, it seems that, although researchers are aware of the importance of statistical power (particularly of the necessity for calculating it when proposing research), they somehow are failing to end up with adequate power in their completed studies. This chapter argues that the failure of many prevention studies to maintain adequate statistical power is due to an overemphasis on sample size (N) as the only, or even the best, way to increase statistical power. It is easy to see how this overemphasis has come about. Sample size is easy to manipulate, has the advantage of being related to power in a straight-forward way, and usually is under the direct control of the researcher, except for limitations imposed by finances or subject availability. Another option for increasing power is to increase the alpha used for hypothesis-testing but, as very few researchers seriously consider significance levels much larger than the traditional .05, this strategy seldom is used. Of course, sample size is important, and the authors of this chapter are not recommending that researchers cease choosing sample sizes carefully. Rather, they argue that researchers should not confine themselves to increasing N to enhance power. It is important to take additional measures to maintain and improve power over and above making sure the initial sample size is sufficient. The authors recommend two general strategies. One strategy involves attempting to maintain the effective initial sample size so that power is not lost needlessly. The other strategy is to take measures to maximize the third factor that determines statistical power: effect size.
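
    As a quick illustration of the arithmetic behind the review's figures, the sketch below uses statsmodels' power tools to ask how many subjects per group are needed for 80% power at posttest differences of the sizes mentioned above; the baseline prevalence of 30% is an assumed value, not one taken from Hansen (1992).

    ```python
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    # Required N per group for 80% power at alpha = .05, for 8- and 16-point drops
    # from an assumed 30% control-group prevalence (illustrative numbers only).
    analysis = NormalIndPower()
    for p_control, p_treat in [(0.30, 0.22), (0.30, 0.14)]:
        es = abs(proportion_effectsize(p_treat, p_control))       # Cohen's h
        n = analysis.solve_power(effect_size=es, alpha=0.05, power=0.80,
                                 alternative="two-sided")
        print(f"{p_control:.2f} -> {p_treat:.2f}: about {n:.0f} subjects per group")
    ```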

  9. Issues in the Classification of Disease Instances with Ontologies

    PubMed Central

    Burgun, Anita; Bodenreider, Olivier; Jacquelinet, Christian

    2006-01-01

    Ontologies define classes of entities and their interrelations. They are used to organize data according to a theory of the domain. Towards that end, ontologies provide class definitions (i.e., the necessary and sufficient conditions for defining class membership). In medical ontologies, it is often difficult to establish such definitions for diseases. We use three examples (anemia, leukemia and schizophrenia) to illustrate the limitations of ontologies as classification resources. We show that eligibility criteria are often more useful than the Aristotelian definitions traditionally used in ontologies. Examples of eligibility criteria for diseases include complex predicates such as ‘ x is an instance of the class C when at least n criteria among m are verified’ and ‘symptoms must last at least one month if not treated, but less than one month, if effectively treated’. References to normality and abnormality are often found in disease definitions, but the operational definition of these references (i.e., the statistical and contextual information necessary to define them) is rarely provided. We conclude that knowledge bases that include probabilistic and statistical knowledge as well as rule-based criteria are more useful than Aristotelian definitions for representing the predicates defined by necessary and sufficient conditions. Rich knowledge bases are needed to clarify the relations between individuals and classes in various studies and applications. However, as ontologies represent relations among classes, they can play a supporting role in disease classification services built primarily on knowledge bases. PMID:16160339

  10. Dark matter as a trigger for periodic comet impacts.

    PubMed

    Randall, Lisa; Reece, Matthew

    2014-04-25

    Although statistical evidence is not overwhelming, possible support for an approximately 35 × 10^6 yr periodicity in the crater record on Earth could indicate a nonrandom underlying enhancement of meteorite impacts at regular intervals. A proposed explanation in terms of tidal effects on Oort cloud comet perturbations as the Solar System passes through the galactic midplane is hampered by lack of an underlying cause for sufficiently enhanced gravitational effects over a sufficiently short time interval and by the time frame between such possible enhancements. We show that a smooth dark disk in the galactic midplane would address both these issues and create a periodic enhancement of the sort that has potentially been observed. Such a disk is motivated by a novel dark matter component with dissipative cooling that we considered in earlier work. We show how to evaluate the statistical evidence for periodicity by input of appropriate measured priors from the galactic model, justifying or ruling out periodic cratering with more confidence than by evaluating the data without an underlying model. We find that, marginalizing over astrophysical uncertainties, the likelihood ratio for such a model relative to one with a constant cratering rate is 3.0, which moderately favors the dark disk model. Our analysis furthermore yields a posterior distribution that, based on current crater data, singles out a dark matter disk surface density of approximately 10 M⊙/pc^2. The geological record thereby motivates a particular model of dark matter that will be probed in the near future.

  11. Systematic and fully automated identification of protein sequence patterns.

    PubMed

    Hart, R K; Royyuru, A K; Stolovitzky, G; Califano, A

    2000-01-01

    We present an efficient algorithm to systematically and automatically identify patterns in protein sequence families. The procedure is based on the Splash deterministic pattern discovery algorithm and on a framework to assess the statistical significance of patterns. We demonstrate its application to the fully automated discovery of patterns in 974 PROSITE families (the complete subset of PROSITE families which are defined by patterns and contain DR records). Splash generates patterns with better specificity and undiminished sensitivity, or vice versa, in 28% of the families; identical statistics were obtained in 48% of the families, worse statistics in 15%, and mixed behavior in the remaining 9%. In about 75% of the cases, Splash patterns identify sequence sites that overlap more than 50% with the corresponding PROSITE pattern. The procedure is sufficiently rapid to enable its use for daily curation of existing motif and profile databases. Our results also show that the statistical significance of discovered patterns correlates well with their biological significance. The trypsin subfamily of serine proteases is used to illustrate this method's ability to exhaustively discover all motifs in a family that are statistically and biologically significant. Finally, we discuss applications of sequence patterns to multiple sequence alignment and the training of more sensitive score-based motif models, akin to the procedure used by PSI-BLAST. All results are available at http://www.research.ibm.com/spat/.

  12. Application of microarray analysis on computer cluster and cloud platforms.

    PubMed

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.

  13. FNA, core biopsy, or both for the diagnosis of lung carcinoma: Obtaining sufficient tissue for a specific diagnosis and molecular testing.

    PubMed

    Coley, Shana M; Crapanzano, John P; Saqi, Anjali

    2015-05-01

    Increasingly, minimally invasive procedures are performed to assess lung lesions and stage lung carcinomas. In cases of advanced-stage lung cancer, the biopsy may provide the only diagnostic tissue. The aim of this study was to determine which method (fine-needle aspiration [FNA], core biopsy [CBx], or both [B]) is optimal for providing sufficient tissue for rendering a specific diagnosis and pursuing molecular studies for guiding tumor-specific treatment. A search was performed for computed tomography-guided lung FNA, CBx, or B cases with rapid onsite evaluation. Carcinomas were assessed for the adequacy to render a specific diagnosis; this was defined as enough refinement to subtype a primary carcinoma or to assess a metastatic origin morphologically and/or immunohistochemically. In cases of primary lung adenocarcinoma, the capability of each modality to yield sufficient tissue for molecular studies (epidermal growth factor receptor, KRAS, or anaplastic lymphoma kinase) was also assessed. There were 210 cases, and 134 represented neoplasms, including 115 carcinomas. For carcinomas, a specific diagnosis was reached in 89% of FNA cases (33 of 37), 98% of CBx cases (43 of 44), and 100% of B cases (34 of 34). For primary lung adenocarcinomas, adequate tissue remained to perform molecular studies in 94% of FNA cases (16 of 17), 100% of CBx cases (19 of 19), and 86% of B cases (19 of 22). No statistical difference was found among the modalities for either reaching a specific diagnosis (p = .07, Fisher exact test) or providing sufficient tissue for molecular studies (p = .30, Fisher exact test). The results suggest that FNA, CBx, and B are comparable for arriving at a specific diagnosis and having sufficient tissue for molecular studies: they specifically attained the diagnostic and prognostic goals of minimally invasive procedures for lung carcinoma. © 2015 American Cancer Society.

  14. Geoscience Education Research Methods: Thinking About Sample Size

    NASA Astrophysics Data System (ADS)

    Slater, S. J.; Slater, T. F.; CenterAstronomy; Physics Education Research

    2011-12-01

    Geoscience education research is at a critical point at which conditions are sufficient to propel our field forward toward meaningful improvements in geosciences education practices. Our field has now reached a point where the outcomes of our research are deemed important to end-users and funding agencies, and where we now have a large number of scientists who are either formally trained in geosciences education research, or who have dedicated themselves to excellence in this domain. At this point we must collectively work through our epistemology, our rules of what methodologies will be considered sufficiently rigorous, and what data and analysis techniques will be acceptable for constructing evidence. In particular, we have to work out our answer to that most difficult of research questions: "How big should my 'N' be?" This paper presents a very brief answer to that question, addressing both quantitative and qualitative methodologies. Research question/methodology alignment, effect size and statistical power will be discussed, in addition to a defense of the notion that bigger is not always better.

  15. The bag-of-frames approach: A not so sufficient model for urban soundscapes

    NASA Astrophysics Data System (ADS)

    Lagrange, Mathieu; Lafay, Grégoire; Défréville, Boris; Aucouturier, Jean-Julien

    2015-11-01

    The "bag-of-frames" approach (BOF), which encodes audio signals as the long-term statistical distribution of short-term spectral features, is commonly regarded as an effective and sufficient way to represent environmental sound recordings (soundscapes) since its introduction in an influential 2007 article. The present paper describes a concep-tual replication of this seminal article using several new soundscape datasets, with results strongly questioning the adequacy of the BOF approach for the task. We show that the good accuracy originally re-ported with BOF likely result from a particularly thankful dataset with low within-class variability, and that for more realistic datasets, BOF in fact does not perform significantly better than a mere one-point av-erage of the signal's features. Soundscape modeling, therefore, may not be the closed case it was once thought to be. Progress, we ar-gue, could lie in reconsidering the problem of considering individual acoustical events within each soundscape.

  16. Oxidative stress tolerance in intertidal red seaweed Hypnea musciformis (Wulfen) in relation to environmental components.

    PubMed

    Maharana, Dusmant; Das, Priya Brata; Verlecar, Xivanand N; Pise, Navnath M; Gauns, Manguesh

    2015-12-01

    Oxidative stress parameters in relation to temperature and other factors have been analysed in Hypnea musciformis, the red seaweed from Anjuna beach, Goa, with an aim to understand its susceptibility to the changing seasons. The results indicate that elevated temperature, sunshine and desiccation during peak summer in May enhanced the levels of lipid peroxides and hydrogen peroxide and the activity of antioxidants such as catalase, glutathione and ascorbic acid. Statistical tests using multivariate analysis of variance and correlation analysis showed that oxidative stress and antioxidants maintain significant relations with temperature, salinity, sunshine and pH at an individual or interactive level. The dissolved nitrates, phosphates and biological oxygen demand in ambient waters and the trace metals in the seaweeds maintained sufficiently low values to give no indication of contaminant-driven oxidative stress responses. The present field studies suggest that the elevated antioxidant content in H. musciformis offers sufficient relief to sustain against harsh environmental stresses for its colonization in the rocky intertidal zone.

  17. Assessment of Hygiene Habits in Acrylic Denture Wearers: a Cross-sectional Study

    PubMed Central

    Aoun, Georges; Gerges, Elie

    2017-01-01

    Objectives: To assess the denture hygiene habits in a population of Lebanese denture wearers. Materials and Methods: One hundred and thirty-two (132) patients [71 women (53.8%) and 61 men (46.2%)] who had worn their acrylic dentures for more than two years were included in this study. Their denture hygiene methods were evaluated and the data obtained were analyzed statistically using the IBM® SPSS® Statistics 20.0 (USA) statistical package. Results: Regardless of the cleaning technique, the large majority of our participants [123 out of 132 (93.1%)] cleaned their dentures daily. The two most commonly used denture cleaning techniques were rinsing with tap water (34.1%) and brushing with toothpaste (31.8%). Nearly half of our patients (45.5%) soaked their dentures during the night, most of them with cleansing tablets dissolved in water (28.8%). Conclusions: Within the limitations of our study, it was concluded that in the sample of the Lebanese population surveyed about denture hygiene habits, the daily frequency of denture cleaning is satisfactory, but the techniques and products used were self-estimated and, consequently, not sufficient. PMID:29109670

  18. Design of experiments enhanced statistical process control for wind tunnel check standard testing

    NASA Astrophysics Data System (ADS)

    Phillips, Ben D.

    The current wind tunnel check standard testing program at NASA Langley Research Center is focused on increasing data quality, uncertainty quantification and overall control and improvement of wind tunnel measurement processes. The statistical process control (SPC) methodology employed in the check standard testing program allows for the tracking of variations in measurements over time as well as an overall assessment of facility health. While the SPC approach can and does provide researchers with valuable information, it has certain limitations in the areas of process improvement and uncertainty quantification. It is thought by utilizing design of experiments methodology in conjunction with the current SPC practices that one can efficiently and more robustly characterize uncertainties and develop enhanced process improvement procedures. In this research, methodologies were developed to generate regression models for wind tunnel calibration coefficients, balance force coefficients and wind tunnel flow angularities. The coefficients of these regression models were then tracked in statistical process control charts, giving a higher level of understanding of the processes. The methodology outlined is sufficiently generic such that this research can be applicable to any wind tunnel check standard testing program.

  19. Applying the multivariate time-rescaling theorem to neural population models

    PubMed Central

    Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon

    2011-01-01

    Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based upon the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models which neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem, and provide a practical step-by-step procedure for applying it towards testing the sufficiency of neural population models. Using several simple analytically tractable models and also more complex simulated and real data sets, we demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. PMID:21395436
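
    The univariate version of the test extended by this paper can be sketched in a few lines: integrate the fitted conditional intensity between spikes, transform the rescaled intervals to uniforms, and apply a Kolmogorov-Smirnov test. The intensity function, simulation settings, and integration grid below are illustrative assumptions; the multivariate extension described in the paper adds cross-neuron terms that this sketch omits.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    def rate(t):
        # Assumed conditional intensity lambda(t) in spikes/s (inhomogeneous Poisson here).
        return 5.0 + 4.0 * np.sin(2 * np.pi * t / 10.0)

    # Simulate spikes from that intensity by thinning, so the model is correct by construction.
    t_max, lam_max = 100.0, 9.0
    cand = np.cumsum(rng.exponential(1.0 / lam_max, size=2000))
    cand = cand[cand < t_max]
    spikes = cand[rng.random(cand.size) < rate(cand) / lam_max]

    # Time-rescaling: integrate the intensity between successive spikes; under a correct model
    # the rescaled intervals are i.i.d. Exponential(1), so 1 - exp(-interval) is Uniform(0, 1).
    grid = np.linspace(0, t_max, 200001)
    cum = np.concatenate([[0.0], np.cumsum(rate(grid[:-1]) * np.diff(grid))])
    Lambda = np.interp(spikes, grid, cum)
    z = 1.0 - np.exp(-np.diff(Lambda))

    # KS test against Uniform(0, 1); the p-value should typically be large here because the
    # fitted model equals the generating model.
    print(stats.kstest(z, "uniform"))
    ```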

  20. Statistical robustness of machine-learning estimates for characterizing a groundwater-surface water system, Southland, New Zealand

    NASA Astrophysics Data System (ADS)

    Friedel, M. J.; Daughney, C.

    2016-12-01

    The development of a successful surface-groundwater management strategy depends on the quality of data provided for analysis. This study evaluates the statistical robustness when using a modified self-organizing map (MSOM) technique to estimate missing values for three hypersurface models: synoptic groundwater-surface water hydrochemistry, time-series of groundwater-surface water hydrochemistry, and mixed-survey (combination of groundwater-surface water hydrochemistry and lithologies) hydrostratigraphic unit data. These models of increasing complexity are developed and validated based on observations from the Southland region of New Zealand. In each case, the estimation method is sufficiently robust to cope with groundwater-surface water hydrochemistry vagaries due to sample size and extreme data insufficiency, even when >80% of the data are missing. The estimation of surface water hydrochemistry time series values enabled the evaluation of seasonal variation, and the imputation of lithologies facilitated the evaluation of hydrostratigraphic controls on groundwater-surface water interaction. The robust statistical results for groundwater-surface water models of increasing data complexity provide justification to apply the MSOM technique in other regions of New Zealand and abroad.

  1. Is Inferior Alveolar Nerve Block Sufficient for Routine Dental Treatment in 4- to 6-year-old Children?

    PubMed Central

    Erfanparast, Leila; Sheykhgermchi, Sanaz; Ghanizadeh, Milad

    2017-01-01

    Introduction Pain control is one of the most important aspects of behavior management in children. The most common way to achieve pain control is by using local anesthetics (LA). Many studies describe that the buccal nerve innervates the buccal gingiva and mucosa of the mandible over a variable extent, from the vicinity of the lower third molar to the lower canine. Given the importance of appropriate and complete LA in child-behavior control, in this study we examined the frequency of buccal gingival anesthesia of the primary mandibular molars and canine after inferior alveolar nerve block injection in 4- to 6-year-old children. Study design In this descriptive cross-sectional study, 220 children aged 4 to 6 years were randomly selected and entered into the study. The inferior alveolar nerve block was injected with the same method and standards for all children, and after ensuring the success of the block injection, anesthesia of the buccal mucosa of the primary molars and canine was examined with a stick test, with the child's reaction scored on the sound, eye, motor (SEM) scale. The data from the study were analyzed using descriptive statistics and the Statistical Package for the Social Sciences (SPSS) version 21. Results The area most frequently found not anesthetized was the distobuccal gingiva of the second primary molars. The area least frequently found not anesthetized was the gingiva of the primary canine. Conclusion According to this study, in 15 to 30% of cases the buccal mucosa of the primary mandibular molars is not anesthetized after inferior alveolar nerve block injection. How to cite this article: Pourkazemi M, Erfanparast L, Sheykhgermchi S, Ghanizadeh M. Is Inferior Alveolar Nerve Block Sufficient for Routine Dental Treatment in 4- to 6-year-old Children? Int J Clin Pediatr Dent 2017;10(4):369-372. PMID:29403231

  2. Mathematics Anxiety and Statistics Anxiety. Shared but Also Unshared Components and Antagonistic Contributions to Performance in Statistics

    PubMed Central

    Paechter, Manuela; Macher, Daniel; Martskvishvili, Khatuna; Wimmer, Sigrid; Papousek, Ilona

    2017-01-01

    In many social science majors, e.g., psychology, students report high levels of statistics anxiety. However, these majors are often chosen by students who are less prone to mathematics and who might have experienced difficulties and unpleasant feelings in their mathematics courses at school. The present study investigates whether statistics anxiety is a genuine form of anxiety that impairs students' achievements or whether learners mainly transfer previous experiences in mathematics and their anxiety in mathematics to statistics. The relationship between mathematics anxiety and statistics anxiety, their relationship to learning behaviors and to performance in a statistics examination were investigated in a sample of 225 undergraduate psychology students (164 women, 61 men). Data were recorded at three points in time: At the beginning of term students' mathematics anxiety, general proneness to anxiety, school grades, and demographic data were assessed; 2 weeks before the end of term, they completed questionnaires on statistics anxiety and their learning behaviors. At the end of term, examination scores were recorded. Mathematics anxiety and statistics anxiety correlated highly but the comparison of different structural equation models showed that they had genuine and even antagonistic contributions to learning behaviors and performance in the examination. Surprisingly, mathematics anxiety was positively related to performance. It might be that students realized over the course of their first term that knowledge and skills in higher secondary education mathematics are not sufficient to be successful in statistics. Part of mathematics anxiety may then have strengthened positive extrinsic effort motivation by the intention to avoid failure and may have led to higher effort for the exam preparation. However, via statistics anxiety mathematics anxiety also had a negative contribution to performance. Statistics anxiety led to higher procrastination in the structural equation model and, therefore, contributed indirectly and negatively to performance. Furthermore, it had a direct negative impact on performance (probably via increased tension and worry in the exam). The results of the study speak for shared but also unique components of statistics anxiety and mathematics anxiety. They are also important for instruction and give recommendations to learners as well as to instructors. PMID:28790938

  3. Mathematics Anxiety and Statistics Anxiety. Shared but Also Unshared Components and Antagonistic Contributions to Performance in Statistics.

    PubMed

    Paechter, Manuela; Macher, Daniel; Martskvishvili, Khatuna; Wimmer, Sigrid; Papousek, Ilona

    2017-01-01

    In many social science majors, e.g., psychology, students report high levels of statistics anxiety. However, these majors are often chosen by students who are less prone to mathematics and who might have experienced difficulties and unpleasant feelings in their mathematics courses at school. The present study investigates whether statistics anxiety is a genuine form of anxiety that impairs students' achievements or whether learners mainly transfer previous experiences in mathematics and their anxiety in mathematics to statistics. The relationship between mathematics anxiety and statistics anxiety, their relationship to learning behaviors and to performance in a statistics examination were investigated in a sample of 225 undergraduate psychology students (164 women, 61 men). Data were recorded at three points in time: At the beginning of term students' mathematics anxiety, general proneness to anxiety, school grades, and demographic data were assessed; 2 weeks before the end of term, they completed questionnaires on statistics anxiety and their learning behaviors. At the end of term, examination scores were recorded. Mathematics anxiety and statistics anxiety correlated highly but the comparison of different structural equation models showed that they had genuine and even antagonistic contributions to learning behaviors and performance in the examination. Surprisingly, mathematics anxiety was positively related to performance. It might be that students realized over the course of their first term that knowledge and skills in higher secondary education mathematics are not sufficient to be successful in statistics. Part of mathematics anxiety may then have strengthened positive extrinsic effort motivation by the intention to avoid failure and may have led to higher effort for the exam preparation. However, via statistics anxiety mathematics anxiety also had a negative contribution to performance. Statistics anxiety led to higher procrastination in the structural equation model and, therefore, contributed indirectly and negatively to performance. Furthermore, it had a direct negative impact on performance (probably via increased tension and worry in the exam). The results of the study speak for shared but also unique components of statistics anxiety and mathematics anxiety. They are also important for instruction and give recommendations to learners as well as to instructors.

  4. Hydrogen scrambling in ethane induced by intense laser fields: statistical analysis of coincidence events.

    PubMed

    Kanya, Reika; Kudou, Tatsuya; Schirmel, Nora; Miura, Shun; Weitzel, Karl-Michael; Hoshina, Kennosuke; Yamanouchi, Kaoru

    2012-05-28

    Two-body Coulomb explosion processes of ethane (CH(3)CH(3)) and its isotopomers (CD(3)CD(3) and CH(3)CD(3)) induced by an intense laser field (800 nm, 1.0 × 10(14) W/cm(2)) with three different pulse durations (40 fs, 80 fs, and 120 fs) are investigated by a coincidence momentum imaging method. On the basis of statistical treatment of the coincidence data, the contributions from false coincidence events are estimated and the relative yields of the decomposition pathways are determined with sufficiently small uncertainties. The branching ratios of the two-body decomposition pathways of CH(3)CD(3) from which triatomic hydrogen molecular ions (H(3)(+), H(2)D(+), HD(2)(+), D(3)(+)) are ejected show that protons and deuterons within CH(3)CD(3) are scrambled almost statistically prior to the ejection of a triatomic hydrogen molecular ion. The branching ratios were also estimated by statistical Rice-Ramsperger-Kassel-Marcus (RRKM) calculations assuming a transition state with a hindered rotation of a diatomic hydrogen moiety. The hydrogen scrambling dynamics followed by the two-body decomposition processes are further discussed using the anisotropies in the ejection directions of the fragment ions and the kinetic energy distributions of the two-body decomposition pathways.

  5. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
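    The Monte Carlo route to exact-test p-values mentioned as method 1 can be sketched in a few lines; the statistic, data, and null-simulation scheme below are placeholders for illustration, not the vxdbel implementation.

    ```python
    import numpy as np

    def monte_carlo_p_value(t_obs, simulate_null_stat, n_sim=9999, seed=1):
        """Estimate an exact-test p-value by simulating the null distribution.

        Uses the standard (B*+1)/(B+1) form so the estimated p-value is never zero.
        """
        rng = np.random.default_rng(seed)
        t_null = np.array([simulate_null_stat(rng) for _ in range(n_sim)])
        return (1 + np.sum(t_null >= t_obs)) / (n_sim + 1)

    # Illustrative use: a two-sample statistic under an exchangeable null.
    x = np.array([1.2, 0.7, 2.1, 1.9, 0.4])
    y = np.array([0.3, 0.8, 0.1, 0.6])
    t_obs = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])

    def null_stat(rng):
        perm = rng.permutation(pooled)          # relabel the groups at random
        return abs(perm[:x.size].mean() - perm[x.size:].mean())

    print(monte_carlo_p_value(t_obs, null_stat))
    ```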

  6. Exact goodness-of-fit tests for Markov chains.

    PubMed

    Besag, J; Mondal, D

    2013-06-01

    Goodness-of-fit tests are useful in assessing whether a statistical model is consistent with available data. However, the usual χ² asymptotics often fail, either because of the paucity of the data or because a nonstandard test statistic is of interest. In this article, we describe exact goodness-of-fit tests for first- and higher order Markov chains, with particular attention given to time-reversible ones. The tests are obtained by conditioning on the sufficient statistics for the transition probabilities and are implemented by simple Monte Carlo sampling or by Markov chain Monte Carlo. They apply both to single and to multiple sequences and allow a free choice of test statistic. Three examples are given. The first concerns multiple sequences of dry and wet January days for the years 1948-1983 at Snoqualmie Falls, Washington State, and suggests that standard analysis may be misleading. The second one is for a four-state DNA sequence and lends support to the original conclusion that a second-order Markov chain provides an adequate fit to the data. The last one is six-state atomistic data arising in molecular conformational dynamics simulation of solvated alanine dipeptide and points to strong evidence against a first-order reversible Markov chain at 6 picosecond time steps. © 2013, The International Biometric Society.
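    As a simplified illustration of the Monte Carlo ingredient, the sketch below tests a first-order chain against second-order dependence using a parametric bootstrap from the fitted chain. This is a stand-in for, not a reproduction of, the exact conditional test described above, which simulates while holding the sufficient transition counts fixed; all names and sizes are illustrative.

    ```python
    import numpy as np

    def fit_first_order(seq, k):
        """MLE transition matrix of a first-order chain on states 0..k-1."""
        C = np.zeros((k, k))
        for a, b in zip(seq[:-1], seq[1:]):
            C[a, b] += 1
        P = C / np.maximum(C.sum(axis=1, keepdims=True), 1)
        return C, P

    def second_order_chi2(seq, k, P):
        """Chi-square comparing triple counts to first-order expectations."""
        N3 = np.zeros((k, k, k))
        for a, b, c in zip(seq[:-2], seq[1:-1], seq[2:]):
            N3[a, b, c] += 1
        N2 = N3.sum(axis=2)                    # counts of (a, b) pairs among triples
        E = N2[:, :, None] * P[None, :, :]     # expected triples under first order
        mask = E > 0
        return np.sum((N3[mask] - E[mask]) ** 2 / E[mask])

    def simulate_chain(P, n, start, rng):
        # Assumes every reachable state has at least one observed outgoing transition.
        seq = [start]
        for _ in range(n - 1):
            seq.append(rng.choice(P.shape[0], p=P[seq[-1]]))
        return seq

    def mc_goodness_of_fit(seq, k, n_sim=199, seed=2):
        rng = np.random.default_rng(seed)
        _, P = fit_first_order(seq, k)
        t_obs = second_order_chi2(seq, k, P)
        t_null = []
        for _ in range(n_sim):
            sim = simulate_chain(P, len(seq), seq[0], rng)
            _, P_s = fit_first_order(sim, k)
            t_null.append(second_order_chi2(sim, k, P_s))
        return (1 + np.sum(np.array(t_null) >= t_obs)) / (n_sim + 1)

    rng = np.random.default_rng(11)
    seq = list(rng.integers(0, 3, size=500))   # replace with an observed state sequence
    print(mc_goodness_of_fit(seq, k=3))
    ```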

  7. Changes of statistical structural fluctuations unveils an early compacted degraded stage of PNS myelin

    NASA Astrophysics Data System (ADS)

    Poccia, Nicola; Campi, Gaetano; Ricci, Alessandro; Caporale, Alessandra S.; di Cola, Emanuela; Hawkins, Thomas A.; Bianconi, Antonio

    2014-06-01

    Degradation of the myelin sheath is a common pathology underlying demyelinating neurological diseases from Multiple Sclerosis to Leukodistrophies. Although large malformations of myelin ultrastructure in the advanced stages of Wallerian degradation is known, its subtle structural variations at early stages of demyelination remains poorly characterized. This is partly due to the lack of suitable and non-invasive experimental probes possessing sufficient resolution to detect the degradation. Here we report the feasibility of the application of an innovative non-invasive local structure experimental approach for imaging the changes of statistical structural fluctuations in the first stage of myelin degeneration. Scanning micro X-ray diffraction, using advances in synchrotron x-ray beam focusing, fast data collection, paired with spatial statistical analysis, has been used to unveil temporal changes in the myelin structure of dissected nerves following extraction of the Xenopus laevis sciatic nerve. The early myelin degeneration is a specific ordered compacted phase preceding the swollen myelin phase of Wallerian degradation. Our demonstration of the feasibility of the statistical analysis of SµXRD measurements using biological tissue paves the way for further structural investigations of degradation and death of neurons and other cells and tissues in diverse pathological states where nanoscale structural changes may be uncovered.

  8. Statistics and Informatics in Space Astrophysics

    NASA Astrophysics Data System (ADS)

    Feigelson, E.

    2017-12-01

    The interest in statistical and computational methodology has seen rapid growth in space-based astrophysics, parallel to the growth seen in Earth remote sensing. There is widespread agreement that scientific interpretation of the cosmic microwave background, discovery of exoplanets, and classifying multiwavelength surveys is too complex to be accomplished with traditional techniques. NASA operates several well-functioning Science Archive Research Centers providing 0.5 PBy datasets to the research community. These databases are integrated with full-text journal articles in the NASA Astrophysics Data System (200K pageviews/day). Data products use interoperable formats and protocols established by the International Virtual Observatory Alliance. NASA supercomputers also support complex astrophysical models of systems such as accretion disks and planet formation. Academic researcher interest in methodology has significantly grown in areas such as Bayesian inference and machine learning, and statistical research is underway to treat problems such as irregularly spaced time series and astrophysical model uncertainties. Several scholarly societies have created interest groups in astrostatistics and astroinformatics. Improvements are needed on several fronts. Community education in advanced methodology is not sufficiently rapid to meet the research needs. Statistical procedures within NASA science analysis software are sometimes not optimal, and pipeline development may not use modern software engineering techniques. NASA offers few grant opportunities supporting research in astroinformatics and astrostatistics.

  9. Phenomenological constraints on the bulk viscosity of QCD

    NASA Astrophysics Data System (ADS)

    Paquet, Jean-François; Shen, Chun; Denicol, Gabriel; Jeon, Sangyong; Gale, Charles

    2017-11-01

    While small at very high temperature, the bulk viscosity of Quantum Chromodynamics is expected to grow in the confinement region. Although its precise magnitude and temperature-dependence in the cross-over region is not fully understood, recent theoretical and phenomenological studies provided evidence that the bulk viscosity can be sufficiently large to have measurable consequences on the evolution of the quark-gluon plasma. In this work, a Bayesian statistical analysis is used to establish probabilistic constraints on the temperature-dependence of bulk viscosity using hadronic measurements from RHIC and LHC.

  10. Assessment of change in dynamic psychotherapy.

    PubMed

    Høglend, P; Bøgwald, K P; Amlo, S; Heyerdahl, O; Sørbye, O; Marble, A; Sjaastad, M C; Bentsen, H

    2000-01-01

    Five scales have been developed to assess changes that are consistent with the therapeutic rationales and procedures of dynamic psychotherapy. Seven raters evaluated 50 patients before and 36 patients again after brief dynamic psychotherapy. A factor analysis indicated that the scales represent a dimension that is discriminable from general symptoms. A summary measure, Dynamic Capacity, was rated with acceptable reliability by a single rater. However, average scores of three raters were needed for good reliability of change ratings. The scales seem to be sufficiently fine-grained to capture statistically and clinically significant changes during brief dynamic psychotherapy.

  11. Psychological stress measurement through voice output analysis

    NASA Technical Reports Server (NTRS)

    Older, H. J.; Jenney, L. L.

    1975-01-01

    Audio tape recordings of selected Skylab communications were processed by a psychological stress evaluator. Strip chart tracings were read blind and scores were assigned based on characteristics reported by the manufacturer to indicate psychological stress. These scores were analyzed for their empirical relationships with operational variables in Skylab judged to represent varying degrees of situational stress. Although some statistically significant relationships were found, the technique was not judged to be sufficiently predictive to warrant its use in assessing the degree of psychological stress of crew members in future space missions.

  12. Natural gas odor level testing: Instruments and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberson, E.H.

    1995-12-01

    An odor in natural and LP gases is necessary. The statistics are overwhelming; when gas customers can smell a leak before the percentage of gas in air reaches a combustible mixture, the chances of an accident are greatly reduced. How do gas companies determine if there is sufficient odor reaching every gas customers home? Injection equipment is important. The rate and quality of odorant is important. Nevertheless, precision odorization alone does not guarantee that customers` homes always have gas with a readily detectable odor. To secure that goal, odor monitoring instruments are necessary.

  13. The Exponentially Embedded Family of Distributions for Effective Data Representation, Information Extraction, and Decision Making

    DTIC Science & Technology

    2013-03-01

    information extraction and learning from data. First of all, it admits sufficient statistics and therefore provides the means for selecting good models... readily found since the Kullback-Leibler divergence can be used to ascertain distances between PDFs for various hypothesis-testing scenarios. We... Information content of T2(x) is D(p_{eta1,eta2}(t1, t2) || p_{eta1,eta2=0}(t1, t2)) = reduction in distance to the true PDF, where D(p1 || p2) is the Kullback-Leibler divergence.
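    For reference, the Kullback-Leibler divergence invoked here has a one-line discrete form; the following minimal sketch is purely illustrative and is not tied to the report's exponentially embedded family construction.

    ```python
    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        """D(p || q) for discrete distributions given as probability vectors."""
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        p, q = p / p.sum(), q / q.sum()
        mask = p > 0                      # 0 * log(0/q) = 0 by convention
        return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))

    # Example: information lost when approximating a skewed distribution by a uniform one.
    p = [0.7, 0.2, 0.1]
    q = [1/3, 1/3, 1/3]
    print(kl_divergence(p, q))   # > 0, and 0 only when p == q
    ```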

  14. PIV measurements in near-wake turbulent regions

    NASA Astrophysics Data System (ADS)

    Chen, Wei-Cheng; Chang, Keh-Chin

    2018-05-01

    Particle image velocimetry (PIV) is a non-intrusive optical diagnostic in which measurements are made of seeding particles (foreign particles) rather than of the fluid itself. However, reliable PIV measurement of turbulence requires a sufficient number of seeding particles falling within each interrogation window of the image. A gray-level criterion is developed in this work to check the attainment of a statistically stationary state of the turbulent flow properties. It is suggested that a gray level of no less than 0.66 be used as the threshold for reliable PIV measurements in the present near-wake turbulent regions.

  15. Quantum communication complexity advantage implies violation of a Bell inequality

    PubMed Central

    Buhrman, Harry; Czekaj, Łukasz; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Markiewicz, Marcin; Speelman, Florian; Strelchuk, Sergii

    2016-01-01

    We obtain a general connection between a large quantum advantage in communication complexity and Bell nonlocality. We show that given any protocol offering a sufficiently large quantum advantage in communication complexity, there exists a way of obtaining measurement statistics that violate some Bell inequality. Our main tool is port-based teleportation. If the gap between quantum and classical communication complexity can grow arbitrarily large, the ratio of the quantum value to the classical value of the Bell quantity becomes unbounded with the increase in the number of inputs and outputs. PMID:26957600

  16. Coliphages as indicators of enteroviruses.

    PubMed Central

    Stetler, R E

    1984-01-01

    Coliphages were monitored in conjunction with indicator bacteria and enteroviruses in a drinking-water plant modified to reduce trihalomethane production. Coliphages could be detected in the source water by direct inoculation, and sufficient coliphages were detected in enterovirus concentrates to permit following the coliphage levels through different water treatment processes. The recovery efficiency by different filter types ranged from 1 to 53%. Statistical analysis of the data indicated that enterovirus isolates were better correlated with coliphages than with total coliforms, fecal coliforms, fecal streptococci, or standard plate count organisms. Coliphages were not detected in finished water. PMID:6093694

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merkley, Eric D.; Sego, Landon H.; Lin, Andy

    Adaptive processes in bacterial species can occur rapidly in laboratory culture, leading to genetic divergence between naturally occurring and laboratory-adapted strains. Differentiating wild and closely-related laboratory strains is clearly important for biodefense and bioforensics; however, DNA sequence data alone has thus far not provided a clear signature, perhaps due to lack of understanding of how diverse genome changes lead to adapted phenotypes. Protein abundance profiles from mass spectrometry-based proteomics analyses are a molecular measure of phenotype. Proteomics data contains sufficient information that powerful statistical methods can uncover signatures that distinguish wild strains of Yersinia pestis from laboratory-adapted strains.

  18. Detection of non-Gaussian fluctuations in a quantum point contact.

    PubMed

    Gershon, G; Bomze, Yu; Sukhorukov, E V; Reznikov, M

    2008-07-04

    An experimental study of current fluctuations through a tunable transmission barrier, a quantum point contact, is reported. We measure the probability distribution function of transmitted charge with precision sufficient to extract the first three cumulants. To obtain the intrinsic quantities, corresponding to voltage-biased barrier, we employ a procedure that accounts for the response of the external circuit and the amplifier. The third cumulant, obtained with a high precision, is found to agree with the prediction for the statistics of transport in the non-Poissonian regime.
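    For orientation, the first three cumulants mentioned above reduce, for a sample, to the mean, the variance, and the third central moment. The sketch below uses simple moment estimators on simulated counts; it is not the experiment's analysis chain.

    ```python
    import numpy as np

    def first_three_cumulants(x):
        """Return (c1, c2, c3) estimated from a 1-D sample.

        c1 = mean, c2 = variance, c3 = third central moment. For Poissonian
        transport all three are equal, so departures of c3 from c1 signal
        non-Poissonian statistics.
        """
        x = np.asarray(x, dtype=float)
        c1 = x.mean()
        d = x - c1
        return c1, np.mean(d ** 2), np.mean(d ** 3)

    rng = np.random.default_rng(3)
    charge = rng.poisson(lam=50, size=100_000)   # stand-in for transmitted-charge counts
    print(first_three_cumulants(charge))         # all three close to 50 for Poisson
    ```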

  19. Detection of Non-Gaussian Fluctuations in a Quantum Point Contact

    NASA Astrophysics Data System (ADS)

    Gershon, G.; Bomze, Yu.; Sukhorukov, E. V.; Reznikov, M.

    2008-07-01

    An experimental study of current fluctuations through a tunable transmission barrier, a quantum point contact, is reported. We measure the probability distribution function of transmitted charge with precision sufficient to extract the first three cumulants. To obtain the intrinsic quantities, corresponding to voltage-biased barrier, we employ a procedure that accounts for the response of the external circuit and the amplifier. The third cumulant, obtained with a high precision, is found to agree with the prediction for the statistics of transport in the non-Poissonian regime.

  20. A short note on jackknifing the concordance correlation coefficient.

    PubMed

    Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir

    2014-02-10

    Lin's concordance correlation coefficient (CCC) is a very popular scaled index of agreement used in applied statistics. To obtain a confidence interval (CI) for the estimate of CCC, jackknifing was proposed and shown to perform well in simulation as well as in applications. However, a theoretical proof of the validity of the jackknife CI for the CCC has not been presented yet. In this note, we establish a sufficient condition for using the jackknife method to construct the CI for the CCC. Copyright © 2013 John Wiley & Sons, Ltd.
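    A minimal sketch of the delete-one jackknife interval for Lin's CCC is shown below, computed on the Fisher z scale as is commonly done; the simulated data and the specific choices (z transformation, t quantile) are illustrative assumptions rather than the authors' procedure.

    ```python
    import numpy as np
    from scipy import stats

    def ccc(x, y):
        """Lin's concordance correlation coefficient."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxy = np.mean((x - x.mean()) * (y - y.mean()))
        return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

    def ccc_jackknife_ci(x, y, alpha=0.05):
        """Delete-one jackknife CI for the CCC on the Fisher-z scale."""
        n = len(x)
        z_all = np.arctanh(ccc(x, y))
        z_loo = np.array([np.arctanh(ccc(np.delete(x, i), np.delete(y, i)))
                          for i in range(n)])
        pseudo = n * z_all - (n - 1) * z_loo        # jackknife pseudo-values
        est, se = pseudo.mean(), pseudo.std(ddof=1) / np.sqrt(n)
        t = stats.t.ppf(1 - alpha / 2, df=n - 1)
        return np.tanh(est), (np.tanh(est - t * se), np.tanh(est + t * se))

    rng = np.random.default_rng(4)
    x = rng.normal(size=60)
    y = 0.9 * x + rng.normal(scale=0.3, size=60)    # two "raters" in fair agreement
    print(ccc_jackknife_ci(x, y))
    ```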

  1. A quantum heuristic algorithm for the traveling salesman problem

    NASA Astrophysics Data System (ADS)

    Bang, Jeongho; Ryu, Junghee; Lee, Changhyoup; Yoo, Seokwon; Lim, James; Lee, Jinhyoung

    2012-12-01

    We propose a quantum heuristic algorithm to solve the traveling salesman problem by generalizing the Grover search. Sufficient conditions are derived under which the probability of finding the tours with the cheapest costs is greatly enhanced, approaching unity. These conditions are characterized by the statistical properties of the tour costs and are shown to be automatically satisfied in the large-number limit of cities. In particular, for a continuous distribution of the tours along the cost, we show that the quantum heuristic algorithm exhibits a quadratic speedup compared to its classical heuristic counterpart.

  2. Perception in statistical graphics

    NASA Astrophysics Data System (ADS)

    VanderPlas, Susan Ruth

    There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.

  3. Towards accurate modelling of galaxy clustering on small scales: testing the standard ΛCDM + halo model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-07-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter haloes. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the `accurate' regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard Λ cold dark matter (ΛCDM) + halo model against the clustering of Sloan Digital Sky Survey (SDSS) seventh data release (DR7) galaxies. Specifically, we use the projected correlation function, group multiplicity function, and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir haloes) matches the clustering of low-luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the `standard' halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  4. "Plateau"-related summary statistics are uninformative for comparing working memory models.

    PubMed

    van den Berg, Ronald; Ma, Wei Ji

    2014-10-01

    Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon (Ma, Husain, & Bays, Nature Neuroscience 17, 347-356, 2014). Zhang and Luck (Nature 453(7192), 233-235, 2008) and Anderson, Vogel, and Awh (Attention, Perception, & Psychophysics 74(5), 891-910, 2011) noticed that as more items need to be remembered, "memory noise" seems to first increase and then reach a "stable plateau." They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided at most 0.15% of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials were required to achieve 99% correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. Therefore, at realistic numbers of trials, plateau-related summary statistics are highly unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (Attention, Perception, & Psychophysics 74(5), 891-910, 2011), we found that the evidence in the summary statistics was at most 0.12% of the evidence in the raw data and far too weak to warrant any conclusions. The evidence in the raw data, in fact, strongly favored the slotless model. These findings call into question claims about working memory that are based on summary statistics.

  5. Implications of clinical trial design on sample size requirements.

    PubMed

    Leon, Andrew C

    2008-07-01

    The primary goal in designing a randomized controlled clinical trial (RCT) is to minimize bias in the estimate of treatment effect. Randomized group assignment, double-blinded assessments, and control or comparison groups reduce the risk of bias. The design must also provide sufficient statistical power to detect a clinically meaningful treatment effect and maintain a nominal level of type I error. An attempt to integrate neurocognitive science into an RCT poses additional challenges. Two particularly relevant aspects of such a design often receive insufficient attention in an RCT. Multiple outcomes inflate type I error, and an unreliable assessment process introduces bias and reduces statistical power. Here we describe how both unreliability and multiple outcomes can increase the study costs and duration and reduce the feasibility of the study. The objective of this article is to consider strategies that overcome the problems of unreliability and multiplicity.
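    Both issues can be made concrete with a standard normal-approximation sample-size formula for a two-arm comparison of means: unreliability attenuates the standardized effect by the square root of the reliability, and a Bonferroni correction for multiple primary outcomes shrinks the per-test alpha. The sketch below is a generic illustration of that arithmetic, not a formula taken from the article.

    ```python
    from scipy.stats import norm

    def n_per_arm(effect_size, alpha=0.05, power=0.80, n_outcomes=1, reliability=1.0):
        """Approximate per-arm sample size for a two-sample comparison of means.

        effect_size : true standardized difference (Cohen's d)
        n_outcomes  : Bonferroni-adjusts alpha for multiple primary outcomes
        reliability : measurement reliability; the observed effect is
                      attenuated to d * sqrt(reliability)
        """
        d_obs = effect_size * reliability ** 0.5
        alpha_adj = alpha / n_outcomes
        z_a = norm.ppf(1 - alpha_adj / 2)
        z_b = norm.ppf(power)
        return 2 * ((z_a + z_b) / d_obs) ** 2

    # One reliable primary outcome vs. five outcomes measured with reliability 0.7:
    print(round(n_per_arm(0.5)))                                  # about 63 per arm
    print(round(n_per_arm(0.5, n_outcomes=5, reliability=0.7)))   # substantially larger
    ```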

  6. Sensitivity of precipitation statistics to urban growth in a subtropical coastal megacity cluster.

    PubMed

    Holst, Christopher Claus; Chan, Johnny C L; Tam, Chi-Yung

    2017-09-01

    This short paper presents an investigation of how human activities may or may not affect precipitation, based on numerical simulations of precipitation in a benchmark case with modified lower boundary conditions representing different stages of urban development in the model. The results indicate that certain degrees of urbanization significantly affect the likelihood of heavy precipitation, while less urbanized or smaller cities are much less prone to these effects. This result can be explained on the basis of our previous work, in which the sensitivity of precipitation statistics to surface anthropogenic heat sources lies in the generation of buoyancy and turbulence in the planetary boundary layer and its dissipation through the triggering of convection. Thus only megacities of sufficient size, and hence sufficient human-activity-related anthropogenic heat emission, can be expected to experience such effects. In other words, as cities grow, their effects upon precipitation appear to grow as well. Copyright © 2017. Published by Elsevier B.V.

  7. Standardised method of determining vibratory perception thresholds for diagnosis and screening in neurological investigation.

    PubMed Central

    Goldberg, J M; Lindblom, U

    1979-01-01

    Vibration threshold determinations were made by means of an electromagnetic vibrator at three sites (carpal, tibial, and tarsal), which were primarily selected for examining patients with polyneuropathy. Because of the vast variation demonstrated for both vibrator output and tissue damping, the thresholds were expressed in terms of amplitude of stimulator movement measured by means of an accelerometer, instead of applied voltage, which is commonly used. Statistical analysis revealed a higher power of discrimination for amplitude measurements at all three stimulus sites. Digital read-out gave the best statistical result and was also most practical. Reference values obtained from 110 healthy males, 10 to 74 years of age, were highly correlated with age for both upper and lower extremities. The variance of the vibration perception threshold was less than that of the disappearance threshold, and determination of the perception threshold alone may be sufficient in most cases. PMID:501379

  8. Semantic Coherence Facilitates Distributional Learning.

    PubMed

    Ouyang, Long; Boroditsky, Lera; Frank, Michael C

    2017-04-01

    Computational models have shown that purely statistical knowledge about words' linguistic contexts is sufficient to learn many properties of words, including syntactic and semantic category. For example, models can infer that "postman" and "mailman" are semantically similar because they have quantitatively similar patterns of association with other words (e.g., they both tend to occur with words like "deliver," "truck," "package"). In contrast to these computational results, artificial language learning experiments suggest that distributional statistics alone do not facilitate learning of linguistic categories. However, experiments in this paradigm expose participants to entirely novel words, whereas real language learners encounter input that contains some known words that are semantically organized. In three experiments, we show that (a) the presence of familiar semantic reference points facilitates distributional learning and (b) this effect crucially depends both on the presence of known words and the adherence of these known words to some semantic organization. Copyright © 2016 Cognitive Science Society, Inc.

  9. Probabilistic registration of an unbiased statistical shape model to ultrasound images of the spine

    NASA Astrophysics Data System (ADS)

    Rasoulian, Abtin; Rohling, Robert N.; Abolmaesumi, Purang

    2012-02-01

    The placement of an epidural needle is among the most difficult regional anesthetic techniques. Ultrasound has been proposed to improve success of placement. However, it has not become the standard-of-care because of limitations in the depictions and interpretation of the key anatomical features. We propose to augment the ultrasound images with a registered statistical shape model of the spine to aid interpretation. The model is created with a novel deformable group-wise registration method which utilizes a probabilistic approach to register groups of point sets. The method is compared to a volume-based model building technique and it demonstrates better generalization and compactness. We instantiate and register the shape model to a spine surface probability map extracted from the ultrasound images. Validation is performed on human subjects. The achieved registration accuracy (2-4 mm) is sufficient to guide the choice of puncture site and trajectory of an epidural needle.

  10. Criterion validity of the Wechsler Intelligence Scale for Children-Fourth Edition after pediatric traumatic brain injury.

    PubMed

    Donders, Jacobus; Janke, Kelly

    2008-07-01

    The performance of 40 children with complicated mild to severe traumatic brain injury on the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV; Wechsler, 2003) was compared with that of 40 demographically matched healthy controls. Of the four WISC-IV factor index scores, only Processing Speed yielded a statistically significant group difference (p < .001) as well as a statistically significant negative correlation with length of coma (p < .01). Logistic regression, using Processing Speed to classify individual children, yielded a sensitivity of 72.50% and a specificity of 62.50%, with false positive and false negative rates both exceeding 30%. We conclude that Processing Speed has acceptable criterion validity in the evaluation of children with complicated mild to severe traumatic brain injury but that the WISC-IV should be supplemented with other measures to assure sufficient accuracy in the diagnostic process.
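    As an illustration of the kind of classification analysis summarized above (logistic regression on a single index score, with sensitivity and specificity read off a confusion matrix), the following sketch uses simulated scores, not the study data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(5)

    # Simulated Processing Speed index scores: clinical group shifted downward.
    n = 40
    psi_tbi = rng.normal(85, 13, n)
    psi_ctrl = rng.normal(100, 13, n)
    X = np.concatenate([psi_tbi, psi_ctrl]).reshape(-1, 1)
    y = np.concatenate([np.ones(n), np.zeros(n)])      # 1 = clinical, 0 = control

    clf = LogisticRegression().fit(X, y)
    pred = clf.predict(X)

    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    sensitivity = tp / (tp + fn)     # clinical cases correctly classified
    specificity = tn / (tn + fp)     # controls correctly classified
    print(f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}")
    ```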

  11. Implications of Satellite Swath Width on Global Aerosol Optical Thickness Statistics

    NASA Technical Reports Server (NTRS)

    Colarco, Peter; Kahn, Ralph; Remer, Lorraine; Levy, Robert; Welton, Ellsworth

    2012-01-01

    We assess the impact of swath width on the statistics of aerosol optical thickness (AOT) retrieved by satellite as inferred from observations made by the Moderate Resolution Imaging Spectroradiometer (MODIS). We sub-sample the year 2009 MODIS data from both the Terra and Aqua spacecraft along several candidate swaths of various widths. We find that due to spatial sampling there is an uncertainty of approximately 0.01 in the global, annual mean AOT. The sub-sampled monthly mean gridded AOT are within +/- 0.01 of the full swath AOT about 20% of the time for the narrow swath sub-samples, about 30% of the time for the moderate width sub-samples, and about 45% of the time for the widest swath considered. These results suggest that future aerosol satellite missions with only a narrow swath view may not sample the true AOT distribution sufficiently to reduce significantly the uncertainty in aerosol direct forcing of climate.

  12. Using public control genotype data to increase power and decrease cost of case-control genetic association studies.

    PubMed

    Ho, Lindsey A; Lange, Ethan M

    2010-12-01

    Genome-wide association (GWA) studies are a powerful approach for identifying novel genetic risk factors associated with human disease. A GWA study typically requires the inclusion of thousands of samples to have sufficient statistical power to detect single nucleotide polymorphisms that are associated with only modest increases in risk of disease, given the heavy burden of the multiple-test correction that is necessary to maintain valid statistical tests. Low statistical power and the high financial cost of performing a GWA study remain prohibitive for many scientific investigators anxious to perform such a study using their own samples. A number of remedies have been suggested to increase statistical power and decrease cost, including the utilization of free publicly available genotype data and multi-stage genotyping designs. Herein, we compare the statistical power and relative costs of alternative association study designs that use cases and screened controls with study designs that are based only on, or additionally include, free public control genotype data. We describe a novel replication-based two-stage study design, which uses free public control genotype data in the first stage and follow-up genotype data on case-matched controls in the second stage, and which preserves many of the advantages inherent when using only an epidemiologically matched set of controls. Specifically, we show that our proposed two-stage design can substantially increase statistical power and decrease cost of performing a GWA study while controlling the type-I error rate that can be inflated when using public controls due to differences in ancestry and batch genotype effects.
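    For context, the single-marker building block of such a study is an allelic chi-square test evaluated against a genome-wide multiple-testing threshold; the sketch below uses made-up counts and a nominal one million tests, and illustrates generic GWA arithmetic rather than the proposed two-stage design.

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency

    # Allele-count table for one SNP: rows = case/control, cols = allele A/a.
    # Counts are illustrative, not from any real study.
    table = np.array([[620, 1380],     # cases
                      [520, 1480]])    # controls

    chi2, p, dof, expected = chi2_contingency(table)

    # Genome-wide multiple-test burden: Bonferroni over ~1e6 tests.
    n_tests = 1_000_000
    print(f"chi2={chi2:.1f}, p={p:.2e}, "
          f"significant at 0.05 genome-wide: {p < 0.05 / n_tests}")
    ```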

  13. Proper Image Subtraction—Optimal Transient Detection, Photometry, and Hypothesis Testing

    NASA Astrophysics Data System (ADS)

    Zackay, Barak; Ofek, Eran O.; Gal-Yam, Avishay

    2016-10-01

    Transient detection and flux measurement via image subtraction stand at the base of time domain astronomy. Due to the varying seeing conditions, the image subtraction process is non-trivial, and existing solutions suffer from a variety of problems. Starting from basic statistical principles, we develop the optimal statistic for transient detection, flux measurement, and any image-difference hypothesis testing. We derive a closed-form statistic that: (1) is mathematically proven to be the optimal transient detection statistic in the limit of background-dominated noise, (2) is numerically stable, (3) for accurately registered, adequately sampled images, does not leave subtraction or deconvolution artifacts, (4) allows automatic transient detection to the theoretical sensitivity limit by providing credible detection significance, (5) has uncorrelated white noise, (6) is a sufficient statistic for any further statistical test on the difference image, and, in particular, allows us to distinguish particle hits and other image artifacts from real transients, (7) is symmetric to the exchange of the new and reference images, (8) is at least an order of magnitude faster to compute than some popular methods, and (9) is straightforward to implement. Furthermore, we present extensions of this method that make it resilient to registration errors, color-refraction errors, and any noise source that can be modeled. In addition, we show that the optimal way to prepare a reference image is the proper image coaddition presented in Zackay & Ofek. We demonstrate this method on simulated data and real observations from the PTF data release 2. We provide an implementation of this algorithm in MATLAB and Python.
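    The closed-form difference image at the heart of this method can be written compactly in Fourier space. The sketch below is a simplified reading of the published statistic, assuming background-dominated noise, registered images, and known PSFs and flux zero points; it is not the authors' released MATLAB/Python implementation.

    ```python
    import numpy as np

    def proper_difference(N, R, Pn, Pr, sigma_n, sigma_r, Fn=1.0, Fr=1.0, eps=1e-8):
        """Fourier-space difference image D for transient detection.

        N, R      : new and reference images (same size, registered)
        Pn, Pr    : their PSFs, zero-padded to the image size and centred at pixel (0, 0)
        sigma_n/r : background noise standard deviations of the two images
        Fn, Fr    : flux zero points; eps stabilizes the division
        """
        N_hat, R_hat = np.fft.fft2(N), np.fft.fft2(R)
        Pn_hat, Pr_hat = np.fft.fft2(Pn), np.fft.fft2(Pr)
        denom = np.sqrt(sigma_n**2 * Fr**2 * np.abs(Pr_hat)**2 +
                        sigma_r**2 * Fn**2 * np.abs(Pn_hat)**2 + eps)
        # D is defined here up to an overall flux normalization constant.
        D_hat = (Fr * Pr_hat * N_hat - Fn * Pn_hat * R_hat) / denom
        return np.real(np.fft.ifft2(D_hat))
    ```

    Matched-filtering D with its own PSF then yields the detection statistic described in the abstract; that step, and the corrections for registration and color-refraction errors, are omitted here.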

  14. Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study.

    PubMed

    Song, Fujian; Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G

    2011-08-16

    Objective To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Design Meta-epidemiological study based on a sample of meta-analyses of randomised controlled trials. Data sources Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Main outcome measure Inconsistency measured by the difference in the log odds ratio between the direct and indirect methods. Results The study included 112 independent trial networks (including 1552 trials with 478,775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Conclusions Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence.
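    The arithmetic behind such comparisons is the adjusted indirect comparison through the common comparator plus a z-test on the difference between direct and indirect log odds ratios; a minimal sketch with made-up numbers (not taken from the study) follows.

    ```python
    import numpy as np
    from scipy.stats import norm

    def indirect_log_or(log_or_ac, var_ac, log_or_bc, var_bc):
        """Adjusted indirect comparison of A vs B through common comparator C."""
        return log_or_ac - log_or_bc, var_ac + var_bc

    def inconsistency_test(log_or_direct, var_direct, log_or_indirect, var_indirect):
        """z-test for the difference between direct and indirect log odds ratios."""
        w = log_or_direct - log_or_indirect
        se = np.sqrt(var_direct + var_indirect)
        z = w / se
        return w, z, 2 * norm.sf(abs(z))

    # Illustrative log odds ratios and variances.
    ind, var_ind = indirect_log_or(np.log(0.70), 0.04, np.log(0.90), 0.05)
    w, z, p = inconsistency_test(np.log(0.55), 0.03, ind, var_ind)
    print(f"inconsistency w={w:.3f}, z={z:.2f}, p={p:.3f}")
    ```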

  15. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In line with the Daubert standard and the need for improvements in forensic science, new statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data--to assure their quality and to show the importance of checking it carefully prior to conducting the statistical tests--and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models.

  16. Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study

    PubMed Central

    Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G

    2011-01-01

    Objective To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Design Meta-epidemiological study based on sample of meta-analyses of randomised controlled trials. Data sources Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Main outcome measure Inconsistency measured by the difference in the log odds ratio between the direct and indirect methods. Results The study included 112 independent trial networks (including 1552 trials with 478 775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Conclusions Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence. PMID:21846695

  17. Are the urology operating room personnel aware about the ionizing radiation?

    PubMed Central

    Tok, Adem; Akbas, Alparslan; Aytan, Nimet; Aliskan, Tamer; Cicekbilek, Izzet; Kaba, Mehmet; Tepeler, Abdulkadir

    2015-01-01

    ABSTRACT Purpose: We assessed and evaluated attitudes and knowledge regarding ionizing radiation of urology surgery room staff. Materials and Methods: A questionnaire was sent by e-mail to urology surgery room personnel in Turkey, between June and August 2013. The questionnaire included demographic questions and questions regarding radiation exposure and protection. Results: In total, 127 questionnaires were answered. Of them, 62 (48.8%) were nurses, 51 (40.2%) were other personnel, and 14 (11%) were radiological technicians. In total, 113 (89%) participants had some knowledge of radiation, but only 56 (44.1%) had received specific education or training regarding the harmful effects of radiation. In total, 92 (72.4%) participants indicated that they used a lead apron and a thyroid shield. In the subgroup that had received education about the harmful effects of radiation, the use ratio for all protective procedures was 21.4% (n=12); this ratio was only 2.8% (n=2) for those with no specific training; the difference was statistically significant (p=0.004). Regarding dosimeters, the use rates were 100% for radiology technicians, 46.8% for nurses, and 31.4% for other hospital personnel; these differences were statistically significant (p<0.001). No significant relationship between working period in the surgery room, number of daily fluoroscopy procedures, education, task, and use of radiation protection measures was found. Conclusions: It is clear that operating room-allied health personnel exposed to radiation do not have sufficient knowledge of ionizing radiation and they do not take sufficient protective measures. PMID:26689525

  18. Classification of HCV and HIV-1 Sequences with the Branching Index

    PubMed Central

    Hraber, Peter; Kuiken, Carla; Waugh, Mark; Geer, Shaun; Bruno, William J.; Leitner, Thomas

    2009-01-01

    SUMMARY Classification of viral sequences should be fast, objective, accurate, and reproducible. Most methods that classify sequences use either pairwise distances or phylogenetic relations, but cannot discern when a sequence is unclassifiable. The branching index (BI) combines distance and phylogeny methods to compute a ratio that quantifies how closely a query sequence clusters with a subtype clade. In the hypothesis-testing framework of statistical inference, the BI is compared with a threshold to test whether sufficient evidence exists for the query sequence to be classified among known sequences. If above the threshold, the null hypothesis of no support for the subtype relation is rejected and the sequence is taken as belonging to the subtype clade with which it clusters on the tree. This study evaluates statistical properties of the branching index for subtype classification in HCV and HIV-1. Pairs of BI values with known positive and negative test results were computed from 10,000 random fragments of reference alignments. Sampled fragments were of sufficient length to contain phylogenetic signal that groups reference sequences together properly into subtype clades. For HCV, a threshold BI of 0.71 yields 95.1% agreement with reference subtypes, with equal false positive and false negative rates. For HIV-1, a threshold of 0.66 yields 93.5% agreement. Higher thresholds can be used where lower false positive rates are required. In synthetic recombinants, regions without breakpoints are recognized accurately; regions with breakpoints do not uniquely represent any known subtype. Web-based services for viral subtype classification with the branching index are available online. PMID:18753218

  19. Voids and constraints on nonlinear clustering of galaxies

    NASA Technical Reports Server (NTRS)

    Vogeley, Michael S.; Geller, Margaret J.; Park, Changbom; Huchra, John P.

    1994-01-01

    Void statistics of the galaxy distribution in the Center for Astrophysics (CfA) Redshift Survey provide strong constraints on galaxy clustering in the nonlinear regime, i.e., on scales R equal to or less than 10/h Mpc. Computation of high-order moments of the galaxy distribution requires a sample that (1) densely traces the large-scale structure and (2) covers sufficient volume to obtain good statistics. The CfA redshift survey densely samples structure on scales equal to or less than 10/h Mpc and has sufficient depth and angular coverage to approach a fair sample on these scales. In the nonlinear regime, the void probability function (VPF) for CfA samples exhibits apparent agreement with hierarchical scaling (such scaling implies that the N-point correlation functions for N greater than 2 depend only on pairwise products of the two-point function xi(r)). However, simulations of cosmological models show that this scaling in redshift space does not necessarily imply such scaling in real space, even in the nonlinear regime; peculiar velocities cause distortions which can yield erroneous agreement with hierarchical scaling. The underdensity probability measures the frequency of 'voids' with density rho less than 0.2 times the mean density. This statistic reveals a paucity of very bright galaxies (L greater than L*) in the 'voids.' Underdensities are at least 2 sigma more frequent in bright galaxy samples than in samples that include fainter galaxies. Comparison of void statistics of CfA samples with simulations of a range of cosmological models favors models with Gaussian primordial fluctuations and Cold Dark Matter (CDM)-like initial power spectra. Biased models tend to produce voids that are too empty. We also compare these data with three specific models of the Cold Dark Matter cosmogony: an unbiased, open-universe CDM model (omega = 0.4, h = 0.5) provides a good match to the VPF of the CfA samples. Biasing of the galaxy distribution in the 'standard' CDM model (omega = 1, b = 1.5) and a nonzero-cosmological-constant CDM model (omega = 0.4, h = 0.6, lambda_0 = 0.6, b = 1.3) produce voids that are too empty. All three simulations match the observed VPF and underdensity probability for samples of very bright (M less than M* = -19.2) galaxies, but produce voids that are too empty when compared with samples that include fainter galaxies.
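    The void probability function used here is straightforward to estimate by Monte Carlo: drop random spheres of radius R into the sample volume and record the fraction that contain no galaxies. The sketch below does this for a periodic mock box; the box size, galaxy counts, and radii are illustrative assumptions, not the CfA survey geometry.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def void_probability(positions, box_size, radius, n_spheres=20_000, seed=6):
        """P0(R): fraction of randomly placed spheres of radius R that are empty."""
        rng = np.random.default_rng(seed)
        tree = cKDTree(positions, boxsize=box_size)        # periodic boundaries
        centres = rng.uniform(0, box_size, size=(n_spheres, 3))
        counts = np.array([len(lst) for lst in tree.query_ball_point(centres, radius)])
        return np.mean(counts == 0)

    rng = np.random.default_rng(7)
    box = 100.0                                            # Mpc/h, say
    gals = rng.uniform(0, box, size=(5000, 3))             # Poisson mock; real data clusters
    for R in (2.0, 5.0, 10.0):
        print(R, void_probability(gals, box, R))
    ```

    For the Poisson mock, P0(R) should follow exp(-n V(R)); clustered data show an excess of empty spheres at fixed mean density, which is the signal the abstract exploits.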

  20. Detection methods for non-Gaussian gravitational wave stochastic backgrounds

    NASA Astrophysics Data System (ADS)

    Drasco, Steve; Flanagan, Éanna É.

    2003-04-01

    A gravitational wave stochastic background can be produced by a collection of independent gravitational wave events. There are two classes of such backgrounds, one for which the ratio of the average time between events to the average duration of an event is small (i.e., many events are on at once), and one for which the ratio is large. In the first case the signal is continuous, sounds something like a constant hiss, and has a Gaussian probability distribution. In the second case, the discontinuous or intermittent signal sounds something like popcorn popping, and is described by a non-Gaussian probability distribution. In this paper we address the issue of finding an optimal detection method for such a non-Gaussian background. As a first step, we examine the idealized situation in which the event durations are short compared to the detector sampling time, so that the time structure of the events cannot be resolved, and we assume white, Gaussian noise in two collocated, aligned detectors. For this situation we derive an appropriate version of the maximum likelihood detection statistic. We compare the performance of this statistic to that of the standard cross-correlation statistic both analytically and with Monte Carlo simulations. In general the maximum likelihood statistic performs better than the cross-correlation statistic when the stochastic background is sufficiently non-Gaussian, resulting in a gain factor in the minimum gravitational-wave energy density necessary for detection. This gain factor ranges roughly between 1 and 3, depending on the duty cycle of the background, for realistic observing times and signal strengths for both ground and space based detectors. The computational cost of the statistic, although significantly greater than that of the cross-correlation statistic, is not unreasonable. Before the statistic can be used in practice with real detector data, further work is required to generalize our analysis to accommodate separated, misaligned detectors with realistic, colored, non-Gaussian noise.
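    The baseline against which the maximum likelihood statistic is compared is the standard cross-correlation statistic for two co-located, aligned detectors. The toy sketch below pairs it with a Monte Carlo detection threshold; the intermittent-burst model, duty cycle, and noise levels are illustrative assumptions, not the paper's detector model.

    ```python
    import numpy as np

    def cross_correlation_stat(s1, s2):
        """Simple cross-correlation detection statistic for two co-located detectors."""
        return np.sum(s1 * s2) / np.sqrt(len(s1))

    rng = np.random.default_rng(8)
    n_samples, sigma_noise = 100_000, 1.0

    # Non-Gaussian (intermittent) background common to both detectors:
    # rare bursts, on in only ~0.1% of samples.
    burst_on = rng.random(n_samples) < 1e-3
    background = np.where(burst_on, rng.normal(0, 3.0, n_samples), 0.0)

    s1 = background + rng.normal(0, sigma_noise, n_samples)
    s2 = background + rng.normal(0, sigma_noise, n_samples)
    t_obs = cross_correlation_stat(s1, s2)

    # Monte Carlo threshold from noise-only realizations (1% false-alarm probability).
    t_noise = [cross_correlation_stat(rng.normal(0, sigma_noise, n_samples),
                                      rng.normal(0, sigma_noise, n_samples))
               for _ in range(200)]
    threshold = np.quantile(t_noise, 0.99)
    print(f"statistic={t_obs:.2f}, threshold={threshold:.2f}, detected={t_obs > threshold}")
    ```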

  1. Association of SIRT-1 Gene Polymorphism and Vitamin D Level in Egyptian Patients With Rheumatoid Arthritis.

    PubMed

    Sabry, Dina; Kaddafy, Shereen Rashad; Abdelaziz, Ahmed Ali; Nassar, Abdelfattah Kasem; Rayan, Mohamed Moneer; Sadek, Sadek Mostafa; Abou-Elalla, Amany A

    2018-03-01

    We investigated the SIRT-1 genetic variant and its association with vitamin D level in Egyptian patients with rheumatoid arthritis (RA). Seventy Egyptian subjects were enrolled in our study and divided into two groups: an RA group (n = 50 patients) and a healthy control group (n = 20 subjects). A five-milliliter blood sample was withdrawn from each subject, followed by laboratory investigation and DNA extraction for SIRT-1 gene polymorphism assessment (rs7895833 A>G, rs7069102 C>G and rs2273773 C>T) and vitamin D level measurement. There was a statistically significant difference between rheumatoid cases and controls with regard to vitamin D level, with 88% of cases showing insufficient vitamin D versus all controls showing a sufficient level. Genotype frequencies of the SIRT-1 SNPs rs2273773, rs7895833 and rs7069102 differed significantly between the RA and control groups (P = 0.001). There was no statistically significant difference between the different genotypes of rs2273773, rs7895833 and rs7069102 with regard to vitamin D level. We concluded that there is a strong association between SIRT-1 polymorphism genotyping and RA. Vitamin D level was insufficient in Egyptian patients with RA.
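
    For readers unfamiliar with the underlying test, a genotype-frequency comparison of this kind is typically a chi-square test on a cases-by-genotypes contingency table; the sketch below uses made-up counts (not the study's data) purely to show the mechanics.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical genotype counts for one SNP (e.g. rs7895833 A>G); the actual
# counts from the study are not reproduced here.
#                   AA  AG  GG
counts = np.array([[12, 26, 12],   # RA patients (n = 50)
                   [11,  7,  2]])  # healthy controls (n = 20)

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.4f}")
```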

  2. Association of SIRT-1 Gene Polymorphism and Vitamin D Level in Egyptian Patients With Rheumatoid Arthritis

    PubMed Central

    Sabry, Dina; Kaddafy, Shereen Rashad; Abdelaziz, Ahmed Ali; Nassar, Abdelfattah Kasem; Rayan, Mohamed Moneer; Sadek, Sadek Mostafa; Abou-Elalla, Amany A

    2018-01-01

    Background We investigated the SIRT-1 genetic variant and its association with vitamin D level in Egyptian patients with rheumatoid arthritis (RA). Methods Seventy Egyptian subjects were enrolled in our study and divided into two groups: an RA group (n = 50 patients) and a healthy control group (n = 20 subjects). A five-milliliter blood sample was withdrawn from each subject, followed by laboratory investigation and DNA extraction for SIRT-1 gene polymorphism assessment (rs7895833 A>G, rs7069102 C>G and rs2273773 C>T) and vitamin D level measurement. Results There was a statistically significant difference between rheumatoid cases and controls with regard to vitamin D level, with 88% of cases showing insufficient vitamin D versus all controls showing a sufficient level. Genotype frequencies of the SIRT-1 SNPs rs2273773, rs7895833 and rs7069102 differed significantly between the RA and control groups (P = 0.001). There was no statistically significant difference between the different genotypes of rs2273773, rs7895833 and rs7069102 with regard to vitamin D level. Conclusion We concluded that there is a strong association between SIRT-1 polymorphism genotyping and RA. Vitamin D level was insufficient in Egyptian patients with RA. PMID:29416576

  3. Simulating the Heliosphere with Kinetic Hydrogen and Dynamic MHD Source Terms

    DOE PAGES

    Heerikhuisen, Jacob; Pogorelov, Nikolai; Zank, Gary

    2013-04-01

    The interaction between the ionized plasma of the solar wind (SW) emanating from the sun and the partially ionized plasma of the local interstellar medium (LISM) creates the heliosphere. The heliospheric interface is characterized by the tangential discontinuity known as the heliopause that separates the SW and LISM plasmas, and a termination shock on the SW side along with a possible bow shock on the LISM side. Neutral hydrogen of interstellar origin plays a critical role in shaping the heliospheric interface, since it freely traverses the heliopause. Charge-exchange between H-atoms and plasma protons couples the ions and neutrals, but the mean free paths are large, resulting in non-equilibrated energetic ion and neutral components. In our model, source terms for the MHD equations are generated using a kinetic approach for hydrogen, and the key computational challenge is to resolve these sources with sufficient statistics. For steady-state simulations, statistics can accumulate over arbitrarily long time intervals. In this paper we discuss an approach for improving the statistics in time-dependent calculations, and present results from simulations of the heliosphere where the SW conditions at the inner boundary of the computation vary according to an idealized solar cycle.
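
    The paper's specific scheme for time-dependent statistics is not reproduced here, but the generic idea of trading statistical noise against time resolution can be sketched with an exponentially weighted running average of noisy Monte Carlo source-term samples; everything below (grid, noise level, smoothing window) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: a 1-D grid of cells; each time step the kinetic module returns
# a noisy Monte Carlo estimate of the charge-exchange source term per cell.
ncells, nsteps, tau = 64, 500, 50.0      # tau = smoothing window in time steps
true_source = np.sin(np.linspace(0, np.pi, ncells))  # slowly varying "truth"

smoothed = np.zeros(ncells)
w = 1.0 - np.exp(-1.0 / tau)             # exponential-moving-average weight
for step in range(nsteps):
    # slowly time-dependent truth plus Monte Carlo shot noise
    sample = true_source * (1.0 + 0.1 * np.sin(2 * np.pi * step / nsteps)) \
             + rng.normal(0.0, 0.5, ncells)
    smoothed = (1.0 - w) * smoothed + w * sample

print("rms noise of a single sample :", 0.5)
print("rms error of smoothed source :",
      np.sqrt(np.mean((smoothed - true_source) ** 2)).round(3))
```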

  4. Discovery sequence and the nature of low permeability gas accumulations

    USGS Publications Warehouse

    Attanasi, E.D.

    2005-01-01

    There is an ongoing discussion regarding the geologic nature of accumulations that host gas in low-permeability sandstone environments. This note examines the discovery sequence of the accumulations in low permeability sandstone plays that were classified as continuous-type by the U.S. Geological Survey for the 1995 National Oil and Gas Assessment. It compares the statistical character of historical discovery sequences of accumulations associated with continuous-type sandstone gas plays to those of conventional plays. The seven sandstone plays with sufficient data exhibit declining size with sequence order, on average, and in three of the seven the trend is statistically significant. Simulation experiments show that both a skewed endowment size distribution and a discovery process that mimics sampling proportional to size are necessary to generate a discovery sequence that consistently produces a statistically significant negative size order relationship. The empirical findings suggest that discovery sequence could be used to constrain assessed gas in untested areas. The plays examined represent 134 of the 265 trillion cubic feet of recoverable gas assessed in undeveloped areas of continuous-type gas plays in low permeability sandstone environments reported in the 1995 National Assessment. © 2005 International Association for Mathematical Geology.
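
    A compact version of the kind of simulation experiment described above: draw a skewed (lognormal) endowment of accumulation sizes, "discover" them by sampling without replacement with probability proportional to size, and regress log size on discovery order to test for a negative trend. The distribution parameters are arbitrary.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)

# Skewed endowment of accumulation sizes (lognormal, arbitrary units).
sizes = rng.lognormal(mean=3.0, sigma=1.2, size=200)

# Discovery proportional to size, without replacement.
order = rng.choice(len(sizes), size=len(sizes), replace=False,
                   p=sizes / sizes.sum())
discovered = sizes[order]

# A negative slope of log(size) on discovery order mimics the declining trend
# reported for the low-permeability sandstone plays.
fit = linregress(np.arange(1, len(discovered) + 1), np.log(discovered))
print(f"slope = {fit.slope:.4f}, p-value = {fit.pvalue:.3g}")
```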

  5. Confirmation of ovarian homogeneity in post-vitellogenic cultured white sturgeon, Acipenser transmontanus.

    PubMed

    Talbott, Mariah J; Servid, Sarah A; Cavinato, Anna G; Van Eenennaam, Joel P; Doroshov, Serge I; Struffenegger, Peter; Webb, Molly A H

    2014-02-01

    Assessing stage of oocyte maturity in female sturgeon by calculating oocyte polarization index (PI) is a necessary tool for both conservation propagation managers and caviar producers to know when to hormonally induce spawning. We tested the assumption that sampling ovarian follicles from one section of one ovary is sufficient for calculating an oocyte PI representative of oocyte maturity for an individual animal. Short-wavelength near-infrared spectroscopy (SW-NIR) scans were performed on three positions per ovary for five fish prior to caviar harvest. Samples of ovarian follicles were subsequently taken from the exact location of the SW-NIR scans for calculation of oocyte PI and follicle diameter. Oocyte PI was statistically different though not biologically relevant within an ovary and between ovaries in four of five fish. Follicle diameter was statistically different but not biologically relevant within an ovary in three of five fish. There were no differences in follicle diameter between ovaries. No statistical differences were observed between SW-NIR spectra collected at different locations within an ovary or between ovaries. These results emphasize the importance of utilizing both oocyte PI measurement and progesterone-induced oocyte maturation assays while deciding when to hormonally induce spawning in sturgeon females.

  6. Decay pattern of the Pygmy Dipole Resonance in 130Te

    NASA Astrophysics Data System (ADS)

    Isaak, J.; Beller, J.; Fiori, E.; Krtička, M.; Löher, B.; Pietralla, N.; Romig, C.; Rusev, G.; Savran, D.; Scheck, M.; Silva, J.; Sonnabend, K.; Tonchev, A.; Tornow, W.; Weller, H.; Zweidinger, M.

    2014-03-01

    The electric dipole strength distribution in 130Te has been investigated using the method of Nuclear Resonance Fluorescence. The experiments were performed at the Darmstadt High Intensity Photon Setup using bremsstrahlung as photon source and at the High Intensity γ-Ray Source, where quasi-monochromatic and polarized photon beams are provided. Average decay properties of 130Te below the neutron separation energy are determined. Comparison of the experimental data with the predictions of the statistical model indicates that nuclear structure effects play an important role even at sufficiently high excitation energies. Preliminary results will be presented.

  7. Assessment of Change in Dynamic Psychotherapy

    PubMed Central

    Høglend, Per; Bøgwald, Kjell-Petter; Amlo, Svein; Heyerdahl, Oscar; Sørbye, Øystein; Marble, Alice; Sjaastad, Mary Cosgrove; Bentsen, Håvard

    2000-01-01

    Five scales have been developed to assess changes that are consistent with the therapeutic rationales and procedures of dynamic psychotherapy. Seven raters evaluated 50 patients before and 36 patients again after brief dynamic psychotherapy. A factor analysis indicated that the scales represent a dimension that is discriminable from general symptoms. A summary measure, Dynamic Capacity, was rated with acceptable reliability by a single rater. However, average scores of three raters were needed for good reliability of change ratings. The scales seem to be sufficiently fine-grained to capture statistically and clinically significant changes during brief dynamic psychotherapy. PMID:11069131

  8. Breast Reference Set Application: Chris Li-FHCRC (2014) — EDRN Public Portal

    Cancer.gov

    This application proposes to use Reference Set #1. We request access to serum samples collected at the time of breast biopsy from subjects with IC (n=30) or benign disease without atypia (n=30). Statistical power: With 30 BC cases and 30 normal controls, a 25% difference in mean metabolite levels can be detected between groups with 80% power and α=0.05, assuming coefficients of variation of 30%, consistent with our past studies. These sample sizes appear sufficient to enable detection of changes similar in magnitude to those previously reported in pre-clinical (BC recurrence) specimens (20).
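
    The stated power argument can be reproduced approximately: a 25% difference in means with a 30% coefficient of variation corresponds to a standardized effect of about 0.83, and a two-sample t-test with 30 per group then has roughly 80-90% power at alpha = 0.05. A sketch of that calculation under these assumptions (not necessarily the applicants' exact method):

```python
from statsmodels.stats.power import TTestIndPower

cv = 0.30                      # assumed coefficient of variation
effect_size = 0.25 / cv        # 25% mean difference in SD units (Cohen's d ~ 0.83)
power = TTestIndPower().power(effect_size=effect_size, nobs1=30,
                              alpha=0.05, ratio=1.0)
print(f"power with 30 cases vs 30 controls: {power:.2f}")
```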

  9. Nationwide forestry applications program. Ten-Ecosystem Study (TES) site 8, Grays Harbor County, Washington

    NASA Technical Reports Server (NTRS)

    Prill, J. C. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Level 2 forest features (softwood, hardwood, clear-cut, and water) can be classified with an overall accuracy of 71.6 percent plus or minus 6.7 percent at the 90 percent confidence level for the particular data and conditions existing at the time of the study. Signatures derived from training fields taken from only 10 percent of the site are not sufficient to adequately classify the site. The level 3 softwood age group classification appears reasonable, although no statistical evaluation was performed.

  10. Investigation of several aspects of LANDSAT-4 data quality. [Sacramento, San Francisco, and NE Arkansas

    NASA Technical Reports Server (NTRS)

    Wrigley, R. C. (Principal Investigator)

    1984-01-01

    The Thematic Mapper scene of Sacramento, CA acquired during the TDRSS test was received in TIPS format. Quadrants for both scenes were tested for band-to-band registration using reimplemented block correlation techniques. Summary statistics for band-to-band registrations of TM band combinations for Quadrant 4 of the NE Arkansas scene in TIPS format are tabulated as well as those for Quadrant 1 of the Sacramento scene. The system MTF analysis for the San Francisco scene is completed. The thermal band did not have sufficient contrast for the targets used and was not analyzed.
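
    A minimal sketch of registration by block correlation, in the spirit of (but not reproducing) the reimplemented techniques mentioned above: shift one band over a small search window and keep the offset that maximizes the normalized cross-correlation. The synthetic "scene" and shift are made up.

```python
import numpy as np

def block_offset(ref, band, max_shift=3):
    """Integer (row, col) offset of `band` relative to `ref` that maximizes
    the normalized cross-correlation over a small search window."""
    best, best_cc = (0, 0), -np.inf
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(band, dr, axis=0), dc, axis=1)
            a = ref - ref.mean()
            b = shifted - shifted.mean()
            cc = (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())
            if cc > best_cc:
                best_cc, best = cc, (dr, dc)
    return best, best_cc

rng = np.random.default_rng(3)
scene = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)  # correlated "image"
band2 = np.roll(scene, (1, -2), axis=(0, 1)) + 0.1 * rng.normal(size=scene.shape)
print(block_offset(scene, band2))   # recovers roughly (-1, 2) as the correcting shift
```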

  11. Water-quality assessment of the Trinity River Basin, Texas - Analysis of available information on nutrients and suspended sediment, 1974-91

    USGS Publications Warehouse

    Van Metre, Peter C.; Reutter, David C.

    1995-01-01

    Only limited suspended-sediment data were available. Four sites had daily sediment-discharge records for three or more water years (October 1 to September 30) between 1974 and 1985. An additional three sites had periodic measurements of suspended-sediment concentrations. There are differences in concentrations and yields among sites; however, the limited amount of data precludes developing statistical or cause-and-effect relations with environmental factors such as land use, soil, and geology. Data are sufficient, and the relation is pronounced enough, to indicate trapping of suspended sediment by Livingston Reservoir.

  12. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
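
    A toy version of the distributional-learning idea (not the authors' model): two categories generate correlated auditory and visual cues, and an unsupervised two-component GMM recovers the categories from the combined cue distribution. The cue means and variances are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Two categories, each producing a correlated auditory and visual cue.
n = 500
cat = rng.integers(0, 2, n)
auditory = rng.normal(np.where(cat == 0, -1.0, 1.0), 0.7)
visual = rng.normal(np.where(cat == 0, -0.8, 0.8), 0.9)
X = np.column_stack([auditory, visual])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)

# Accuracy of the unsupervised categories (up to label permutation).
acc = max(np.mean(labels == cat), np.mean(labels != cat))
print(f"category recovery from combined cues: {acc:.2f}")
```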

  13. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    PubMed

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.

  14. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed Central

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564
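
    The regression setup can be sketched on a simulated landscape: binary genotypes, a ground truth with additive effects plus sparse pairwise epistasis, and a ridge regression on single and pairwise features. This is only an illustration of the model class, not the RNA landscape or the sampling regimes studied.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

L, n = 10, 400                       # sequence length, number of sampled genotypes
X = rng.integers(0, 2, size=(n, L))  # binary genotypes (mutation present/absent)

# Ground-truth landscape: additive effects + sparse pairwise epistasis + noise.
beta = rng.normal(0, 1, L)
pairs = [(0, 3), (2, 7), (4, 5)]
y = X @ beta + sum(1.5 * X[:, i] * X[:, j] for i, j in pairs) + rng.normal(0, 0.3, n)

# Regression model accounting for single and pairwise mutations.
feats = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
Xf = feats.fit_transform(X)
Xtr, Xte, ytr, yte = train_test_split(Xf, y, random_state=0)
model = Ridge(alpha=1.0).fit(Xtr, ytr)
print(f"R^2 on held-out genotypes: {model.score(Xte, yte):.2f}")
```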

  15. Data analysis of gravitational-wave signals from spinning neutron stars. III. Detection statistics and computational requirements

    NASA Astrophysics Data System (ADS)

    Jaranowski, Piotr; Królak, Andrzej

    2000-03-01

    We develop the analytic and numerical tools for data analysis of the continuous gravitational-wave signals from spinning neutron stars for ground-based laser interferometric detectors. The statistical data analysis method that we investigate is maximum likelihood detection which for the case of Gaussian noise reduces to matched filtering. We study in detail the statistical properties of the optimum functional that needs to be calculated in order to detect the gravitational-wave signal and estimate its parameters. We find it particularly useful to divide the parameter space into elementary cells such that the values of the optimal functional are statistically independent in different cells. We derive formulas for false alarm and detection probabilities both for the optimal and the suboptimal filters. We assess the computational requirements needed to do the signal search. We compare a number of criteria to build sufficiently accurate templates for our data analysis scheme. We verify the validity of our concepts and formulas by means of the Monte Carlo simulations. We present algorithms by which one can estimate the parameters of the continuous signals accurately. We find, confirming earlier work of other authors, that given a 100 Gflops computational power an all-sky search for observation time of 7 days and directed search for observation time of 120 days are possible whereas an all-sky search for 120 days of observation time is computationally prohibitive.
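
    A stripped-down illustration of the matched-filtering step and the false-alarm/detection calculation for a single parameter-space cell, assuming a known unit-norm template in white Gaussian noise (the full search over cells and parameters is not implemented here):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Unit-norm template in white Gaussian noise of unit variance per sample.
N = 4096
t = np.arange(N)
template = np.sin(2 * np.pi * 0.01 * t)
template /= np.linalg.norm(template)

a = 4.0                                   # signal amplitude in SNR units
data = a * template + rng.normal(0.0, 1.0, N)

# Matched-filter statistic: Gaussian with mean a (signal) or 0 (noise), unit variance.
stat = template @ data

p_fa = 1e-3                               # chosen false-alarm probability per cell
threshold = norm.isf(p_fa)
p_det = norm.sf(threshold - a)            # analytic detection probability

print(f"statistic = {stat:.2f}, threshold = {threshold:.2f}, "
      f"detected = {stat > threshold}, analytic P_det = {p_det:.3f}")
```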

  16. Bayesian statistics and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Koch, K. R.

    2018-03-01

    The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to the point estimation by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables, whereas they are fixed quantities in traditional statistics, which is not founded on Bayes' theorem. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable amount of derivatives to be computed, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
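
    The error-propagation application mentioned above can be sketched directly: sample the measured vector from its distribution, push the samples through the nonlinear transformation, and take the empirical expectation and covariance, with no linearization derivatives required. The mean, covariance and transformation below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

# Measurements: a 2-vector with given mean and covariance.
mean = np.array([10.0, 0.5])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

# Nonlinear transformation of the measurements (polar-type quantities).
def f(x):
    r, phi = x[..., 0], x[..., 1]
    return np.stack([r * np.cos(phi), r * np.sin(phi)], axis=-1)

samples = rng.multivariate_normal(mean, cov, size=100_000)
y = f(samples)

print("Monte Carlo expectation:", y.mean(axis=0).round(4))
print("Monte Carlo covariance:\n", np.cov(y, rowvar=False).round(4))
```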

  17. Statistical significance of trace evidence matches using independent physicochemical measurements

    NASA Astrophysics Data System (ADS)

    Almirall, Jose R.; Cole, Michael; Furton, Kenneth G.; Gettinby, George

    1997-02-01

    A statistical approach to the significance of glass evidence is proposed using independent physicochemical measurements and chemometrics. Traditional interpretation of the significance of trace evidence matches or exclusions relies on qualitative descriptors such as 'indistinguishable from,' 'consistent with,' 'similar to' etc. By performing physical and chemical measurements which are independent of one another, the significance of object exclusions or matches can be evaluated statistically. One of the problems with this approach is that the human brain is excellent at recognizing and classifying patterns and shapes but performs less well when that object is represented by a numerical list of attributes. Chemometrics can be employed to group similar objects using clustering algorithms and to provide statistical significance in a quantitative manner. This approach is enhanced when population databases exist or can be created and the data in question can be evaluated against these databases. Since the selection of the variables used and their pre-processing can greatly influence the outcome, several different methods could be employed in order to obtain a more complete picture of the information contained in the data. Presently, we report on the analysis of glass samples using refractive index measurements and the quantitative analysis of the concentrations of the metals Mg, Al, Ca, Fe, Mn, Ba, Sr, Ti and Zr. The extension of this general approach to fiber and paint comparisons also is discussed. This statistical approach should not replace the current interpretative approaches to trace evidence matches or exclusions but rather yields an additional quantitative measure. The lack of sufficient general population databases containing the needed physicochemical measurements and the potential for confusion arising from statistical analysis currently hamper this approach, and ways of overcoming these obstacles are presented.
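
    A toy of the chemometric step using the variables named above (refractive index plus nine element concentrations): standardize each variable and apply hierarchical clustering. The measurements are synthetic; the point is only the standardize-then-cluster workflow.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(8)
variables = ["RI", "Mg", "Al", "Ca", "Fe", "Mn", "Ba", "Sr", "Ti", "Zr"]

# Synthetic measurements: fragments from two hypothetical glass sources.
source_a = rng.normal(0.0, 1.0, size=(8, len(variables)))
source_b = rng.normal(3.0, 1.0, size=(8, len(variables)))
X = np.vstack([source_a, source_b])

# Standardize each variable, then cluster with Ward's method.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
clusters = fcluster(linkage(Z, method="ward"), t=2, criterion="maxclust")
print("cluster assignments:", clusters)
```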

  18. Coding “What” and “When” in the Archer Fish Retina

    PubMed Central

    Vasserman, Genadiy; Shamir, Maoz; Ben Simon, Avi; Segev, Ronen

    2010-01-01

    Traditionally, the information content of the neural response is quantified using statistics of the responses relative to stimulus onset time, with the assumption that the brain uses onset time to infer stimulus identity. However, stimulus onset time must also be estimated by the brain, making the utility of such an approach questionable. How can stimulus onset be estimated from the neural responses with sufficient accuracy to ensure reliable stimulus identification? We address this question using the framework of colour coding by archer fish retinal ganglion cells. We found that stimulus identity, “what”, can be estimated from the responses of the best single cells with an accuracy comparable to that of the animal's psychophysical estimation. However, to extract this information, an accurate estimation of stimulus onset is essential. We show that stimulus onset time, “when”, can be estimated using a linear-nonlinear readout mechanism that requires the response of a population of 100 cells. Thus, stimulus onset time can be estimated using a relatively simple readout. However, large nerve cell populations are required to achieve sufficient accuracy. PMID:21079682

  19. Static versus dynamic sampling for data mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John, G.H.; Langley, P.

    1996-12-31

    As data warehouses grow to the point where one hundred gigabytes is considered small, the computational efficiency of data-mining algorithms on large databases becomes increasingly important. Using a sample from the database can speed up the data-mining process, but this is only acceptable if it does not reduce the quality of the mined knowledge. To this end, we introduce the "Probably Close Enough" criterion to describe the desired properties of a sample. Sampling usually refers to the use of static statistical tests to decide whether a sample is sufficiently similar to the large database, in the absence of any knowledge of the tools the data miner intends to use. We discuss dynamic sampling methods, which take into account the mining tool being used and can thus give better samples. We describe dynamic schemes that observe a mining tool's performance on training samples of increasing size and use these results to determine when a sample is sufficiently large. We evaluate these sampling methods on data from the UCI repository and conclude that dynamic sampling is preferable.
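
    A minimal version of such a dynamic scheme (in spirit, not the paper's exact criterion): train the chosen mining tool on progressively larger samples and stop once held-out accuracy improves by less than a tolerance.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

tol, prev_acc = 0.002, 0.0
for n in [500, 1000, 2000, 4000, 8000, 16000, 32000]:
    clf = LogisticRegression(max_iter=1000).fit(X_pool[:n], y_pool[:n])
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"n = {n:6d}  accuracy = {acc:.4f}")
    if acc - prev_acc < tol:          # learning curve has flattened
        print(f"sample of size {n} judged sufficiently large")
        break
    prev_acc = acc
```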

  20. Parallel cascade selection molecular dynamics (PaCS-MD) to generate conformational transition pathway

    NASA Astrophysics Data System (ADS)

    Harada, Ryuhei; Kitao, Akio

    2013-07-01

    Parallel Cascade Selection Molecular Dynamics (PaCS-MD) is proposed as a molecular simulation method to generate a conformational transition pathway under the condition that a set of "reactant" and "product" structures is known a priori. In PaCS-MD, a cycle of multiple short independent molecular dynamics simulations, followed by selection of the structures closest to the product structure as starting points for the next cycle, is repeated until the simulated structures move sufficiently close to the product. Folding of the 10-residue mini-protein chignolin from the extended to the native structure and the open-close conformational transition of T4 lysozyme were investigated by PaCS-MD. In both cases, tens of cycles of 100-ps MD were sufficient to reach the product structures, indicating the efficient generation of conformational transition pathways in PaCS-MD with a series of conventional MD runs without additional external biases. Using the snapshots along the pathway as initial coordinates, free energy landscapes were calculated in combination with multiple independent umbrella samplings to statistically elucidate the conformational transition pathways.
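
    The cycle structure can be illustrated with a toy in which overdamped Langevin steps on a 2-D double-well potential stand in for the short MD runs; the replicas closest to the product state are selected to seed the next cycle. All parameters are invented and no MD engine is used.

```python
import numpy as np

rng = np.random.default_rng(9)

def grad_potential(x):
    # Double-well in x[..., 0], harmonic in x[..., 1]: minima near (-1, 0) and (+1, 0).
    return np.stack([4 * x[..., 0] * (x[..., 0] ** 2 - 1), 2 * x[..., 1]], axis=-1)

def short_run(x, nsteps=200, dt=1e-3, kT=0.5):
    # Overdamped Langevin dynamics standing in for a short MD simulation.
    for _ in range(nsteps):
        x = x - grad_potential(x) * dt + np.sqrt(2 * kT * dt) * rng.normal(size=x.shape)
    return x

reactant, product = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
n_replicas, n_select = 16, 4
walkers = np.tile(reactant, (n_replicas, 1))

for cycle in range(100):
    walkers = short_run(walkers)
    dist = np.linalg.norm(walkers - product, axis=1)
    if dist.min() < 0.1:
        print(f"reached product after {cycle + 1} cycles")
        break
    # Select the replicas closest to the product and restart copies from them.
    best = walkers[np.argsort(dist)[:n_select]]
    walkers = best[rng.integers(0, n_select, n_replicas)]
```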

  1. Are X-rays the key to integrated computational materials engineering?

    DOE PAGES

    Ice, Gene E.

    2015-11-01

    The ultimate dream of materials science is to predict materials behavior from composition and processing history. Owing to the growing power of computers, this long-time dream has recently found expression through worldwide excitement in a number of computation-based thrusts: integrated computational materials engineering, materials by design, computational materials design, three-dimensional materials physics and mesoscale physics. However, real materials have important crystallographic structures at multiple length scales, which evolve during processing and in service. Moreover, real materials properties can depend on the extreme tails in their structural and chemical distributions. This makes it critical to map structural distributions with sufficient resolution to resolve small structures and with sufficient statistics to capture the tails of distributions. For two-dimensional materials, there are high-resolution nondestructive probes of surface and near-surface structures with atomic or near-atomic resolution that can provide detailed structural, chemical and functional distributions over important length scales. However, there are no nondestructive three-dimensional probes with atomic resolution over the multiple length scales needed to understand most materials.

  2. Factor Configurations with Governance as Conditions for Low HIV/AIDS Prevalence in HIV/AIDS Recipient Countries: Fuzzy-set Analysis

    PubMed Central

    Lee, Hwa-Young; Kang, Minah

    2015-01-01

    This paper aims to investigate whether good governance of a recipient country is a necessary condition, and what combinations of factors including the governance factor are sufficient, for low prevalence of HIV/AIDS in HIV/AIDS aid recipient countries during the period 2002-2010. For this, Fuzzy-set Qualitative Comparative Analysis (QCA) was used. Nine potential attributes for a causal configuration for low HIV/AIDS prevalence were identified through a review of previous studies. For each factor, full membership, full non-membership, and crossover point were specified using both the authors' knowledge and statistical information on the variables. Calibration and conversion to fuzzy-set scores were conducted using Fs/QCA 2.0, and probabilistic tests for necessity and sufficiency were performed with STATA 11. The result suggested that governance is a necessary condition for low prevalence of HIV/AIDS in a recipient country. The sufficiency test yielded two pathways. A low level of governance can lead to a low level of HIV/AIDS prevalence when it is combined with other favorable factors, especially low economic inequality, high economic development and high health expenditure. However, strengthening governance is a more practical measure for keeping HIV/AIDS prevalence low, because it is hard to achieve both economic development and economic equality. This study highlights that a comprehensive policy measure is the key to achieving low prevalence of HIV/AIDS in a recipient country. PMID:26617451
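
    Two ingredients named above can be sketched generically: direct-method calibration of a raw variable into fuzzy membership scores from three anchors, and the standard consistency measure for a necessity test (sum of min(X, Y) over sum of Y). The data and anchors below are made up and do not come from the study.

```python
import numpy as np

def calibrate(x, full_non, crossover, full_mem):
    """Direct-method fuzzy calibration from three anchors (log-odds of 3 ~ 0.95)."""
    x = np.asarray(x, dtype=float)
    scale_hi = 3.0 / (full_mem - crossover)
    scale_lo = 3.0 / (crossover - full_non)
    log_odds = np.where(x >= crossover,
                        (x - crossover) * scale_hi,
                        (x - crossover) * scale_lo)
    return 1.0 / (1.0 + np.exp(-log_odds))

def necessity_consistency(condition, outcome):
    """Consistency of 'condition is necessary for outcome': sum(min)/sum(outcome)."""
    return np.minimum(condition, outcome).sum() / outcome.sum()

# Made-up raw governance scores and outcome memberships for low HIV/AIDS prevalence.
governance_raw = np.array([-1.2, -0.5, 0.1, 0.4, 0.9, 1.3, -0.8, 0.7])
low_prevalence = np.array([0.1, 0.2, 0.6, 0.7, 0.9, 0.95, 0.15, 0.8])

governance = calibrate(governance_raw, full_non=-1.5, crossover=0.0, full_mem=1.5)
print("fuzzy governance scores:", governance.round(2))
print("necessity consistency  :", necessity_consistency(governance, low_prevalence).round(3))
```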

  3. Global ensemble texture representations are critical to rapid scene perception.

    PubMed

    Brady, Timothy F; Shafer-Skelton, Anna; Alvarez, George A

    2017-06-01

    Traditionally, recognizing the objects within a scene has been treated as a prerequisite to recognizing the scene itself. However, research now suggests that the ability to rapidly recognize visual scenes could be supported by global properties of the scene itself rather than the objects within the scene. Here, we argue for a particular instantiation of this view: That scenes are recognized by treating them as a global texture and processing the pattern of orientations and spatial frequencies across different areas of the scene without recognizing any objects. To test this model, we asked whether there is a link between how proficient individuals are at rapid scene perception and how proficiently they represent simple spatial patterns of orientation information (global ensemble texture). We find a significant and selective correlation between these tasks, suggesting a link between scene perception and spatial ensemble tasks but not nonspatial summary statistics. In a second and third experiment, we additionally show that global ensemble texture information is not only associated with scene recognition, but that preserving only global ensemble texture information from scenes is sufficient to support rapid scene perception; however, preserving the same information is not sufficient for object recognition. Thus, global ensemble texture alone is sufficient to allow activation of scene representations but not object representations. Together, these results provide evidence for a view of scene recognition based on global ensemble texture rather than a view based purely on objects or on nonspatially localized global properties. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. MRI of the small bowel: can sufficient bowel distension be achieved with small volumes of oral contrast?

    PubMed

    Kinner, Sonja; Kuehle, Christiane A; Herbig, Sebastian; Haag, Sebastian; Ladd, Susanne C; Barkhausen, Joerg; Lauenstein, Thomas C

    2008-11-01

    Sufficient luminal distension is mandatory for small bowel imaging. However, patients often are unable to ingest the volumes of currently applied oral contrast compounds. The aim of this study was to evaluate whether administration of low doses of an oral contrast agent with high osmolarity leads to sufficient and diagnostic bowel distension. Six healthy volunteers ingested, on different occasions, 150, 300 and 450 ml of a commercially available oral contrast agent (Banana Smoothie Readi-Cat, E-Z-EM; 194 mOsmol/l). Two-dimensional TrueFISP data sets were acquired in 5-min intervals up to 45 min after contrast ingestion. Small bowel distension was quantified using a visual five-grade ranking (5 = very good distension, 1 = collapsed bowel). Results were statistically compared using a Wilcoxon rank test. Ingestion of 450 ml and 300 ml resulted in significantly better distension than 150 ml. The overall average distension value for 450 ml amounted to 3.4 (300 ml: 3.0, 150 ml: 2.3) and diagnostic bowel distension could be found throughout the small intestine. Even 45 min after ingestion of 450 ml the jejunum and ileum could be reliably analyzed. Small bowel imaging with low doses of contrast leads to diagnostic distension values in healthy subjects when a high-osmolarity substance is applied. These findings may help to further refine small bowel MRI techniques, but need to be confirmed in patients with small bowel disorders.

  5. Factor Configurations with Governance as Conditions for Low HIV/AIDS Prevalence in HIV/AIDS Recipient Countries: Fuzzy-set Analysis.

    PubMed

    Lee, Hwa-Young; Yang, Bong-Min; Kang, Minah

    2015-11-01

    This paper aims to investigate whether good governance of a recipient country is a necessary condition, and what combinations of factors including the governance factor are sufficient, for low prevalence of HIV/AIDS in HIV/AIDS aid recipient countries during the period 2002-2010. For this, Fuzzy-set Qualitative Comparative Analysis (QCA) was used. Nine potential attributes for a causal configuration for low HIV/AIDS prevalence were identified through a review of previous studies. For each factor, full membership, full non-membership, and crossover point were specified using both the authors' knowledge and statistical information on the variables. Calibration and conversion to fuzzy-set scores were conducted using Fs/QCA 2.0, and probabilistic tests for necessity and sufficiency were performed with STATA 11. The result suggested that governance is a necessary condition for low prevalence of HIV/AIDS in a recipient country. The sufficiency test yielded two pathways. A low level of governance can lead to a low level of HIV/AIDS prevalence when it is combined with other favorable factors, especially low economic inequality, high economic development and high health expenditure. However, strengthening governance is a more practical measure for keeping HIV/AIDS prevalence low, because it is hard to achieve both economic development and economic equality. This study highlights that a comprehensive policy measure is the key to achieving low prevalence of HIV/AIDS in a recipient country.

  6. Quantum theory of multiscale coarse-graining.

    PubMed

    Han, Yining; Jin, Jaehyeok; Wagner, Jacob W; Voth, Gregory A

    2018-03-14

    Coarse-grained (CG) models serve as a powerful tool to simulate molecular systems at much longer temporal and spatial scales. Previously, CG models and methods have been built upon classical statistical mechanics. The present paper develops a theory and numerical methodology for coarse-graining in quantum statistical mechanics, by generalizing the multiscale coarse-graining (MS-CG) method to quantum Boltzmann statistics. A rigorous derivation of the sufficient thermodynamic consistency condition is first presented via imaginary time Feynman path integrals. It identifies the optimal choice of CG action functional and effective quantum CG (qCG) force field to generate a quantum MS-CG (qMS-CG) description of the equilibrium system that is consistent with the quantum fine-grained model projected onto the CG variables. A variational principle then provides a class of algorithms for optimally approximating the qMS-CG force fields. Specifically, a variational method based on force matching, which was also adopted in the classical MS-CG theory, is generalized to quantum Boltzmann statistics. The qMS-CG numerical algorithms and practical issues in implementing this variational minimization procedure are also discussed. Then, two numerical examples are presented to demonstrate the method. Finally, as an alternative strategy, a quasi-classical approximation for the thermal density matrix expressed in the CG variables is derived. This approach provides an interesting physical picture for coarse-graining in quantum Boltzmann statistical mechanics in which the consistency with the quantum particle delocalization is obviously manifest, and it opens up an avenue for using path integral centroid-based effective classical force fields in a coarse-graining methodology.

  7. Effectiveness of feature and classifier algorithms in character recognition systems

    NASA Astrophysics Data System (ADS)

    Wilson, Charles L.

    1993-04-01

    At the first Census Optical Character Recognition Systems Conference, NIST generated accuracy data for more than character recognition systems. Most systems were tested on the recognition of isolated digits and upper and lower case alphabetic characters. The recognition experiments were performed on sample sizes of 58,000 digits, and 12,000 upper and lower case alphabetic characters. The algorithms used by the 26 conference participants included rule-based methods, image-based methods, statistical methods, and neural networks. The neural network methods included Multi-Layer Perceptrons, Learning Vector Quantization, Neocognitrons, and cascaded neural networks. In this paper 11 different systems are compared using correlations between the answers of different systems, comparing the decrease in error rate as a function of confidence of recognition, and comparing the writer dependence of recognition. This comparison shows that methods that used different algorithms for feature extraction and recognition performed with very high levels of correlation. This is true for neural network systems, hybrid systems, and statistically based systems, and leads to the conclusion that neural networks have not yet demonstrated a clear superiority to more conventional statistical methods. Comparison of these results with the models of Vapnik (for estimation problems), MacKay (for Bayesian statistical models), Moody (for effective parameterization), and Boltzmann models (for information content) demonstrates that as the limits of training data variance are approached, all classifier systems have similar statistical properties. The limiting condition can only be approached for sufficiently rich feature sets because the accuracy limit is controlled by the available information content of the training set, which must pass through the feature extraction process prior to classification.

  8. What is too much variation? The null hypothesis in small-area analysis.

    PubMed Central

    Diehr, P; Cain, K; Connell, F; Volinn, E

    1990-01-01

    A small-area analysis (SAA) in health services research often calculates surgery rates for several small areas, compares the largest rate to the smallest, notes that the difference is large, and attempts to explain this discrepancy as a function of service availability, physician practice styles, or other factors. SAAs are often difficult to interpret because there is little theoretical basis for determining how much variation would be expected under the null hypothesis that all of the small areas have similar underlying surgery rates and that the observed variation is due to chance. We developed a computer program to simulate the distribution of several commonly used descriptive statistics under the null hypothesis, and used it to examine the variability in rates among the counties of the state of Washington. The expected variability when the null hypothesis is true is surprisingly large, and becomes worse for procedures with low incidence, for smaller populations, when there is variability among the populations of the counties, and when readmissions are possible. The characteristics of four descriptive statistics were studied and compared. None was uniformly good, but the chi-square statistic had better performance than the others. When we reanalyzed five journal articles that presented sufficient data, the results were usually statistically significant. Since SAA research today is tending to deal with low-incidence events, smaller populations, and measures where readmissions are possible, more research is needed on the distribution of small-area statistics under the null hypothesis. New standards are proposed for the presentation of SAA results. PMID:2312306
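
    The spirit of the simulation can be shown in a few lines: assume all counties share one underlying rate, simulate counts, and look at how extreme the largest-to-smallest rate ratio can be by chance, alongside the chi-square statistic. The populations and the rate are invented.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(10)

# Made-up county populations and a common underlying surgery rate (per person-year).
pops = np.array([5_000, 12_000, 30_000, 45_000, 80_000, 150_000, 400_000])
rate = 2.0e-3
n_sim = 10_000

ratios, chi2_stats = [], []
for _ in range(n_sim):
    counts = rng.poisson(rate * pops)
    rates = counts / pops
    ratios.append(rates.max() / max(rates.min(), 1e-12))
    expected = pops * counts.sum() / pops.sum()   # expected counts given the total
    chi2_stats.append(((counts - expected) ** 2 / expected).sum())

print("median largest/smallest rate ratio under the null:", np.median(ratios).round(2))
print("95th percentile of the ratio under the null     :", np.percentile(ratios, 95).round(2))
print("simulated 95th percentile of chi-square          :", np.percentile(chi2_stats, 95).round(2))
print("nominal chi-square critical value (df = 6)       :", chi2.ppf(0.95, df=6).round(2))
```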

  9. What is too much variation? The null hypothesis in small-area analysis.

    PubMed

    Diehr, P; Cain, K; Connell, F; Volinn, E

    1990-02-01

    A small-area analysis (SAA) in health services research often calculates surgery rates for several small areas, compares the largest rate to the smallest, notes that the difference is large, and attempts to explain this discrepancy as a function of service availability, physician practice styles, or other factors. SAAs are often difficult to interpret because there is little theoretical basis for determining how much variation would be expected under the null hypothesis that all of the small areas have similar underlying surgery rates and that the observed variation is due to chance. We developed a computer program to simulate the distribution of several commonly used descriptive statistics under the null hypothesis, and used it to examine the variability in rates among the counties of the state of Washington. The expected variability when the null hypothesis is true is surprisingly large, and becomes worse for procedures with low incidence, for smaller populations, when there is variability among the populations of the counties, and when readmissions are possible. The characteristics of four descriptive statistics were studied and compared. None was uniformly good, but the chi-square statistic had better performance than the others. When we reanalyzed five journal articles that presented sufficient data, the results were usually statistically significant. Since SAA research today is tending to deal with low-incidence events, smaller populations, and measures where readmissions are possible, more research is needed on the distribution of small-area statistics under the null hypothesis. New standards are proposed for the presentation of SAA results.

  10. Computational domain length and Reynolds number effects on large-scale coherent motions in turbulent pipe flow

    NASA Astrophysics Data System (ADS)

    Feldmann, Daniel; Bauer, Christian; Wagner, Claus

    2018-03-01

    We present results from direct numerical simulations (DNS) of turbulent pipe flow at shear Reynolds numbers up to Reτ = 1500 using different computational domains with lengths up to ?. The objectives are to analyse the effect of the finite size of the periodic pipe domain on large flow structures as a function of Reτ and to assess the minimum ? required for relevant turbulent scales to be captured and the minimum Reτ for very large-scale motions (VLSM) to be analysed. Analysing one-point statistics revealed that the mean velocity profile is invariant for ?. The wall-normal location at which deviations occur in shorter domains changes strongly with increasing Reτ from the near-wall region to the outer layer, where VLSM are believed to live. The root mean square velocity profiles exhibit domain length dependencies for pipes shorter than 14R or 7R, depending on Reτ. For all Reτ, the higher-order statistical moments show only weak dependencies, and only for the shortest domain considered here. However, the analysis of one- and two-dimensional pre-multiplied energy spectra revealed that even for larger ?, not all physically relevant scales are fully captured, even though the aforementioned statistics are in good agreement with the literature. We found ? to be sufficiently large to capture VLSM-relevant turbulent scales in the considered range of Reτ, based on our definition of an integral energy threshold of 10%. The requirement to capture at least 1/10 of the global maximum energy level is justified by a 14% increase of the streamwise turbulence intensity in the outer region between Reτ = 720 and 1500, which can be related to VLSM-relevant length scales. Based on this scaling anomaly, we found Reτ ⪆ 1500 to be a necessary minimum requirement to investigate VLSM-related effects in pipe flow, even though the streamwise energy spectra do not yet indicate sufficient scale separation between the most energetic and the very long motions.

  11. With directed study before a 4-day operating room management course, trust in the content did not change progressively during the classroom time.

    PubMed

    Dexter, Franklin; Epstein, Richard H; Fahy, Brenda G; Van Swol, Lyn M

    2017-11-01

    A 4-day course in operating room (OR) management is sufficient to provide anesthesiologists with the knowledge and problem solving skills needed to participate in projects of the systems-based-practice competency. Anesthesiologists may need to learn fewer topics when the objective is, instead, limited to comprehension of decision-making on the day of surgery. We tested the hypothesis that trust in course content would not increase further after completion of topics related to OR decision-making on the day of surgery. Panel survey. A 4-day, 35-hour course in OR management. Mandatory assignments before classes were: 1) review of statistics at a level slightly less than required of anesthesiology residents by the American Board of Anesthesiology; and 2) reading of peer-reviewed published articles while learning the scientific vocabulary. N=31 course participants who each attended 1 of 4 identical courses. At the end of each of the 4 days, course participants completed a 9-item scale assessing trust in the course content, namely, its quality, usefulness, and reliability. Cronbach alpha for the 1 to 7 trust scale was 0.94. The means±SD of scores were 5.86±0.80 after day #1, 5.81±0.76 after day #2, 5.80±0.77 after day #3, and 5.97±0.76 after day #4. Multiple methods of statistical analysis all found that there was no significant effect of the number of days of the course on trust in the content (all P≥0.30). Trust in the course content did not increase after the end of the 1st day. Therefore, statistics review, reading, and the 1st day of the course appear sufficient when the objective of teaching OR management is not that participants will learn how to make the decisions, but will comprehend them and trust in the information underlying knowledgeable decision-making. Copyright © 2017 Elsevier Inc. All rights reserved.
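
    For reference, a Cronbach alpha such as the reported 0.94 is computed from the item variances and the variance of the summed scale. The sketch below uses made-up responses for 31 participants on a 9-item, 1-7 scale driven by a common factor, which yields a comparably high alpha (about 0.9).

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(11)
# Made-up responses: 31 participants, 9 items on a 1-7 scale, common factor plus noise.
latent = rng.normal(4.5, 0.8, size=(31, 1))
responses = np.clip(np.rint(latent + rng.normal(0, 0.6, size=(31, 9))), 1, 7)
print(f"Cronbach alpha = {cronbach_alpha(responses):.2f}")
```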

  12. Intensity correlation-based calibration of FRET.

    PubMed

    Bene, László; Ungvári, Tamás; Fedor, Roland; Sasi Szabó, László; Damjanovich, László

    2013-11-05

    Dual-laser flow cytometric resonance energy transfer (FCET) is a statistically efficient and accurate way of determining proximity relationships for molecules of cells, even under living conditions. In the framework of this algorithm, absolute fluorescence resonance energy transfer (FRET) efficiency is determined by the simultaneous measurement of donor-quenching and sensitized emission. A crucial point is the determination of the scaling factor α responsible for balancing the different sensitivities of the donor and acceptor signal channels. The determination of α is not simple, requiring the preparation of special samples that are generally different from a double-labeled FRET sample, or the use of sophisticated statistical estimation (least-squares) procedures. We present an alternative approach, free from spectral constants, for the determination of α and the absolute FRET efficiency, by extending the presented FCET algorithm with an analysis of the second moments (variances and covariances) of the detected intensity distributions. A quadratic equation for α is formulated in terms of the intensity fluctuations, which proves sufficiently robust to give accurate α-values on a cell-by-cell basis under a wide range of conditions, using the same double-labeled sample from which the FRET efficiency itself is determined. This new approach is illustrated by FRET measurements between epitopes of the MHCI receptor on the cell surface of two cell lines, FT and LS174T. The figures show that whereas the common way of determining α fails at large dye-per-protein labeling ratios of mAbs, the new approach gives accurate results. Although introduced in a flow cytometer, the new approach can also be used straightforwardly with fluorescence microscopes. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  13. Spectral performance of Square Kilometre Array Antennas - II. Calibration performance

    NASA Astrophysics Data System (ADS)

    Trott, Cathryn M.; de Lera Acedo, Eloy; Wayth, Randall B.; Fagnoni, Nicolas; Sutinjo, Adrian T.; Wakley, Brett; Punzalan, Chris Ivan B.

    2017-09-01

    We test the bandpass smoothness performance of two prototype Square Kilometre Array (SKA) SKA1-Low log-periodic dipole antennas, SKALA2 and SKALA3 ('SKA Log-periodic Antenna'), and the current dipole from the Murchison Widefield Array (MWA) precursor telescope. Throughout this paper, we refer to the complex-valued output voltage response of an antenna connected to a low-noise amplifier as the dipole bandpass. In Paper I, the bandpass spectral response of the log-periodic antenna being developed for the SKA1-Low was estimated using numerical electromagnetic simulations, analysed using low-order polynomial fittings, and compared with the HERA antenna against the delay spectrum metric. In this work, realistic simulations of the SKA1-Low instrument, including frequency-dependent primary beam shapes and array configuration, are used with a weighted least-squares polynomial estimator to assess the ability of a given prototype antenna to perform the SKA Epoch of Reionisation (EoR) statistical experiments. This work complements the ideal estimator tolerances computed for the proposed EoR science experiments in Trott & Wayth with the realized performance of an optimal and standard estimation (calibration) procedure. With a sufficient sky calibration model at higher frequencies, all antennas have bandpasses that are sufficiently smooth to meet the tolerances described in Trott & Wayth to perform the EoR statistical experiments, and these are primarily limited by the adequacy of the sky calibration model and the thermal noise level in the calibration data. At frequencies of the Cosmic Dawn, which is of principal interest to SKA as one of the first next-generation telescopes capable of accessing higher redshifts, the MWA dipole and SKALA3 antenna have adequate performance, while the SKALA2 design will impede the ability to explore this era.
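
    The estimator named above is, at its core, a weighted least-squares polynomial fit across frequency. The sketch below fits a low-order polynomial to a smooth toy bandpass with frequency-dependent noise using inverse-standard-deviation weights and checks the reduced chi-square; it is not the SKA calibration pipeline.

```python
import numpy as np

rng = np.random.default_rng(12)

# Toy "bandpass": smooth gain versus frequency, plus thermal noise whose
# standard deviation varies across the band.
freq = np.linspace(50.0, 100.0, 200)                 # MHz
x = (freq - freq.mean()) / (np.ptp(freq) / 2)        # scaled to [-1, 1] for conditioning
true_gain = 1.0 + 0.05 * x - 0.02 * x**2 + 0.01 * x**3
noise_sigma = 0.005 * (1.0 + 2.0 * (freq < 60))      # noisier at the low end
gain = true_gain + rng.normal(0.0, noise_sigma)

# Weighted least-squares polynomial fit (np.polyfit expects weights ~ 1/sigma).
coeffs = np.polyfit(x, gain, deg=3, w=1.0 / noise_sigma)
model = np.polyval(coeffs, x)

resid = (gain - model) / noise_sigma
print("reduced chi-square of the fit:", np.mean(resid**2).round(2))
```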

  14. Intraosseous anesthesia with solution injection controlled by a computerized system versus conventional oral anesthesia: a preliminary study.

    PubMed

    Beneito-Brotons, Rut; Peñarrocha-Oltra, David; Ata-Ali, Javier; Peñarrocha, María

    2012-05-01

    To compare a computerized intraosseous anesthesia system with conventional oral anesthesia techniques, and to analyze the latency and duration of the anesthetic effect and patient preference. A single-blind prospective study was made between March 2007 and May 2008. Each patient was subjected to two anesthetic techniques: conventional and intraosseous using the Quicksleeper® system (DHT, Cholet, France). A split-mouth design was adopted in which each patient underwent treatment of a tooth with one of the techniques, and treatment of the homologous contralateral tooth with the other technique. The treatments consisted of restorations, endodontic procedures and simple extractions. The study series comprised 12 females and 18 males with a mean age of 36.8 years. The 30 subjects underwent a total of 60 anesthetic procedures. Intraosseous and conventional oral anesthesia caused discomfort during administration in 46.3% and 32.1% of the patients, respectively. The latency was 7.1±2.23 minutes for the conventional technique and 0.48±0.32 minutes for intraosseous anesthesia--the difference being statistically significant. The depth of the anesthetic effect was sufficient to allow the patients to tolerate the dental treatments. The duration of the anesthetic effect in soft tissues was 199.3 minutes with the conventional technique versus only 1.6 minutes with intraosseous anesthesia--the difference between the two techniques being statistically significant. Most of the patients (69.7%) preferred intraosseous anesthesia. The described intraosseous anesthesia system is effective, with a much shorter latency than the conventional technique, a duration of anesthesia sufficient to perform the required dental treatments, and a much smaller soft tissue anesthetic effect. Most of the patients preferred intraosseous anesthesia.

  15. DESCARTES' RULE OF SIGNS AND THE IDENTIFIABILITY OF POPULATION DEMOGRAPHIC MODELS FROM GENOMIC VARIATION DATA.

    PubMed

    Bhaskar, Anand; Song, Yun S

    2014-01-01

    The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the "folded" SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes' rule of signs for polynomials to the Laplace transform of piecewise continuous functions.

  16. DESCARTES’ RULE OF SIGNS AND THE IDENTIFIABILITY OF POPULATION DEMOGRAPHIC MODELS FROM GENOMIC VARIATION DATA1

    PubMed Central

    Bhaskar, Anand; Song, Yun S.

    2016-01-01

    The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the “folded” SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes’ rule of signs for polynomials to the Laplace transform of piecewise continuous functions. PMID:28018011

  17. Direct and indirect comparison meta-analysis of levetiracetam versus phenytoin or valproate for convulsive status epilepticus.

    PubMed

    Brigo, Francesco; Bragazzi, Nicola; Nardone, Raffaele; Trinka, Eugen

    2016-11-01

    The aim of this study was to conduct a meta-analysis of published studies to directly compare intravenous (IV) levetiracetam (LEV) with IV phenytoin (PHT) or IV valproate (VPA) as second-line treatment of status epilepticus (SE), to indirectly compare IV LEV with IV VPA using common reference-based indirect comparison meta-analysis, and to verify whether results of indirect comparisons are consistent with results of head-to-head randomized controlled trials (RCTs) directly comparing IV LEV with IV VPA. Random-effects Mantel-Haenszel meta-analyses were used to obtain odds ratios (ORs) for the efficacy and safety of LEV versus VPA and of LEV or VPA versus PHT. Adjusted indirect comparisons between LEV and VPA were then performed. Two RCTs comparing LEV with PHT (144 episodes of SE) and 3 RCTs comparing VPA with PHT (227 episodes of SE) were included. Direct comparisons showed no difference in clinical seizure cessation, either between VPA and PHT (OR: 1.07; 95% CI: 0.57 to 2.03) or between LEV and PHT (OR: 1.18; 95% CI: 0.50 to 2.79). Indirect comparisons showed no difference between LEV and VPA for clinical seizure cessation (OR: 1.16; 95% CI: 0.45 to 2.97). Results of indirect comparisons are consistent with results of a recent RCT directly comparing LEV with VPA. The absence of a statistically significant difference in direct and indirect comparisons is due to the lack of sufficient statistical power to detect a difference. Conducting an RCT that does not include enough participants to detect a clinically important difference, or to estimate an effect with sufficient precision, can be regarded as a waste of time and resources and may raise several ethical concerns, especially in RCTs on SE. Copyright © 2016 Elsevier Inc. All rights reserved.
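
    For readers unfamiliar with common reference-based (Bucher-style) adjusted indirect comparison, the sketch below shows the arithmetic on the log odds-ratio scale: subtract the two log ORs against the common comparator and add their variances. The numbers are placeholders, not the trial data above, and the function name is illustrative.

      # Bucher-style adjusted indirect comparison of A vs B via a common comparator C.
      # Inputs: OR of A vs C, SE of its log OR, OR of B vs C, SE of its log OR.
      import math

      def indirect_comparison(or_a_vs_c, se_log_a, or_b_vs_c, se_log_b, z=1.96):
          log_or = math.log(or_a_vs_c) - math.log(or_b_vs_c)
          se = math.sqrt(se_log_a**2 + se_log_b**2)          # variances add
          lo, hi = log_or - z * se, log_or + z * se
          return math.exp(log_or), (math.exp(lo), math.exp(hi))

      # Placeholder values for illustration only.
      or_ab, ci = indirect_comparison(1.20, 0.40, 1.05, 0.35)
      print(f"indirect OR A vs B: {or_ab:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")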

  18. Prefrontal left--dominant hemisphere--gamma and delta oscillators in general anaesthesia with volatile anaesthetics during open thoracic surgery.

    PubMed

    Saniova, Beata; Drobny, Michal; Drobna, Eva; Hamzik, Julian; Bakosova, Erika; Fischer, Martin

    2016-01-01

    The main objective was to determine whether general anaesthesia (GA) provides sufficient inhibition to prevent negative experiences during GA. We investigated a group of patients (n = 17, mean age 63.59 years; 9 males, mean age 65.78 years; 8 females, mean age 61.13 years) during GA for open thorax surgery and analyzed the EEG signal by power spectrum (pEEG) of the delta (DR) and gamma (GR) rhythms. EEG was performed at OP0, the day before surgery, and in the surgery phases OP1-OP5 during GA. The particular GA phases were: OP1 = after premedication, OP2 = surgery onset, OP3 = surgery with one-side lung ventilation, OP4 = end of surgery with both sides ventilated, OP5 = end of GA. pEEG was recorded in the left frontal region with an Fp1-A1 montage in the 17 right-handed persons. Mean DR power in phase OP2 was significantly higher than in phase OP5, and mean DR power in OP3 was higher than in OP5. One-lung ventilation did not change the minimal alveolar concentration, and the anaesthetic gases should therefore not have accelerated the decrease in mean DR power. The higher mean GR power in OP0 than in OP3 was statistically significant. Mean GR power in OP3 was statistically significantly lower than in OP4, with the same gas concentrations in OP3 and OP4. Our results showed that DR power decreased from OP2 until the end of GA, which indicates that the inhibition reflected by this steadily decreasing DR power is sufficient for GA depth. The decay of GR power, which is related to working memory, could reduce conscious cognition and unpleasant explicit experience during GA.

  19. Assessment of left ventricular function in healthy Great Danes and in Great Danes with dilated cardiomyopathy using speckle tracking echocardiography.

    PubMed

    Pedro, B; Stephenson, H; Linney, C; Cripps, P; Dukes-McEwan, J

    2017-08-01

    Assess global circumferential and radial systolic and diastolic myocardial function with speckle tracking echocardiography (STE) in healthy Great Danes (GD) and in GD diagnosed with dilated cardiomyopathy (DCM). Eighty-nine GD were included in the study: 39 healthy (normal group [NORMg]) and 50 diagnosed with DCM (DCMg). This was a retrospective study. Signalment and echocardiographic diagnosis were obtained from the medical records of GD assessed between 2008 and 2012. Speckle tracking echocardiography analysis of circumferential (C) and radial (R) strain (St) and strain rate (SR) in systole (S), early (E) and late (A) diastole was performed at the levels of the mitral valve (MV), papillary muscles (PM) and apex (Ap) of the left ventricle. Univariable and multivariable analysis was performed to identify differences between groups. Speckle tracking echocardiography variables increase from the MV towards the Ap of the left ventricle in both NORMg and DCMg dogs, some reaching statistical significance. Most of the variables (28/31) were lower in DCMg than in NORMg dogs: statistically significant variables included radial SR at the Ap in systole (p=0.029), radial strain at the PM (p=0.012), circumferential SR at the PM in systole (p=0.031), circumferential and radial SR at the MV in early diastole (p=0.019 and p=0.049, respectively). There are significant differences in STE variables between NORMg and DCMg Great Danes, although the overlap between the two groups may indicate that these variables are not sufficiently discriminatory. STE variables are not sufficiently sensitive to use in isolation as a screening method. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. From plastic to gold: a unified classification scheme for reference standards in medical image processing

    NASA Astrophysics Data System (ADS)

    Lehmann, Thomas M.

    2002-05-01

    Reliable evaluation of medical image processing is of major importance for routine applications. Nonetheless, evaluation is often omitted or methodically defective when novel approaches or algorithms are introduced. Adopting terminology from medical diagnosis, we define the following criteria to classify reference standards: 1. Reliance, if the generation or capturing of test images for evaluation follows an exactly determined and reproducible protocol. 2. Equivalence, if the image material or relationships considered within an algorithmic reference standard equal real-life data with respect to structure, noise, or other parameters of importance. 3. Independence, if any reference standard relies on a different procedure than that to be evaluated, or on other images or image modalities than that used routinely. This criterion bans the simultaneous use of one image for both the training and the test phase. 4. Relevance, if the algorithm to be evaluated is self-reproducible. If random parameters or optimization strategies are applied, reliability of the algorithm must be shown before the reference standard is applied for evaluation. 5. Significance, if the number of reference standard images that are used for evaluation is sufficiently large to enable statistically founded analysis. We demand that a true gold standard satisfy Criteria 1 to 3. Any standard only satisfying two criteria, i.e., Criterion 1 and Criterion 2 or Criterion 1 and Criterion 3, is referred to as a silver standard. All other standards are termed plastic. Before exhaustive evaluation based on gold or silver standards is performed, its relevance must be shown (Criterion 4) and sufficient tests must be carried out to support a statistically founded analysis (Criterion 5). In this paper, examples are given for each class of reference standards.
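
    The gold/silver/plastic decision rule stated above is simple enough to encode directly; the sketch below does exactly that (Criteria 1-3 decide the class, Criteria 4-5 gate whether evaluation should proceed). Function and argument names are illustrative, not from the paper.

      # Encode the stated rule: gold = Criteria 1-3 all met; silver = Criterion 1 plus
      # one of Criteria 2 or 3; everything else is plastic.
      def classify_reference_standard(reliance, equivalence, independence):
          if reliance and equivalence and independence:
              return "gold"
          if reliance and (equivalence or independence):
              return "silver"
          return "plastic"

      print(classify_reference_standard(True, True, True))    # gold
      print(classify_reference_standard(True, False, True))   # silver
      print(classify_reference_standard(False, True, True))   # plastic (Criterion 1 not met)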

  1. Intraosseous anesthesia with solution injection controlled by a computerized system versus conventional oral anesthesia: A preliminary study

    PubMed Central

    Beneito-Brotons, Rut; Peñarrocha-Oltra, David; Ata-Ali, Javier

    2012-01-01

    Objective: To compare a computerized intraosseous anesthesia system with the conventional oral anesthesia techniques, and analyze the latency and duration of the anesthetic effect and patient preference. Design: A single-blind prospective study was conducted between March 2007 and May 2008. Each patient was subjected to two anesthetic techniques: conventional and intraosseous using the Quicksleeper® system (DHT, Cholet, France). A split-mouth design was adopted in which each patient underwent treatment of a tooth with one of the techniques, and treatment of the homologous contralateral tooth with the other technique. The treatments consisted of restorations, endodontic procedures and simple extractions. Results: The study series comprised 12 females and 18 males with a mean age of 36.8 years. The 30 subjects underwent a total of 60 anesthetic procedures. Intraosseous and conventional oral anesthesia caused discomfort during administration in 46.3% and 32.1% of the patients, respectively. The latency was 7.1±2.23 minutes for the conventional technique and 0.48±0.32 minutes for intraosseous anesthesia – the difference being statistically significant. The depth of the anesthetic effect was sufficient to allow the patients to tolerate the dental treatments. The duration of the anesthetic effect in soft tissues was 199.3 minutes with the conventional technique versus only 1.6 minutes with intraosseous anesthesia – the difference between the two techniques being statistically significant. Most of the patients (69.7%) preferred intraosseous anesthesia. Conclusions: The described intraosseous anesthetic system is effective, with a much shorter latency than the conventional technique, sufficient duration of anesthesia to perform the required dental treatments, and a much shorter-lasting soft tissue anesthetic effect. Most of the patients preferred intraosseous anesthesia. Key words: Anesthesia, intraosseous, oral anesthesia, infiltrating, mandibular block, Quicksleeper®. PMID:22143722

  2. Modeling Traffic on the Web Graph

    NASA Astrophysics Data System (ADS)

    Meiss, Mark R.; Gonçalves, Bruno; Ramasco, José J.; Flammini, Alessandro; Menczer, Filippo

    Analysis of aggregate and individual Web requests shows that PageRank is a poor predictor of traffic. We use empirical data to characterize properties of Web traffic not reproduced by Markovian models, including both aggregate statistics such as page and link traffic, and individual statistics such as entropy and session size. As no current model reconciles all of these observations, we present an agent-based model that explains them through realistic browsing behaviors: (1) revisiting bookmarked pages; (2) backtracking; and (3) seeking out novel pages of topical interest. The resulting model can reproduce the behaviors we observe in empirical data, especially heterogeneous session lengths, reconciling the narrowly focused browsing patterns of individual users with the extreme variance in aggregate traffic measurements. We can thereby identify a few salient features that are necessary and sufficient to interpret Web traffic data. Beyond the descriptive and explanatory power of our model, these results may lead to improvements in Web applications such as search and crawling.
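
    A toy agent combining the three behaviours named above (bookmark revisits, backtracking, and jumps to novel pages) can be sketched in a few lines. The graph, probabilities, and the bookmark rule below are illustrative assumptions and do not reproduce the authors' parameterization or calibration.

      # Hypothetical random-surfer agent with bookmarking, backtracking, and novelty
      # seeking; otherwise it follows out-links uniformly at random.
      import random

      def browse(graph, start, steps=1000, p_bookmark=0.1, p_back=0.2, p_novel=0.05):
          page, history, bookmarks, visits = start, [start], {start}, {start: 1}
          for _ in range(steps):
              r = random.random()
              if r < p_bookmark and bookmarks:
                  page = random.choice(sorted(bookmarks))            # revisit a bookmark
              elif r < p_bookmark + p_back and len(history) > 1:
                  history.pop(); page = history[-1]                  # backtrack
              elif r < p_bookmark + p_back + p_novel:
                  page = random.choice(list(graph))                  # seek a novel page
              else:
                  page = random.choice(graph[page])                  # follow an out-link
              if page != history[-1]:
                  history.append(page)
              visits[page] = visits.get(page, 0) + 1
              if visits[page] > 3:
                  bookmarks.add(page)                                # bookmark frequent pages
          return visits

      toy_graph = {0: [1, 2], 1: [2], 2: [0, 1]}
      print(browse(toy_graph, start=0, steps=200))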

  3. Inference in the brain: Statistics flowing in redundant population codes

    PubMed Central

    Pitkow, Xaq; Angelaki, Dora E

    2017-01-01

    It is widely believed that the brain performs approximate probabilistic inference to estimate causal variables in the world from ambiguous sensory data. To understand these computations, we need to analyze how information is represented and transformed by the actions of nonlinear recurrent neural networks. We propose that these probabilistic computations function by a message-passing algorithm operating at the level of redundant neural populations. To explain this framework, we review its underlying concepts, including graphical models, sufficient statistics, and message-passing, and then describe how these concepts could be implemented by recurrently connected probabilistic population codes. The relevant information flow in these networks will be most interpretable at the population level, particularly for redundant neural codes. We therefore outline a general approach to identify the essential features of a neural message-passing algorithm. Finally, we argue that to reveal the most important aspects of these neural computations, we must study large-scale activity patterns during moderately complex, naturalistic behaviors. PMID:28595050

  4. Alluvial substrate mapping by automated texture segmentation of recreational-grade side scan sonar imagery.

    PubMed

    Hamill, Daniel; Buscombe, Daniel; Wheaton, Joseph M

    2018-01-01

    Side scan sonar in low-cost 'fishfinder' systems has become popular in aquatic ecology and sedimentology for imaging submerged riverbed sediment at coverages and resolutions sufficient to relate bed texture to grain-size. Traditional methods to map bed texture (i.e. physical samples) are relatively high-cost and provide low spatial coverage compared to sonar, which can continuously image several kilometers of channel in a few hours. Towards a goal of automating the classification of bed habitat features, we investigate relationships between substrates and statistical descriptors of bed textures in side scan sonar echograms of alluvial deposits. We develop a method for automated segmentation of bed textures into between two and five grain-size classes. Second-order texture statistics are used in conjunction with a Gaussian Mixture Model to classify the heterogeneous bed into small homogeneous patches of sand, gravel, and boulders with an average accuracy of 80%, 49%, and 61%, respectively. Reach-averaged proportions of these sediment types were within 3% compared to similar maps derived from multibeam sonar.
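
    The general recipe (second-order texture statistics per image patch, clustered by a Gaussian mixture) can be sketched with standard libraries. This is a hypothetical illustration, not the authors' pipeline: the patch size, grey-level quantization, chosen GLCM properties, number of classes, and the random stand-in for an echogram are all assumptions.

      # Patch-wise grey-level co-occurrence (GLCM) features + Gaussian mixture clustering.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.mixture import GaussianMixture

      def patch_features(img, patch=32, levels=32):
          """Return GLCM texture features (contrast, homogeneity, correlation) per patch."""
          img = (img / img.max() * (levels - 1)).astype(np.uint8)
          feats, coords = [], []
          for i in range(0, img.shape[0] - patch + 1, patch):
              for j in range(0, img.shape[1] - patch + 1, patch):
                  glcm = graycomatrix(img[i:i + patch, j:j + patch],
                                      distances=[1], angles=[0, np.pi / 2],
                                      levels=levels, symmetric=True, normed=True)
                  feats.append([graycoprops(glcm, "contrast").mean(),
                                graycoprops(glcm, "homogeneity").mean(),
                                graycoprops(glcm, "correlation").mean()])
                  coords.append((i, j))
          return np.array(feats), coords

      rng = np.random.default_rng(0)
      echogram = rng.integers(0, 255, size=(256, 256)).astype(float)  # stand-in for a sonar echogram
      X, coords = patch_features(echogram)
      labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)
      print(dict(zip(coords[:5], labels[:5])))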

  5. The bias of the log power spectrum for discrete surveys

    NASA Astrophysics Data System (ADS)

    Repp, Andrew; Szapudi, István

    2018-03-01

    A primary goal of galaxy surveys is to tighten constraints on cosmological parameters, and the power spectrum P(k) is the standard means of doing so. However, at translinear scales P(k) is blind to much of these surveys' information - information which the log density power spectrum recovers. For discrete fields (such as the galaxy density), A* denotes the statistic analogous to the log density: A* is a `sufficient statistic' in that its power spectrum (and mean) capture virtually all of a discrete survey's information. However, the power spectrum of A* is biased with respect to the corresponding log spectrum for continuous fields, and to use P_{A^*}(k) to constrain the values of cosmological parameters, we require some means of predicting this bias. Here, we present a prescription for doing so; for Euclid-like surveys (with cubical cells 16h-1 Mpc across) our bias prescription's error is less than 3 per cent. This prediction will facilitate optimal utilization of the information in future galaxy surveys.
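
    The paper's bias prescription for the discrete-field statistic A* is not reproduced here, but the basic object it concerns can be illustrated: the power spectrum of an overdensity field δ versus that of its log transform A = ln(1 + δ) on a periodic grid. The toy lognormal field, grid size, and binning below are assumptions for illustration only.

      # Compare P(k) of delta with P(k) of A = ln(1 + delta) for a toy lognormal field.
      import numpy as np

      def power_spectrum(field, box_size, n_bins=20):
          n = field.shape[0]
          fk = np.fft.rfftn(field) / field.size
          pk = (np.abs(fk) ** 2) * box_size ** 3
          kx = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
          kz = 2 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
          kmag = np.sqrt(kx[:, None, None] ** 2 + kx[None, :, None] ** 2 + kz[None, None, :] ** 2)
          bins = np.linspace(kmag[kmag > 0].min(), kmag.max(), n_bins)
          idx = np.digitize(kmag.ravel(), bins)
          pk_flat = pk.ravel()
          return np.array([pk_flat[idx == b].mean() if np.any(idx == b) else np.nan
                           for b in range(1, n_bins)])

      rng = np.random.default_rng(1)
      n, box = 32, 512.0                       # 32^3 grid in a toy periodic box
      g = rng.normal(0.0, 0.5, size=(n, n, n))
      delta = np.exp(g - g.var() / 2) - 1.0    # lognormal overdensity with mean ~ 0
      A = np.log(1.0 + delta)
      print(power_spectrum(delta, box)[:3])    # low-k bins of P_delta(k)
      print(power_spectrum(A, box)[:3])        # low-k bins of P_A(k)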

  6. Upside/Downside statistical mechanics of nonequilibrium Brownian motion. I. Distributions, moments, and correlation functions of a free particle.

    PubMed

    Craven, Galen T; Nitzan, Abraham

    2018-01-28

    Statistical properties of Brownian motion that arise by analyzing, separately, trajectories over which the system energy increases (upside) or decreases (downside) with respect to a threshold energy level are derived. This selective analysis is applied to examine transport properties of a nonequilibrium Brownian process that is coupled to multiple thermal sources characterized by different temperatures. Distributions, moments, and correlation functions of a free particle that occur during upside and downside events are investigated for energy activation and energy relaxation processes and also for positive and negative energy fluctuations from the average energy. The presented results are sufficiently general and can be applied without modification to the standard Brownian motion. This article focuses on the mathematical basis of this selective analysis. In subsequent articles in this series, we apply this general formalism to processes in which heat transfer between thermal reservoirs is mediated by activated rate processes that take place in a system bridging them.
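
    The idea of "selective" statistics can be illustrated with a toy simulation: evolve the velocity of a free Brownian particle as an Ornstein-Uhlenbeck (Langevin) process and accumulate statistics separately over steps in which the kinetic energy increases (upside) or decreases (downside). This is a simplified sketch of the concept only; the paper's formalism conditions on a threshold energy level and treats coupling to multiple thermal baths, which is not reproduced here.

      # Free-particle Langevin velocity process; separate upside/downside energy statistics.
      import numpy as np

      rng = np.random.default_rng(2)
      gamma, kT, m, dt, steps = 1.0, 1.0, 1.0, 1e-3, 100_000

      v = np.empty(steps)
      v[0] = 0.0
      xi = rng.normal(size=steps - 1)
      for i in range(steps - 1):
          v[i + 1] = v[i] - gamma * v[i] * dt + np.sqrt(2.0 * gamma * kT * dt / m) * xi[i]

      energy = 0.5 * m * v**2
      d_e = np.diff(energy)
      up, down = d_e > 0, d_e < 0
      print("fraction of upside steps:      %.3f" % up.mean())
      print("mean energy change (upside):   %+.2e" % d_e[up].mean())
      print("mean energy change (downside): %+.2e" % d_e[down].mean())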

  7. The Dysexecutive Questionnaire advanced: item and test score characteristics, 4-factor solution, and severity classification.

    PubMed

    Bodenburg, Sebastian; Dopslaff, Nina

    2008-01-01

    The Dysexecutive Questionnaire (DEX, , Behavioral assessment of the dysexecutive syndrome, 1996) is a standardized instrument to measure possible behavioral changes as a result of the dysexecutive syndrome. Although initially intended only as a qualitative instrument, the DEX has also been used increasingly to address quantitative problems. Until now there have not been more fundamental statistical analyses of the questionnaire's testing quality. The present study is based on an unselected sample of 191 patients with acquired brain injury and reports on the data relating to the quality of the items, the reliability and the factorial structure of the DEX. Item 3 displayed too great an item difficulty, whereas item 11 was not sufficiently discriminating. The DEX's reliability in self-rating is r = 0.85. In addition to presenting the statistical values of the tests, a clinical severity classification of the overall scores of the 4 found factors and of the questionnaire as a whole is carried out on the basis of quartile standards.

  8. Fermi-Dirac statistics and traffic in complex networks.

    PubMed

    de Moura, Alessandro P S

    2005-06-01

    We propose an idealized model for traffic in a network, in which many particles move randomly from node to node, following the network's links, and it is assumed that at most one particle can occupy any given node. This is intended to mimic the finite forwarding capacity of nodes in communication networks, thereby allowing the possibility of congestion and jamming phenomena. We show that the particles behave like free fermions, with appropriately defined energy-level structure and temperature. The statistical properties of this system are thus given by the corresponding Fermi-Dirac distribution. We use this to obtain analytical expressions for dynamical quantities of interest, such as the mean occupation of each node and the transport efficiency, for different network topologies and particle densities. We show that the subnetwork of free nodes always fragments into small isolated clusters for a sufficiently large number of particles, implying a communication breakdown at some density for all network topologies. These results are compared to direct simulations.
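
    For reference, the Fermi-Dirac occupation law the abstract invokes takes the standard form below; how the effective energy levels and temperature map onto node properties and particle density is specified in the paper and is not reproduced here. With the chemical potential fixed by the total particle number N:

      \langle n_i \rangle = \frac{1}{e^{(\varepsilon_i - \mu)/T} + 1},
      \qquad \sum_i \langle n_i \rangle = N,

    where \varepsilon_i is the effective energy of node i, T is the effective temperature, and \mu is the chemical potential.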

  9. Upside/Downside statistical mechanics of nonequilibrium Brownian motion. I. Distributions, moments, and correlation functions of a free particle

    NASA Astrophysics Data System (ADS)

    Craven, Galen T.; Nitzan, Abraham

    2018-01-01

    Statistical properties of Brownian motion that arise by analyzing, separately, trajectories over which the system energy increases (upside) or decreases (downside) with respect to a threshold energy level are derived. This selective analysis is applied to examine transport properties of a nonequilibrium Brownian process that is coupled to multiple thermal sources characterized by different temperatures. Distributions, moments, and correlation functions of a free particle that occur during upside and downside events are investigated for energy activation and energy relaxation processes and also for positive and negative energy fluctuations from the average energy. The presented results are sufficiently general and can be applied without modification to the standard Brownian motion. This article focuses on the mathematical basis of this selective analysis. In subsequent articles in this series, we apply this general formalism to processes in which heat transfer between thermal reservoirs is mediated by activated rate processes that take place in a system bridging them.

  10. The Effect of Clothing on the Rate of Decomposition and Diptera Colonization on Sus scrofa Carcasses.

    PubMed

    Card, Allison; Cross, Peter; Moffatt, Colin; Simmons, Tal

    2015-07-01

    Twenty Sus scrofa carcasses were used to study the effect the presence of clothing had on decomposition rate and colonization locations of Diptera species; 10 unclothed control carcasses were compared to 10 clothed experimental carcasses over 58 days. Data collection occurred at regular accumulated degree day intervals; the level of decomposition as Total Body Score (TBSsurf ), pattern of decomposition, and Diptera present was documented. Results indicated a statistically significant difference in the rate of decomposition, (t427  = 2.59, p = 0.010), with unclothed carcasses decomposing faster than clothed carcasses. However, the overall decomposition rates from each carcass group are too similar to separate when applying a 95% CI, which means that, although statistically significant, from a practical forensic point of view they are not sufficiently dissimilar as to warrant the application of different formulae to estimate the postmortem interval. Further results demonstrated clothing provided blow flies with additional colonization locations. © 2015 American Academy of Forensic Sciences.

  11. Wang-Landau Reaction Ensemble Method: Simulation of Weak Polyelectrolytes and General Acid-Base Reactions.

    PubMed

    Landsgesell, Jonas; Holm, Christian; Smiatek, Jens

    2017-02-14

    We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while the accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides a sufficient statistical accuracy such that meaningful estimates for the density of states and the partition sum can be obtained. With regard to these estimates, several thermodynamic observables like the heat capacity or reaction free energies can be calculated. We demonstrate that the computation times for the calculation of titration curves with a high statistical accuracy can be significantly decreased when compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
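
    The flat-histogram ingredient of the method can be illustrated on a toy system; the sketch below runs generic Wang-Landau sampling on a 1D Ising ring to estimate ln g(E). The coupling of Wang-Landau sampling to reaction-ensemble protonation/deprotonation moves is the paper's contribution and is not reproduced here; the system, sweep lengths, and flatness criterion are illustrative choices.

      # Generic Wang-Landau sketch: estimate ln g(E) for a 16-spin Ising ring.
      import numpy as np

      rng = np.random.default_rng(3)
      L = 16
      spins = rng.choice([-1, 1], size=L)

      def energy(s):
          return -int(np.sum(s * np.roll(s, 1)))          # J = 1, periodic chain

      levels = np.arange(-L, L + 1, 4)                    # attainable ring energies
      ln_g = {int(e): 0.0 for e in levels}
      hist = {int(e): 0 for e in levels}
      ln_f, e_cur = 1.0, energy(spins)

      for _ in range(60):                                 # bounded number of refinement stages
          for _ in range(20_000):
              k = int(rng.integers(L))
              e_new = e_cur + 2 * int(spins[k]) * int(spins[k - 1] + spins[(k + 1) % L])
              if rng.random() < np.exp(ln_g[e_cur] - ln_g[e_new]):
                  spins[k] *= -1
                  e_cur = e_new
              ln_g[e_cur] += ln_f                         # Wang-Landau update of ln g(E)
              hist[e_cur] += 1
          visited = np.array([c for c in hist.values() if c > 0])
          if visited.min() > 0.8 * visited.mean():        # crude flatness criterion
              hist = dict.fromkeys(hist, 0)
              ln_f /= 2.0                                 # refine the modification factor
          if ln_f < 1e-4:
              break

      print({e: round(v - ln_g[-L], 2) for e, v in ln_g.items()})   # ln g(E) relative to E = -L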

  12. Indicator organisms in meat and poultry slaughter operations: their potential use in process control and the role of emerging technologies.

    PubMed

    Saini, Parmesh K; Marks, Harry M; Dreyfuss, Moshe S; Evans, Peter; Cook, L Victor; Dessai, Uday

    2011-08-01

    Measuring commonly occurring, nonpathogenic organisms on poultry products may be used for designing statistical process control systems that could result in reductions of pathogen levels. The extent of pathogen level reduction that could be obtained from actions resulting from monitoring these measurements over time depends upon the degree of understanding cause-effect relationships between processing variables, selected output variables, and pathogens. For such measurements to be effective for controlling or improving processing to some capability level within the statistical process control context, sufficiently frequent measurements would be needed to help identify processing deficiencies. Ultimately the correct balance of sampling and resources is determined by those characteristics of deficient processing that are important to identify. We recommend strategies that emphasize flexibility, depending upon sampling objectives. Coupling the measurement of levels of indicator organisms with practical emerging technologies and suitable on-site platforms that decrease the time between sample collections and interpreting results would enhance monitoring process control.

  13. Phase Space Dissimilarity Measures for Structural Health Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bubacz, Jacob A; Chmielewski, Hana T; Pape, Alexander E

    A novel method for structural health monitoring (SHM), known as the Phase Space Dissimilarity Measures (PSDM) approach, is proposed and developed. The patented PSDM approach has already been developed and demonstrated for a variety of equipment and biomedical applications. Here, we investigate SHM of bridges via analysis of time serial accelerometer measurements. This work has four aspects. The first is algorithm scalability, which was found to scale linearly from one processing core to four cores. Second, the same data are analyzed to determine how the use of the PSDM approach affects sensor placement. We found that a relatively low-density placement sufficiently captures the dynamics of the structure. Third, the same data are analyzed by unique combinations of accelerometer axes (vertical, longitudinal, and lateral with respect to the bridge) to determine how the choice of axes affects the analysis. The vertical axis is found to provide satisfactory SHM data. Fourth, statistical methods were investigated to validate the PSDM approach for this application, yielding statistically significant results.

  14. Data on xylem sap proteins from Mn- and Fe-deficient tomato plants obtained using shotgun proteomics.

    PubMed

    Ceballos-Laita, Laura; Gutierrez-Carbonell, Elain; Takahashi, Daisuke; Abadía, Anunciación; Uemura, Matsuo; Abadía, Javier; López-Millán, Ana Flor

    2018-04-01

    This article contains consolidated proteomic data obtained from xylem sap collected from tomato plants grown in Fe- and Mn-sufficient control, as well as Fe-deficient and Mn-deficient conditions. Data presented here cover proteins identified and quantified by shotgun proteomics and Progenesis LC-MS analyses: proteins identified with at least two peptides and showing changes statistically significant (ANOVA; p ≤ 0.05) and above a biologically relevant selected threshold (fold ≥ 2) between treatments are listed. The comparison between Fe-deficient, Mn-deficient and control xylem sap samples using a multivariate statistical data analysis (Principal Component Analysis, PCA) is also included. Data included in this article are discussed in depth in the research article entitled "Effects of Fe and Mn deficiencies on the protein profiles of tomato ( Solanum lycopersicum) xylem sap as revealed by shotgun analyses" [1]. This dataset is made available to support the cited study as well to extend analyses at a later stage.

  15. A statistical model to estimate the refractivity turbulence structure constant C_n^2 in the free atmosphere

    NASA Technical Reports Server (NTRS)

    Warnock, J. M.; Vanzandt, T. E.

    1986-01-01

    A computer program has been tested and documented (Warnock and VanZandt, 1985) that estimates mean values of the refractivity turbulence structure constant in the stable free atmosphere from standard National Weather Service balloon data or an equivalent data set. The program is based on the statistical model for the occurrence of turbulence developed by VanZandt et al. (1981). Height profiles of the estimated refractivity turbulence structure constant agree well with profiles measured by the Sunset radar with a height resolution of about 1 km. The program also estimates the energy dissipation rate (epsilon), but because of the lack of suitable observations of epsilon, the model for epsilon has not yet been evaluated sufficiently to be used in routine applications. Vertical profiles of the refractivity turbulence structure constant were compared with profiles measured by both radar and optical remote sensors and good agreement was found. However, at times the scintillometer measurements were less than both the radar and model values.

  16. Improved Statistics for Determining the Patterson Symmetry from Unmerged Diffraction Intensities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sauter, Nicholas K.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    We examine procedures for detecting the point-group symmetry of macromolecular datasets and propose enhancements. To validate a point-group, it is sufficient to compare pairs of Bragg reflections that are related by each of the group's component symmetry operators. Correlation is commonly expressed in the form of a single statistical quantity (such as Rmerge) that incorporates information from all of the observed reflections. However, the usual practice of weighting all pairs of symmetry-related intensities equally can obscure the fact that the various symmetry operators of the point-group contribute differing fractions of the total set. In some cases where particular symmetry elements are significantly under-represented, statistics calculated globally over all observations do not permit conclusions about the point-group and Patterson symmetry. The problem can be avoided by repartitioning the data in a way that explicitly takes note of individual operators. The new analysis methods, incorporated into the program LABELIT (cci.lbl.gov/labelit), can be performed early enough during data acquisition, and are quick enough, that it is feasible to pause to optimize the data collection strategy.
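
    The per-operator idea can be illustrated with a toy calculation: compute a correlation coefficient separately for the intensity pairs related by each candidate symmetry operator, instead of a single global merging statistic. The data below are synthetic and the operator names hypothetical; this is not the LABELIT implementation.

      # Per-operator correlation of symmetry-related intensities on synthetic data.
      import numpy as np

      rng = np.random.default_rng(4)

      def per_operator_cc(pairs_by_operator):
          """pairs_by_operator: {operator_name: (I_1, I_2) arrays of related intensities}."""
          return {op: float(np.corrcoef(i1, i2)[0, 1])
                  for op, (i1, i2) in pairs_by_operator.items()}

      true_I = rng.gamma(2.0, 100.0, size=500)
      noisy = lambda frac: true_I + rng.normal(0, frac * true_I.std(), size=true_I.size)

      pairs = {
          "2-fold (present)": (noisy(0.1), noisy(0.1)),                    # high CC expected
          "2-fold (absent)":  (noisy(0.1), rng.permutation(noisy(0.1))),   # low CC expected
      }
      print(per_operator_cc(pairs))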

  17. Statistical properties of Galactic CMB foregrounds: dust and synchrotron

    NASA Astrophysics Data System (ADS)

    Kandel, D.; Lazarian, A.; Pogosyan, D.

    2018-07-01

    Recent Planck observations have revealed some of the important statistical properties of synchrotron and dust polarization, namely, the B to E mode power and temperature-E (TE) mode cross-correlation. In this paper, we extend our analysis in Kandel et al. that studied the B to E mode power ratio for polarized dust emission to include TE cross-correlation and develop an analogous formalism for synchrotron signal, all using a realistic model of magnetohydrodynamical turbulence. Our results suggest that the Planck results for both synchrotron and dust polarization can be understood if the turbulence in the Galaxy is sufficiently sub-Alfvénic. Making use of the observed poor magnetic field-density correlation, we show that the observed positive TE correlation for dust corresponds to our theoretical expectations. We also show how the B to E ratio as well as the TE cross-correlation can be used to study media magnetization, compressibility, and level of density-magnetic field correlation.

  18. Tunnel ionization of highly excited atoms in a noncoherent laser radiation field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krainov, V.P.; Todirashku, S.S.

    1982-10-01

    A theory is developed of the ionization of highly excited atomic states by a low-frequency field of noncoherent laser radiation with a large number of modes. Analytic formulas are obtained for the probability of the tunnel ionization in such a field. An analysis is made of the case of the hydrogen atom when the parabolic quantum numbers are sufficiently good in the low-frequency limit, as well as of the case of highly excited states of complex atoms when these states are characterized by a definite orbital momentum and parity. It is concluded that the statistical factor representing the ratio of the probability in a stochastic field to the probability in a monochromatic field decreases, compared with the case of a short-range potential, if the ''Coulomb tail'' is included. It is shown that at a given field intensity the statistical factor decreases on increase in the principal quantum number of the state being ionized.

  19. Statistical description of the motion of dislocation kinks in a random field of impurities adsorbed by a dislocation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petukhov, B. V., E-mail: petukhov@ns.crys.ras.r

    2010-01-15

    A model has been proposed for describing the influence of impurities adsorbed by dislocation cores on the mobility of dislocation kinks in materials with a high crystalline relief (Peierls barriers). The delay time spectrum of kinks at statistical fluctuations of the impurity density has been calculated for a sufficiently high energy of interaction between impurities and dislocations when the migration potential is not reduced to a random Gaussian potential. It has been shown that fluctuations in the impurity distribution substantially change the character of the migration of dislocation kinks due to the slow decrease in the probability of long delay times. The dependences of the position of the boundary of the dynamic phase transition to a sublinear drift of kinks x ∝ t^δ (δ < 1) and the characteristics of the anomalous mobility on the physical parameters (stress, impurity concentration, experimental temperature, etc.) have been calculated.

  20. Prediagnostic Plasma 25-Hydroxyvitamin D and Pancreatic Cancer Survival

    PubMed Central

    Yuan, Chen; Qian, Zhi Rong; Babic, Ana; Morales-Oyarvide, Vicente; Rubinson, Douglas A.; Kraft, Peter; Ng, Kimmie; Bao, Ying; Giovannucci, Edward L.; Ogino, Shuji; Stampfer, Meir J.; Gaziano, John Michael; Sesso, Howard D.; Buring, Julie E.; Cochrane, Barbara B.; Chlebowski, Rowan T.; Snetselaar, Linda G.; Manson, JoAnn E.; Fuchs, Charles S.

    2016-01-01

    Purpose Although vitamin D inhibits pancreatic cancer proliferation in laboratory models, the association of plasma 25-hydroxyvitamin D [25(OH)D] with patient survival is largely unexplored. Patients and Methods We analyzed survival among 493 patients from five prospective US cohorts who were diagnosed with pancreatic cancer from 1984 to 2008. We estimated hazard ratios (HRs) for death by plasma level of 25(OH)D (insufficient, < 20 ng/mL; relative insufficiency, 20 to < 30 ng/mL; sufficient ≥ 30 ng/mL) by using Cox proportional hazards regression models adjusted for age, cohort, race and ethnicity, smoking, diagnosis year, stage, and blood collection month. We also evaluated 30 tagging single-nucleotide polymorphisms in the vitamin D receptor gene, requiring P < .002 (0.05 divided by 30 genotyped variants) for statistical significance. Results Mean prediagnostic plasma level of 25(OH)D was 24.6 ng/mL, and 165 patients (33%) were vitamin D insufficient. Compared with patients with insufficient levels, multivariable-adjusted HRs for death were 0.79 (95% CI, 0.48 to 1.29) for patients with relative insufficiency and 0.66 (95% CI, 0.49 to 0.90) for patients with sufficient levels (P trend = .01). These results were unchanged after further adjustment for body mass index and history of diabetes (P trend = .02). The association was strongest among patients with blood collected within 5 years of diagnosis, with an HR of 0.58 (95% CI, 0.35 to 0.98) comparing patients with sufficient to patients with insufficient 25(OH)D levels. No single-nucleotide polymorphism at the vitamin D receptor gene met our corrected significance threshold of P < .002; rs7299460 was most strongly associated with survival (HR per minor allele, 0.80; 95% CI, 0.68 to 0.95; P = .01). Conclusion We observed longer overall survival in patients with pancreatic cancer who had sufficient prediagnostic plasma levels of 25(OH)D. PMID:27325858

  1. [Analysis on sleep duration of 6-12 years old school children in school-day in 8 provinces, China].

    PubMed

    Shi, Wenhui; Zhai, Yi; Li, Weirong; Shen, Chong; Shi, Xiaoming

    2015-05-01

    To analyze the factors influencing the school-day sleep duration of school children aged 6-12 years in 8 provinces in China. A cross-sectional study was conducted from September to November 2010 among 20,603 children aged 6-12 years, selected through stratified random cluster sampling in 8 provinces (municipalities and autonomous regions) of China with different geographic characteristics and levels of economic development, to record their sleep duration on school days and related habits. The t test and χ2 test were used to compare the sleep duration of the children. Multivariate stepwise logistic regression analysis was conducted to identify the influencing factors. The survey indicated that the daily average sleep duration of the children on school days was 9.11 hours. The proportions of the children with seriously insufficient sleep, insufficient sleep and sufficient sleep were 32.82% (7,672/20,603), 39.70% (8,179/20,603) and 27.48% (5,662/20,603), respectively. The children's sleep duration declined with age, as did the proportion of children with seriously insufficient sleep. There were no significant differences in sleep duration by sex, urban versus rural residence, or household income level among the children surveyed, and there were no sex-specific differences in the proportions of children with seriously insufficient, insufficient and sufficient sleep; however, these proportions differed statistically between urban and rural areas and among regions with different economic levels. The proportions of children with seriously insufficient sleep and with sufficient sleep were higher in rural areas than in urban areas (χ2=59.96, χ2=45.47, P<0.05), while the proportion of children with insufficient sleep was lower in rural areas than in urban areas. In the economically developed region, the proportion of children with insufficient sleep was the lowest, and the difference was statistically significant. After adjusting for sex, weight, diet and exercise time, multivariate logistic regression analysis showed that the factors favoring children getting 10 hours of sleep every day included a high-protein diet, exercise, high household economic status and living in an urban area. The problem of school children having insufficient sleep was serious in China, especially in rural areas.

  2. Comparison of the Physical Activity and Sedentary Behaviour Assessment Questionnaire and the Short-Form International Physical Activity Questionnaire: An Analysis of Health Survey for England Data

    PubMed Central

    Scholes, Shaun; Bridges, Sally; Ng Fat, Linda; Mindell, Jennifer S.

    2016-01-01

    Background The Physical Activity and Sedentary Behaviour Assessment Questionnaire (PASBAQ), used within the Health Survey for England (HSE) at 5-yearly intervals, is not included annually due to funding and interview-length constraints. Policy-makers and data-users are keen to consider shorter instruments such as the Short-form International Physical Activity Questionnaire (IPAQ) for the annual survey. Both questionnaires were administered in HSE 2012, enabling comparative assessment in a random sample of 1252 adults. Methods Relative agreement using prevalence-adjusted bias-adjusted Kappa (PABAK) statistics was estimated for: sufficient aerobic activity (moderate-to-vigorous physical activity [MVPA] ≥150minutes/week); inactivity (MVPA<30minutes/week); and excessive sitting (≥540minutes/weekday). Cross-sectional associations with health outcomes were compared across tertiles of MVPA and tertiles of sitting time using logistic regression with tests for linear trend. Results Compared with PASBAQ data, IPAQ-assessed estimates of sufficient aerobic activity and inactivity were higher and lower, respectively; estimates of excessive sitting were higher. Demographic patterns in prevalence were similar. Agreement using PABAK statistics was fair-to-moderate for sufficient aerobic activity (0.32–0.49), moderate-to-substantial for inactivity (0.42–0.74), and moderate-to-substantial for excessive sitting (0.49–0.75). As with the PASBAQ, IPAQ-assessed MVPA and sitting each showed graded associations with mental well-being (women: P for trend = 0.003 and 0.004, respectively) and obesity (women: P for trend = 0.007 and 0.014, respectively). Conclusions Capturing habitual physical activity and sedentary behaviour through brief questionnaires is complex. Differences in prevalence estimates can reflect differences in questionnaire structure and content rather than differences in reported behaviour. Treating all IPAQ-assessed walking as moderate-intensity contributed to the differences in prevalence estimates. PASBAQ data will be used for population surveillance every 4 to 5 years. The current version of the Short-form IPAQ was included in HSE 2013–14 to enable more frequent assessment of physical activity and sedentary behaviour; a modified version with different item-ordering and additional questions on walking-pace and effort was included in HSE 2015. PMID:26990093
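
    For a binary classification, the prevalence-adjusted bias-adjusted kappa (PABAK) reduces to 2*p_o - 1, where p_o is the observed proportion of agreement between the two instruments. A minimal sketch with toy data (not the HSE 2012 sample):

      # PABAK for two binary classifications: 2 * observed agreement - 1.
      def pabak(labels_a, labels_b):
          agree = sum(a == b for a, b in zip(labels_a, labels_b))
          return 2 * agree / len(labels_a) - 1

      ipaq   = [1, 1, 0, 1, 0, 1, 1, 0]   # 1 = classified as sufficiently active (toy data)
      pasbaq = [1, 0, 0, 1, 0, 1, 1, 1]
      print(pabak(ipaq, pasbaq))          # 6/8 agreement -> PABAK = 0.5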

  3. Digital Holographic Microscopy, a Method for Detection of Microorganisms in Plume Samples from Enceladus and Other Icy Worlds

    PubMed Central

    Bedrossian, Manuel; Lindensmith, Chris

    2017-01-01

    Detection of extant microbial life on Earth and elsewhere in the Solar System requires the ability to identify and enumerate micrometer-scale, essentially featureless cells. On Earth, bacteria are usually enumerated by culture plating or epifluorescence microscopy. Culture plates require long incubation times and can only count culturable strains, and epifluorescence microscopy requires extensive staining and concentration of the sample and instrumentation that is not readily miniaturized for space. Digital holographic microscopy (DHM) represents an alternative technique with no moving parts and higher throughput than traditional microscopy, making it potentially useful in space for detection of extant microorganisms provided that sufficient numbers of cells can be collected. Because sample collection is expected to be the limiting factor for space missions, especially to outer planets, it is important to quantify the limits of detection of any proposed technique for extant life detection. Here we use both laboratory and field samples to measure the limits of detection of an off-axis digital holographic microscope (DHM). A statistical model is used to estimate any instrument's probability of detection at various bacterial concentrations based on the optical performance characteristics of the instrument, as well as estimate the confidence interval of detection. This statistical model agrees well with the limit of detection of 10³ cells/mL that was found experimentally with laboratory samples. In environmental samples, active cells were immediately evident at concentrations of 10⁴ cells/mL. Published estimates of cell densities for Enceladus plumes yield up to 10⁴ cells/mL, which are well within the off-axis DHM's limits of detection to confidence intervals greater than or equal to 95%, assuming sufficient sample volumes can be collected. The quantitative phase imaging provided by DHM allowed minerals to be distinguished from cells. Off-axis DHM's ability for rapid low-level bacterial detection and counting shows its viability as a technique for detection of extant microbial life provided that the cells can be captured intact and delivered to the sample chamber in a sufficient volume of liquid for imaging. Key Words: In situ life detection—Extant microorganisms—Holographic microscopy—Ocean Worlds—Enceladus—Imaging. Astrobiology 17, 913–925. PMID:28708412
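
    A simple way to see how detection probability scales with concentration is a Poisson-counting assumption: if cells are randomly dispersed at concentration c (cells/mL) and a volume V (mL) is interrogated, the chance of at least one cell appearing is 1 - exp(-cV). This is an illustrative assumption, not necessarily the authors' exact statistical model, and the imaged volume used below is a placeholder.

      # Poisson sketch of detection probability vs. concentration.
      import math

      def detection_probability(concentration_per_ml, imaged_volume_ml):
          return 1.0 - math.exp(-concentration_per_ml * imaged_volume_ml)

      for c in (1e2, 1e3, 1e4):
          # assume ~1 microliter interrogated per measurement (placeholder value)
          print(f"{c:8.0f} cells/mL -> P(detect >= 1 cell) = {detection_probability(c, 1e-3):.3f}")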

  4. Much ado about two: reconsidering retransformation and the two-part model in health econometrics.

    PubMed

    Mullahy, J

    1998-06-01

    In health economics applications involving outcomes (y) and covariates (x), it is often the case that the central inferential problems of interest involve E[y|x] and its associated partial effects or elasticities. Many such outcomes have two fundamental statistical properties: y ≥ 0; and the outcome y = 0 is observed with sufficient frequency that the zeros cannot be ignored econometrically. This paper (1) describes circumstances where the standard two-part model with homoskedastic retransformation will fail to provide consistent inferences about important policy parameters; and (2) demonstrates some alternative approaches that are likely to prove helpful in applications.
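
    For orientation, the textbook two-part construction whose retransformation step the paper scrutinizes looks as follows: a logit for P(y > 0 | x), a log-linear OLS for E[log y | y > 0, x], and Duan's smearing factor to return to the level of y. The smearing step assumes homoskedastic errors, which is exactly the assumption the paper questions. Data and coefficients below are simulated for illustration only.

      # Two-part model with Duan's smearing retransformation on simulated data.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n = 2000
      x = rng.normal(size=n)
      X = sm.add_constant(x)
      any_use = rng.random(n) < 1 / (1 + np.exp(-(0.2 + 0.8 * x)))       # who has any expenditure
      y = np.where(any_use, np.exp(1.0 + 0.5 * x + rng.normal(0, 0.7, n)), 0.0)

      part1 = sm.Logit((y > 0).astype(int), X).fit(disp=0)               # P(y > 0 | x)
      pos = y > 0
      part2 = sm.OLS(np.log(y[pos]), X[pos]).fit()                       # E[log y | y > 0, x]

      resid = np.log(y[pos]) - part2.predict(X[pos])
      smear = np.exp(resid).mean()                                       # Duan's smearing factor
      E_y_given_x = part1.predict(X) * np.exp(part2.predict(X)) * smear  # E[y | x]
      print(E_y_given_x[:5].round(2))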

  5. Multidimensional analysis of data obtained in experiments with X-ray emulsion chambers and extensive air showers

    NASA Technical Reports Server (NTRS)

    Chilingaryan, A. A.; Galfayan, S. K.; Zazyan, M. Z.; Dunaevsky, A. M.

    1985-01-01

    Nonparametric statistical methods are used to carry out the quantitative comparison of the model and the experimental data. The same methods enable one to select the events initiated by heavy nuclei and to calculate the proportion of such events. For this purpose, it is necessary to have artificial (simulated) events that describe the experiment sufficiently well. At present, the model with the small scaling violation in the fragmentation region is the closest to the experiments. Therefore, the treatment of gamma families obtained in the Pamir experiment is currently being carried out with the application of these models.

  6. The Clinical Ethnographic Interview: A user-friendly guide to the cultural formulation of distress and help seeking

    PubMed Central

    Arnault, Denise Saint; Shimabukuro, Shizuka

    2013-01-01

    Transcultural nursing, psychiatry, and medical anthropology have theorized that practitioners and researchers need more flexible instruments to gather culturally relevant illness experience, meaning, and help seeking. The state of the science is sufficiently developed to allow standardized yet ethnographically sound protocols for assessment. However, vigorous calls for culturally adapted assessment models have yielded little real change in routine practice. This paper describes the conversion of the Diagnostic and Statistical Manual IV, Appendix I Outline for Cultural Formulation into a user-friendly Clinical Ethnographic Interview (CEI), and provides clinical examples of its use in a sample of highly distressed Japanese women. PMID:22194348

  7. Pulsar statistics and their interpretations

    NASA Technical Reports Server (NTRS)

    Arnett, W. D.; Lerche, I.

    1981-01-01

    It is shown that a lack of knowledge concerning interstellar electron density, the true spatial distribution of pulsars, the radio luminosity source distribution of pulsars, the real ages and real aging rates of pulsars, the beaming factor (and other unknown factors causing the known sample of about 350 pulsars to be incomplete to an unknown degree) is sufficient to cause a minimum uncertainty of a factor of 20 in any attempt to determine pulsar birth or death rates in the Galaxy. It is suggested that this uncertainty must impact on suggestions that the pulsar rates can be used to constrain possible scenarios for neutron star formation and stellar evolution in general.

  8. Standard deviation of scatterometer measurements from space.

    NASA Technical Reports Server (NTRS)

    Fischer, R. E.

    1972-01-01

    The standard deviation of scatterometer measurements has been derived under assumptions applicable to spaceborne scatterometers. Numerical results are presented which show that, with sufficiently long integration times, input signal-to-noise ratios below unity do not cause excessive degradation of measurement accuracy. The effects on measurement accuracy due to varying integration times and changing the ratio of signal bandwidth to IF filter-noise bandwidth are also plotted. The results of the analysis may resolve a controversy by showing that in fact statistically useful scatterometer measurements can be made from space using a 20-W transmitter, such as will be used on the S-193 experiment for Skylab-A.

  9. Fractal structure enables temporal prediction in music.

    PubMed

    Rankin, Summer K; Fink, Philip W; Large, Edward W

    2014-10-01

    1/f serial correlations and statistical self-similarity (fractal structure) have been measured in various dimensions of musical compositions. Musical performances also display 1/f properties in expressive tempo fluctuations, and listeners predict tempo changes when synchronizing. Here the authors show that the 1/f structure is sufficient for listeners to predict the onset times of upcoming musical events. These results reveal what information listeners use to anticipate events in complex, non-isochronous acoustic rhythms, and this will entail innovative models of temporal synchronization. This finding could improve therapies for Parkinson's and related disorders and inform deeper understanding of how endogenous neural rhythms anticipate events in complex, temporally structured communication signals.

  10. Goodness-of-fit tests for open capture-recapture models

    USGS Publications Warehouse

    Pollock, K.H.; Hines, J.E.; Nichols, J.D.

    1985-01-01

    General goodness-of-fit tests for the Jolly-Seber model are proposed. These tests are based on conditional arguments using minimal sufficient statistics. The tests are shown to be of simple hypergeometric form so that a series of independent contingency table chi-square tests can be performed. The relationship of these tests to other proposed tests is discussed. This is followed by a simulation study of the power of the tests to detect departures from the assumptions of the Jolly-Seber model. Some meadow vole capture-recapture data are used to illustrate the testing procedure which has been implemented in a computer program available from the authors.
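
    The component tests described above reduce to independent contingency-table chi-square tests. As a minimal sketch of that kind of test (the counts below are hypothetical, not the meadow vole data, and the row/column definitions are only one of the contrasts such tests use):

      # Contingency-table chi-square test on a hypothetical capture-recapture contrast:
      # rows = whether an animal released at occasion i was ever seen again,
      # columns = previously marked vs newly marked at occasion i.
      from scipy.stats import chi2_contingency

      table = [[34, 21],   # seen again:      previously marked, newly marked
               [16, 29]]   # never seen again
      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")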

  11. Base-flow characteristics of streams in the Valley and Ridge, the Blue Ridge, and the Piedmont physiographic provinces of Virginia

    USGS Publications Warehouse

    Nelms, David L.; Harlow, George E.; Hayes, Donald C.

    1997-01-01

    Growth within the Valley and Ridge, Blue Ridge, and Piedmont physiographic provinces of Virginia has focused concern about allocation of surface-water flow and increased demands on the ground-water resources. Potential surface-water yield was determined from statistical analysis of base-flow characteristics of streams. Base-flow characteristics also may provide a relative indication of the potential ground-water yield for areas that lack sufficient specific-capacity or well-yield data; however, other factors need to be considered, such as geologic structure, lithology, precipitation, relief, and the degree of hydraulic interconnection between the regolith and bedrock.

  12. Statistical investigation of avalanches of three-dimensional small-world networks and their boundary and bulk cross-sections

    NASA Astrophysics Data System (ADS)

    Najafi, M. N.; Dashti-Naserabadi, H.

    2018-03-01

    In many situations we are interested in the propagation of energy in some portions of a three-dimensional system with dilute long-range links. In this paper, a sandpile model is defined on the three-dimensional small-world network with real dissipative boundaries and the energy propagation is studied in three dimensions as well as the two-dimensional cross-sections. Two types of cross-sections are defined in the system, one in the bulk and another in the system boundary. The motivation of this is to make clear how the statistics of the avalanches in the bulk cross-section tend to the statistics of the dissipative avalanches, defined in the boundaries, as the concentration of long-range links (α) increases. This trend is numerically shown to be a power law in a manner described in the paper. Two regimes of α are considered in this work. For sufficiently small values of α the dominant behavior of the system is just like that of the regular BTW model, whereas for intermediate values the behavior is nontrivial, with some exponents that are reported in the paper. It is shown that the spatial extent up to which the statistics is similar to the regular BTW model scales with α just like the dissipative BTW model with dissipation factor (mass in the corresponding ghost model) m² ∼ α, for the three-dimensional system as well as its two-dimensional cross-sections.
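
    The core toppling dynamics underlying the study can be sketched with a minimal BTW sandpile on a small 2D lattice with open (dissipative) boundaries, recording avalanche sizes. The paper's setting (a 3D lattice plus a fraction α of long-range small-world links, and the bulk versus boundary cross-sections) is omitted; lattice size, threshold, and number of driving steps are illustrative.

      # Minimal 2D BTW sandpile with open boundaries; grains crossing the edge dissipate.
      import numpy as np

      rng = np.random.default_rng(6)
      L, z_c = 32, 4
      grid = np.zeros((L, L), dtype=int)

      def relax(grid):
          """Topple until stable; return avalanche size (number of topplings)."""
          size = 0
          while True:
              unstable = np.argwhere(grid >= z_c)
              if len(unstable) == 0:
                  return size
              for i, j in unstable:
                  grid[i, j] -= z_c
                  size += 1
                  for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                      ni, nj = i + di, j + dj
                      if 0 <= ni < L and 0 <= nj < L:
                          grid[ni, nj] += 1

      sizes = []
      for _ in range(10_000):
          i, j = rng.integers(L, size=2)
          grid[i, j] += 1                      # drive: add one grain at a random site
          sizes.append(relax(grid))
      sizes = np.array(sizes)
      print("mean nonzero avalanche size:", sizes[sizes > 0].mean())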

  13. Role of spatial inhomogenity in GPCR dimerisation predicted by receptor association-diffusion models

    NASA Astrophysics Data System (ADS)

    Deshpande, Sneha A.; Pawar, Aiswarya B.; Dighe, Anish; Athale, Chaitanya A.; Sengupta, Durba

    2017-06-01

    G protein-coupled receptor (GPCR) association is an emerging paradigm with far reaching implications in the regulation of signalling pathways and therapeutic interventions. Recent super resolution microscopy studies have revealed that receptor dimer steady state exhibits sub-second dynamics. In particular the GPCRs, muscarinic acetylcholine receptor M1 (M1MR) and formyl peptide receptor (FPR), have been demonstrated to exhibit a fast association/dissociation kinetics, independent of ligand binding. In this work, we have developed a spatial kinetic Monte Carlo model to investigate receptor homo-dimerisation at a single receptor resolution. Experimentally measured association/dissociation kinetic parameters and diffusion coefficients were used as inputs to the model. To test the effect of membrane spatial heterogeneity on the simulated steady state, simulations were compared to experimental statistics of dimerisation. In the simplest case the receptors are assumed to be diffusing in a spatially homogeneous environment, while spatial heterogeneity is modelled to result from crowding, membrane micro-domains and cytoskeletal compartmentalisation or ‘corrals’. We show that a simple association-diffusion model is sufficient to reproduce M1MR association statistics, but fails to reproduce FPR statistics despite comparable kinetic constants. A parameter sensitivity analysis is required to reproduce the association statistics of FPR. The model reveals the complex interplay between cytoskeletal components and their influence on receptor association kinetics within the features of the membrane landscape. These results constitute an important step towards understanding the factors modulating GPCR organisation.

  14. Systematic survey of the design, statistical analysis, and reporting of studies published in the 2008 volume of the Journal of Cerebral Blood Flow and Metabolism.

    PubMed

    Vesterinen, Hanna M; Vesterinen, Hanna V; Egan, Kieren; Deister, Amelie; Schlattmann, Peter; Macleod, Malcolm R; Dirnagl, Ulrich

    2011-04-01

    Translating experimental findings into clinically effective therapies is one of the major bottlenecks of modern medicine. As this has been particularly true for cerebrovascular research, attention has turned to the quality and validity of experimental cerebrovascular studies. We set out to assess the study design, statistical analyses, and reporting of cerebrovascular research. We assessed all original articles published in the Journal of Cerebral Blood Flow and Metabolism during the year 2008 against a checklist designed to capture the key attributes relating to study design, statistical analyses, and reporting. A total of 156 original publications were included (animal, in vitro, human). Few studies reported a primary research hypothesis, statement of purpose, or measures to safeguard internal validity (such as randomization, blinding, exclusion or inclusion criteria). Many studies lacked sufficient information regarding methods and results to form a reasonable judgment about their validity. In nearly 20% of studies, statistical tests were either not appropriate or information to allow assessment of appropriateness was lacking. This study identifies a number of factors that should be addressed if the quality of research in basic and translational biomedicine is to be improved. We support the widespread implementation of the ARRIVE (Animal Research Reporting In Vivo Experiments) statement for the reporting of experimental studies in biomedicine, for improving training in proper study design and analysis, and that reviewers and editors adopt a more constructively critical approach in the assessment of manuscripts for publication.

  15. Systematic survey of the design, statistical analysis, and reporting of studies published in the 2008 volume of the Journal of Cerebral Blood Flow and Metabolism

    PubMed Central

    Vesterinen, Hanna V; Egan, Kieren; Deister, Amelie; Schlattmann, Peter; Macleod, Malcolm R; Dirnagl, Ulrich

    2011-01-01

    Translating experimental findings into clinically effective therapies is one of the major bottlenecks of modern medicine. As this has been particularly true for cerebrovascular research, attention has turned to the quality and validity of experimental cerebrovascular studies. We set out to assess the study design, statistical analyses, and reporting of cerebrovascular research. We assessed all original articles published in the Journal of Cerebral Blood Flow and Metabolism during the year 2008 against a checklist designed to capture the key attributes relating to study design, statistical analyses, and reporting. A total of 156 original publications were included (animal, in vitro, human). Few studies reported a primary research hypothesis, statement of purpose, or measures to safeguard internal validity (such as randomization, blinding, exclusion or inclusion criteria). Many studies lacked sufficient information regarding methods and results to form a reasonable judgment about their validity. In nearly 20% of studies, statistical tests were either not appropriate or information to allow assessment of appropriateness was lacking. This study identifies a number of factors that should be addressed if the quality of research in basic and translational biomedicine is to be improved. We support the widespread implementation of the ARRIVE (Animal Research Reporting In Vivo Experiments) statement for the reporting of experimental studies in biomedicine, improved training in proper study design and analysis, and the adoption by reviewers and editors of a more constructively critical approach in the assessment of manuscripts for publication. PMID:21157472

  16. Optimism bias leads to inconclusive results - an empirical study

    PubMed Central

    Djulbegovic, Benjamin; Kumar, Ambuj; Magazin, Anja; Schroen, Anneke T.; Soares, Heloisa; Hozo, Iztok; Clarke, Mike; Sargent, Daniel; Schell, Michael J.

    2010-01-01

    Objective: Optimism bias refers to unwarranted belief in the efficacy of new therapies. We assessed the impact of optimism bias on the proportion of trials that did not answer their research question successfully, and explored whether poor accrual or optimism bias is responsible for inconclusive results. Study Design: Systematic review. Setting: Retrospective analysis of a consecutive series of phase III randomized controlled trials (RCTs) performed under the aegis of National Cancer Institute Cooperative groups. Results: 359 trials (374 comparisons) enrolling 150,232 patients were analyzed. 70% (262/374) of the trials generated conclusive results according to the statistical criteria. Investigators made definitive statements related to the treatment preference in 73% (273/374) of studies. Investigators' judgments and statistical inferences were concordant in 75% (279/374) of trials. Investigators consistently overestimated their expected treatment effects, but to a significantly larger extent for inconclusive trials. The median ratio of expected over observed hazard ratio or odds ratio was 1.34 (range 0.19-15.40) in conclusive trials compared to 1.86 (range 1.09-12.00) in inconclusive studies (p<0.0001). Only 17% of the trials had treatment effects that matched original researchers' expectations. Conclusion: Formal statistical inference is sufficient to answer the research question in 75% of RCTs. The answers to the other 25% depend mostly on subjective judgments, which at times are in conflict with statistical inference. Optimism bias significantly contributes to inconclusive results. PMID:21163620

  17. Optimism bias leads to inconclusive results-an empirical study.

    PubMed

    Djulbegovic, Benjamin; Kumar, Ambuj; Magazin, Anja; Schroen, Anneke T; Soares, Heloisa; Hozo, Iztok; Clarke, Mike; Sargent, Daniel; Schell, Michael J

    2011-06-01

    Optimism bias refers to unwarranted belief in the efficacy of new therapies. We assessed the impact of optimism bias on the proportion of trials that did not answer their research question successfully and explored whether poor accrual or optimism bias is responsible for inconclusive results. Systematic review. Retrospective analysis of a consecutive series of phase III randomized controlled trials (RCTs) performed under the aegis of National Cancer Institute Cooperative groups. Three hundred fifty-nine trials (374 comparisons) enrolling 150,232 patients were analyzed. Seventy percent (262 of 374) of the trials generated conclusive results according to the statistical criteria. Investigators made definitive statements related to the treatment preference in 73% (273 of 374) of studies. Investigators' judgments and statistical inferences were concordant in 75% (279 of 374) of trials. Investigators consistently overestimated their expected treatment effects but to a significantly larger extent for inconclusive trials. The median ratio of expected to observed hazard ratio or odds ratio was 1.34 (range: 0.19-15.40) in conclusive trials compared with 1.86 (range: 1.09-12.00) in inconclusive studies (P<0.0001). Only 17% of the trials had treatment effects that matched original researchers' expectations. Formal statistical inference is sufficient to answer the research question in 75% of RCTs. The answers to the other 25% depend mostly on subjective judgments, which at times are in conflict with statistical inference. Optimism bias significantly contributes to inconclusive results. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Temperature variation during apicectomy with Er:YAG laser.

    PubMed

    Bodrumlu, Emre; Keskiner, Ilker; Sumer, Mahmut; Sumer, A Pinar; Telcıoglu, N Tuba

    2012-08-01

    The purpose of this in vitro study was to evaluate the temperature generated by the Er:YAG laser at three different pulse durations for apicectomy, compared with a tungsten bur and a surgical saw. Apicectomy is an endodontic surgical procedure performed to remove the root apex and curette the adjacent periapical tissue because of lesions of the apical area that are not healing properly. Sixty single-rooted extracted human teeth were resected by three cutting methods: tungsten bur, surgical saw, and Er:YAG laser irradiation at three pulse durations (50 μs, 100 μs, and 300 μs). Teflon-insulated, type K thermocouples were used to measure temperature changes during the apicectomy process. Data were analyzed using the general linear models procedure of the SPSS statistical software. Although there was no statistically significant difference among groups in mean temperature change 1 mm from the cutting site, there was a statistically significant difference among groups in mean temperature change 3 mm from the cutting site. Additionally, there was a statistically significant difference among groups in the total time required for apicectomy. Laser irradiation with a pulse duration of 50 μs produced the lowest temperature rise and required the shortest time for apicectomy of the three pulse durations. Nevertheless, the Er:YAG laser could be used safely for apical resection in endodontics at all three pulse durations in the presence of sufficient water.

  19. Statistical Learning Theory for High Dimensional Prediction: Application to Criterion-Keyed Scale Development

    PubMed Central

    Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul

    2016-01-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different from maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms (Supervised Principal Components, Regularization, and Boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods and as primary analytic tools in discovery-phase research. We conclude that, despite their differences from the classic null-hypothesis testing approach (or perhaps because of them), SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
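    To make the EPE-minimization idea concrete, here is a small illustrative sketch (not the authors' pipeline): cross-validation is used to choose the penalty of an L1-regularized logistic model over a synthetic item pool, and the items with non-zero coefficients form the "keyed" scale. All data and parameter values below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
n_people, n_items = 500, 300                                     # synthetic "item pool"
X = rng.integers(1, 6, size=(n_people, n_items)).astype(float)   # Likert-style responses
true_w = np.zeros(n_items)
true_w[:10] = 0.4                                                # only 10 items truly matter
y = rng.binomial(1, 1 / (1 + np.exp(-(X - 3) @ true_w)))         # synthetic binary outcome
X = (X - X.mean(axis=0)) / X.std(axis=0)                         # standardize for the penalized fit

# Cross-validation estimates out-of-sample (expected) prediction error for each
# candidate penalty; the selected penalty trades model complexity against overfitting.
model = LogisticRegressionCV(Cs=10, cv=5, penalty="l1", solver="saga", max_iter=5000)
model.fit(X, y)
keyed_items = np.flatnonzero(model.coef_[0])   # items retained for the "scale"
print(len(keyed_items), "items keyed; chosen C =", model.C_[0])
```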

  20. Living systematic reviews: 3. Statistical methods for updating meta-analyses.

    PubMed

    Simmonds, Mark; Salanti, Georgia; McKenzie, Joanne; Elliott, Julian

    2017-11-01

    A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence, standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two of them (the law of the iterated logarithm and the Shuster method) control primarily for inflation of the type I error, while the other two (trial sequential analysis and sequential meta-analysis) control for both type I and type II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs. Copyright © 2017 Elsevier Inc. All rights reserved.
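    As a baseline for the updating problem, the sketch below runs a naive cumulative fixed-effect (inverse-variance) meta-analysis, recomputing the pooled estimate and its Z statistic at each update. The study effects are synthetic, and the repeated-testing corrections discussed above (law of the iterated logarithm, trial sequential analysis, and so on) are intentionally not implemented; the point is to show where the uncorrected repeated look occurs.

```python
import numpy as np
from scipy import stats

# Synthetic log odds ratios and standard errors, one entry per newly added study
effects = np.array([-0.30, -0.10, -0.25, 0.05, -0.20])
ses = np.array([0.20, 0.25, 0.15, 0.30, 0.18])

for k in range(1, len(effects) + 1):
    w = 1.0 / ses[:k] ** 2                       # inverse-variance weights
    pooled = np.sum(w * effects[:k]) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    z = pooled / se_pooled
    p = 2 * stats.norm.sf(abs(z))
    # Testing at alpha = 0.05 on every update is exactly the repeated look that
    # inflates the overall type I error in an LSR.
    print(f"update {k}: pooled log-OR = {pooled:+.3f}, z = {z:+.2f}, p = {p:.3f}")
```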

  1. Interaction effects of metals and salinity on biodegradation of a complex hydrocarbon waste.

    PubMed

    Amatya, Prasanna L; Hettiaratchi, Joseph Patrick A; Joshi, Ramesh C

    2006-02-01

    The presence of high levels of salts, resulting from produced brine water disposal at flare pits, and the presence of metals at concentrations sufficient to impact microbial activity are of concern for the bioremediation of flare pit waste in the upstream oil and gas industry. Two slurry-phase biotreatment experiments based on a three-level factorial statistical experimental design were conducted with a flare pit waste. One experiment studied the primary effect of cadmium [Cd(II)] and the interaction between Cd(II) and salinity; the other studied the primary effect of zinc [Zn(II)] and the interaction between Zn(II) and salinity, in both cases on hydrocarbon biodegradation. The results showed 42-52.5% hydrocarbon removal in slurries spiked with Cd and 47-62.5% in the slurries spiked with Zn. The analysis of variance showed that the primary effect of Cd and the Cd-salinity interaction were statistically significant for hydrocarbon degradation. The primary effect of Zn and the Zn-salinity interaction were statistically insignificant, whereas the quadratic effect of Zn was highly significant. The study of the effects of metallic chloro-complexes showed that the total aqueous concentration of Cd or Zn does not give a reliable indication of overall toxicity to microbial activity in the presence of high salinity levels.

  2. Ten reasons why a thermalized system cannot be described by a many-particle wave function

    NASA Astrophysics Data System (ADS)

    Drossel, Barbara

    2017-05-01

    It is widely believed that the underlying reality behind statistical mechanics is a deterministic and unitary time evolution of a many-particle wave function, even though this is in conflict with the irreversible, stochastic nature of statistical mechanics. The usual attempts to resolve this conflict, for instance by appealing to decoherence or eigenstate thermalization, are riddled with problems. This paper considers the theoretical physics of thermalized systems as it is done in practice and shows that all approaches to thermalized systems presuppose, in some form, limits to linear superposition and deterministic time evolution. These considerations include, among others, the classical limit, extensivity, the concepts of entropy and equilibrium, and symmetry breaking in phase transitions and quantum measurement. As a conclusion, the paper suggests that the irreversibility and stochasticity of statistical mechanics should be taken as a real property of nature. It follows that a gas of a macroscopic number N of atoms in thermal equilibrium is best represented by a collection of N wave packets of a size of the order of the thermal de Broglie wavelength, which behave quantum mechanically below this scale but classically sufficiently far beyond it. In particular, these wave packets must localize again after scattering events, which requires stochasticity and indicates a connection to the measurement process.

  3. Statistical tests to compare motif count exceptionalities

    PubMed Central

    Robin, Stéphane; Schbath, Sophie; Vandewalle, Vincent

    2007-01-01

    Background: Finding over- or under-represented motifs in biological sequences is now a common task in genomics. Thanks to p-value calculations for motif counts, exceptional motifs are identified and represent candidate functional motifs. The present work addresses the related question of comparing the exceptionality of one motif in two different sequences. Simply comparing the motif count p-values in each sequence is not sufficient to decide whether this motif is significantly more exceptional in one sequence than in the other; a statistical test is required. Results: We develop and analyze two statistical tests, an exact binomial one and an asymptotic likelihood ratio test, to decide whether the exceptionality of a given motif is equivalent or significantly different in two sequences of interest. For that purpose, motif occurrences are modeled by Poisson processes, with special care for overlapping motifs. Both tests can take the sequence compositions into account. As an illustration, we compare the octamer exceptionalities in the Escherichia coli K-12 backbone versus variable strain-specific loops. Conclusion: The exact binomial test is particularly adapted for small counts. For large counts, we advise using the likelihood ratio test, which is asymptotic but strongly correlated with the exact binomial test and very simple to use. PMID:17346349
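    A minimal sketch of the exact binomial idea, under the Poisson model for motif counts: conditional on the total count in the two sequences, the count in sequence 1 is binomial with a success probability fixed by the two expected counts under the null of equal exceptionality. The counts and expectations below are made up, and the sequence-composition and overlap corrections of the paper are not included.

```python
from scipy.stats import binomtest

# Observed motif counts and expected counts (e.g. from a background model) -- made-up numbers
n1, n2 = 87, 52          # observed occurrences in sequences 1 and 2
e1, e2 = 60.0, 55.0      # expected occurrences under the background model

# Under H0 (equal exceptionality), conditionally on n1 + n2 the count n1 is
# Binomial(n1 + n2, p0) with p0 proportional to the expected count in sequence 1.
p0 = e1 / (e1 + e2)
result = binomtest(n1, n=n1 + n2, p=p0, alternative="two-sided")
print(f"p-value = {result.pvalue:.4f}")
```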

  4. Statistical theory of diffusion in concentrated bcc and fcc alloys and concentration dependencies of diffusion coefficients in bcc alloys FeCu, FeMn, FeNi, and FeCr

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaks, V. G.; Khromov, K. Yu., E-mail: khromov-ky@nrcki.ru; Pankratov, I. R.

    2016-07-15

    The statistical theory of diffusion in concentrated bcc and fcc alloys with arbitrary pairwise interatomic interactions based on the master equation approach is developed. Vacancy–atom correlations are described using both the second-shell-jump and the nearest-neighbor-jump approximations which are shown to be usually sufficiently accurate. General expressions for Onsager coefficients in terms of microscopic interatomic interactions and some statistical averages are given. Both the analytical kinetic mean-field and the Monte Carlo methods for finding these averages are described. The theory developed is used to describe sharp concentration dependencies of diffusion coefficients in several iron-based alloy systems. For the bcc alloys FeCu, FeMn, and FeNi, we predict the notable increase of the iron self-diffusion coefficient with solute concentration c, up to several times, even though values of c possible for these alloys do not exceed some percent. For the bcc alloys FeCr at high temperatures T ≳ 1400 K, we show that the very strong and peculiar concentration dependencies of both tracer and chemical diffusion coefficients observed in these alloys can be naturally explained by the theory, without invoking exotic models discussed earlier.

  5. Forecasting volatility with neural regression: a contribution to model adequacy.

    PubMed

    Refenes, A N; Holt, W T

    2001-01-01

    Neural nets' usefulness for forecasting is limited by problems of overfitting and the lack of rigorous procedures for model identification, selection and adequacy testing. This paper describes a methodology for neural model misspecification testing. We introduce a generalization of the Durbin-Watson statistic for neural regression and discuss the general issues of misspecification testing using residual analysis. We derive a generalized influence matrix for neural estimators which enables us to evaluate the distribution of the statistic. We deploy Monte Carlo simulation to compare the power of the test for neural and linear regressors. While residual testing is not a sufficient condition for model adequacy, it is nevertheless a necessary condition to demonstrate that the model is a good approximation to the data generating process, particularly as neural-network estimation procedures are susceptible to partial convergence. The work is also an important step toward developing rigorous procedures for neural model identification, selection and adequacy testing which have started to appear in the literature. We demonstrate its applicability in the nontrivial problem of forecasting implied volatility innovations using high-frequency stock index options. Each step of the model building process is validated using statistical tests to verify variable significance and model adequacy with the results confirming the presence of nonlinear relationships in implied volatility innovations.
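    For reference, the ordinary Durbin-Watson statistic on a residual series is easy to compute; the sketch below applies it to residuals from a toy linear fit standing in for a fitted model. It does not reproduce the paper's generalized influence matrix for neural regressors, which is what makes the distribution of the statistic tractable in that setting.

```python
import numpy as np

def durbin_watson(residuals):
    """DW = sum((e_t - e_{t-1})^2) / sum(e_t^2); values near 2 indicate little lag-1 autocorrelation."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)    # toy data
coef = np.polyfit(x, y, deg=1)                      # toy linear fit standing in for the model
resid = y - np.polyval(coef, x)
print(f"DW = {durbin_watson(resid):.2f}")
```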

  6. A new hearing protector rating: The Noise Reduction Statistic for use with A weighting (NRSA).

    NASA Astrophysics Data System (ADS)

    Berger, Elliott H.; Gauger, Dan

    2004-05-01

    An important question to ask in regard to hearing protection devices (HPDs) is how much hearing protection they can provide. With respect to the law, at least, this question was answered in 1979 when the U.S. Environmental Protection Agency (EPA) promulgated a labeling regulation specifying a Noise Reduction Rating (NRR) measured in decibels (dB). In the intervening 25 years many concerns have arisen over this regulation, and the EPA is currently considering proposing a revised rule. This report examines the relevant issues in order to provide recommendations for new ratings and a new method of obtaining the test data. The conclusion is that a Noise Reduction Statistic for use with A weighting (NRSA), an A-A' rating computed in a manner that considers both intersubject and interspectrum variation in protection, yields sufficient precision. Two such statistics ought to be specified on the primary package label: the smaller one to indicate the protection that most users can be expected to exceed, and the larger one such that the range between the two numbers conveys to the user the uncertainty in the protection provided. Guidance on how to employ these numbers, and a suggestion for an additional, more precise, graphically oriented rating to be provided on a secondary label, are also included.

  7. Einstein-Podolsky-Rosen correlations and Bell correlations in the simplest scenario

    NASA Astrophysics Data System (ADS)

    Quan, Quan; Zhu, Huangjun; Fan, Heng; Yang, Wen-Li

    2017-06-01

    Einstein-Podolsky-Rosen (EPR) steering is an intermediate type of quantum nonlocality which sits between entanglement and Bell nonlocality. A set of correlations is Bell nonlocal if it does not admit a local hidden variable (LHV) model, while it is EPR nonlocal if it does not admit a local hidden variable-local hidden state (LHV-LHS) model. It is interesting to know what states can generate EPR-nonlocal correlations in the simplest nontrivial scenario, that is, two projective measurements for each party sharing a two-qubit state. Here we show that a two-qubit state can generate EPR-nonlocal full correlations (excluding marginal statistics) in this scenario if and only if it can generate Bell-nonlocal correlations. If full statistics (including marginal statistics) is taken into account, surprisingly, the same scenario can manifest the simplest one-way steering and the strongest hierarchy between steering and Bell nonlocality. To illustrate these intriguing phenomena in simple setups, several concrete examples are discussed in detail, which facilitates experimental demonstration. In the course of study, we introduce the concept of restricted LHS models and thereby derive a necessary and sufficient semidefinite-programming criterion to determine the steerability of any bipartite state under given measurements. Analytical criteria are further derived in several scenarios of strong theoretical and experimental interest.

  8. A statistical framework for applying RNA profiling to chemical hazard detection.

    PubMed

    Kostich, Mitchell S

    2017-12-01

    Use of 'omics technologies in environmental science is expanding. However, application is mostly restricted to characterizing molecular steps leading from toxicant interaction with molecular receptors to apical endpoints in laboratory species. Use in environmental decision-making is limited, due to difficulty in elucidating mechanisms in sufficient detail to make quantitative outcome predictions in any single species or in extending predictions to aquatic communities. Here we introduce a mechanism-agnostic statistical approach, supplementing mechanistic investigation by allowing probabilistic outcome prediction even when understanding of molecular pathways is limited, and facilitating extrapolation from results in laboratory test species to predictions about aquatic communities. We use concepts familiar to environmental managers, supplemented with techniques employed for clinical interpretation of 'omics-based biomedical tests. We describe the framework in step-wise fashion, beginning with single test replicates of a single RNA variant, then extending to multi-gene RNA profiling, collections of test replicates, and integration of complementary data. In order to simplify the presentation, we focus on using RNA profiling for distinguishing presence versus absence of chemical hazards, but the principles discussed can be extended to other types of 'omics measurements, multi-class problems, and regression. We include a supplemental file demonstrating many of the concepts using the open source R statistical package. Published by Elsevier Ltd.
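    As a schematic of the mechanism-agnostic idea (not the framework of the paper or its R supplement), the sketch below trains an off-the-shelf classifier on synthetic RNA profiles and returns out-of-fold probabilities of hazard presence via cross-validation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))            # 60 samples x 200 transcript abundances (synthetic)
y = rng.integers(0, 2, size=60)           # 1 = hazard present, 0 = absent (synthetic labels)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
# Out-of-fold class probabilities: each sample is scored by a model that never saw it,
# giving a probabilistic presence/absence call without any mechanistic pathway model.
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print(np.round(proba[:5], 2))
```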

  9. Falsifiability is not optional.

    PubMed

    LeBel, Etienne P; Berger, Derek; Campbell, Lorne; Loving, Timothy J

    2017-08-01

    Finkel, Eastwick, and Reis (2016; FER2016) argued that the post-2011 methodological reform movement has focused narrowly on replicability, neglecting other essential goals of research. We agree that multiple scientific goals are essential, but argue that a more fine-grained language, conceptualization, and approach to replication is needed to accomplish these goals. Replication is the general empirical mechanism for testing and falsifying theory. Sufficiently methodologically similar replications, also known as direct replications, test the basic existence of phenomena and ensure that cumulative progress is possible a priori. In contrast, increasingly methodologically dissimilar replications, also known as conceptual replications, test the relevance of auxiliary hypotheses (e.g., manipulation and measurement issues, contextual factors) required to productively investigate validity and generalizability. Without prioritizing replicability, a field is not empirically falsifiable. We also disagree with FER2016's position that "bigger samples are generally better, but . . . that very large samples could have the downside of commandeering resources that would have been better invested in other studies" (abstract). We identify problematic assumptions involved in FER2016's modifications of our original research-economic model, and present an improved model that quantifies when (and whether) it is reasonable to worry that increasing statistical power will engender potential trade-offs. Sufficiently powering studies (i.e., >80%) maximizes both research efficiency and confidence in the literature (research quality). Given that we are in agreement with FER2016 on all key open science points, we are eager to start seeing the accelerated rate of cumulative knowledge development of social psychological phenomena that such a sufficiently transparent, powered, and falsifiable approach will generate. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
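    The ">80% power" recommendation translates directly into a sample-size calculation; a minimal sketch with statsmodels, using an assumed (hypothetical) standardized effect size, is shown below.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical planning values: small-to-medium standardized effect, two-sided alpha = .05
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.80,
                                   alternative="two-sided")
print(f"~{n_per_group:.0f} participants per group for 80% power at d = 0.3")
```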

  10. Selection and Reporting of Statistical Methods to Assess Reliability of a Diagnostic Test: Conformity to Recommended Methods in a Peer-Reviewed Journal

    PubMed Central

    Park, Ji Eun; Han, Kyunghwa; Sung, Yu Sub; Chung, Mi Sun; Koo, Hyun Jung; Yoon, Hee Mang; Choi, Young Jun; Lee, Seung Soo; Kim, Kyung Won; Shin, Youngbin; An, Suah; Cho, Hyo-Min

    2017-01-01

    Objective: To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Materials and Methods: Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA) and the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Results: Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies, and regarding the model and assumptions of the intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Conclusion: Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology are necessary. PMID:29089821
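    For the ordinal-data case flagged above, a weighted kappa is one of the recommended statistics, and reporting the weighting scheme is part of the "sufficient detail" the survey calls for. A minimal sketch on made-up ratings from two readers:

```python
from sklearn.metrics import cohen_kappa_score

# Made-up ordinal scores (e.g. a 4-point scale) from two readers on 12 cases
reader1 = [1, 2, 2, 3, 4, 1, 2, 3, 3, 4, 1, 2]
reader2 = [1, 2, 3, 3, 4, 1, 1, 3, 2, 4, 1, 2]

# Quadratic weights penalize large disagreements more than adjacent-category ones;
# the choice of weighting scheme should be stated explicitly when reporting kappa.
kappa_w = cohen_kappa_score(reader1, reader2, weights="quadratic")
print(f"quadratic-weighted kappa = {kappa_w:.2f}")
```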

  11. Finite Element Analysis of Reverberation Chambers

    NASA Technical Reports Server (NTRS)

    Bunting, Charles F.; Nguyen, Duc T.

    2000-01-01

    The primary motivating factor behind the initiation of this work was to provide a deterministic means of establishing the validity of the statistical methods that are recommended for the determination of fields that interact in an avionics system. The application of finite element analysis to reverberation chambers is the initial step required to establish a reasonable course of inquiry in this particularly data-intensive study. The use of computational electromagnetics provides a high degree of control of the "experimental" parameters that can be utilized in a simulation of reverberating structures. As the work evolved, there were four primary focus areas: (1) the eigenvalue problem for the source-free problem; (2) the development of an efficient complex eigensolver; (3) the application of a source for the TE and TM fields for statistical characterization; and (4) the examination of shielding effectiveness in a reverberating environment. One early purpose of this work was to establish the utility of finite element techniques in the development of an extended low frequency statistical model for reverberation phenomena. By employing finite element techniques, structures of arbitrary complexity can be analyzed due to the use of triangular shape functions in the spatial discretization. The effects of both frequency stirring and mechanical stirring are presented. It is suggested that for low frequency operation the typical tuner size is inadequate to provide a sufficiently random field and that frequency stirring should be used. The results of the finite element analysis of the reverberation chamber illustrate the potential utility of a 2D representation for enhancing the basic statistical characteristics of the chamber when operating in a low frequency regime. The basic field statistics are verified for frequency stirring over a wide range of frequencies. Mechanical stirring is shown to provide an effective frequency deviation.
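    A toy version of focus area (1), the source-free eigenvalue problem: the sketch below uses a 5-point finite-difference Laplacian rather than finite elements, on a hypothetical 2D rectangular cavity with Dirichlet walls, and checks the lowest eigenvalues against the analytic values (mπ/a)^2 + (nπ/b)^2.

```python
import numpy as np

# Hypothetical 2D rectangular cavity with perfectly conducting (Dirichlet) walls
a, b = 1.0, 0.7                    # cavity dimensions (m)
nx, ny = 30, 21                    # interior grid points
hx, hy = a / (nx + 1), b / (ny + 1)

def idx(i, j):                     # map a 2D grid point to a matrix row/column
    return i * ny + j

# Assemble the 5-point finite-difference approximation of -Laplacian
A = np.zeros((nx * ny, nx * ny))
for i in range(nx):
    for j in range(ny):
        k = idx(i, j)
        A[k, k] = 2.0 / hx**2 + 2.0 / hy**2
        if i > 0:      A[k, idx(i - 1, j)] = -1.0 / hx**2
        if i < nx - 1: A[k, idx(i + 1, j)] = -1.0 / hx**2
        if j > 0:      A[k, idx(i, j - 1)] = -1.0 / hy**2
        if j < ny - 1: A[k, idx(i, j + 1)] = -1.0 / hy**2

numeric = np.sort(np.linalg.eigvalsh(A))[:4]                  # lowest k^2 eigenvalues
analytic = sorted((m * np.pi / a) ** 2 + (n * np.pi / b) ** 2
                  for m in range(1, 5) for n in range(1, 5))[:4]
print("numeric  k^2:", np.round(numeric, 1))
print("analytic k^2:", np.round(analytic, 1))
```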

  12. Measured, modeled, and causal conceptions of fitness

    PubMed Central

    Abrams, Marshall

    2012-01-01

    This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804

  13. Human dynamics scaling characteristics for aerial inbound logistics operation

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Guo, Jin-Li

    2010-05-01

    In recent years, the study of power-law scaling characteristics of real-life networks has attracted much interest from scholars; such behaviour deviates from the Poisson process. In this paper, we take the whole process of aerial inbound operation in a logistics company as the empirical object. The main aim of this work is to study the statistical scaling characteristics of the task-restricted work patterns. We found that the statistical variables exhibit a unimodal distribution with a power-law tail in all five statistical distributions examined - that is to say, there is a clear peak in each distribution, the left part is close to a Poisson distribution, and the right part shows heavy-tailed scaling statistics. Furthermore, to our surprise, there is only one distribution whose right part can be approximated by a power-law form with exponent α=1.50; the others are larger than 1.50 (three of the four are about 2.50, and one is about 3.00). We then draw two inferences from these empirical results. First, human behaviour is probably close to both Poisson statistics and power-law distributions at certain levels, and human-computer interaction behaviours may be the most common in logistics operational areas, and perhaps in task-restricted work patterns generally. Second, the hypothesis in Vázquez et al. (2006) [A. Vázquez, J. G. Oliveira, Z. Dezsö, K.-I. Goh, I. Kondor, A.-L. Barabási, Modeling bursts and heavy tails in human dynamics, Phys. Rev. E 73 (2006) 036127], which claimed that human dynamics can be classified into two discrete universality classes, is probably not sufficient; there may be a new human dynamics mechanism that differs from the classical Barabási models.
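    The tail exponents quoted above can be estimated with the standard continuous maximum-likelihood (Hill-type) estimator α = 1 + n / Σ ln(x_i / x_min). The sketch below applies it to synthetic inter-event times, since the empirical logistics data are not reproduced here.

```python
import numpy as np

def powerlaw_alpha(x, x_min):
    """Continuous MLE for the tail exponent of P(x) ~ x^-alpha, restricted to x >= x_min."""
    tail = np.asarray(x, dtype=float)
    tail = tail[tail >= x_min]
    return 1.0 + tail.size / np.sum(np.log(tail / x_min)), tail.size

rng = np.random.default_rng(0)
alpha_true, x_min = 2.5, 1.0
# Inverse-transform sampling of a pure power law (stand-in for inter-event times)
u = rng.uniform(size=10_000)
x = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

alpha_hat, n_tail = powerlaw_alpha(x, x_min)
print(f"alpha_hat = {alpha_hat:.2f} from {n_tail} tail events (true value 2.5)")
```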

  14. Weather extremes in very large, high-resolution ensembles: the weatherathome experiment

    NASA Astrophysics Data System (ADS)

    Allen, M. R.; Rosier, S.; Massey, N.; Rye, C.; Bowery, A.; Miller, J.; Otto, F.; Jones, R.; Wilson, S.; Mote, P.; Stone, D. A.; Yamazaki, Y. H.; Carrington, D.

    2011-12-01

    Resolution and ensemble size are often seen as alternatives in climate modelling. Models with sufficient resolution to simulate many classes of extreme weather cannot normally be run often enough to assess the statistics of rare events, still less how these statistics may be changing. As a result, assessments of the impact of external forcing on regional climate extremes must be based either on statistical downscaling from relatively coarse-resolution models, or statistical extrapolation from 10-year to 100-year events. Under the weatherathome experiment, part of the climateprediction.net initiative, we have compiled the Met Office Regional Climate Model HadRM3P to run at 25 and 50 km resolution on personal computers volunteered by the general public, embedded within the HadAM3P global atmosphere model. With a global network of about 50,000 volunteers, this allows us to run time-slice ensembles of essentially unlimited size, exploring the statistics of extreme weather under a range of scenarios for surface forcing and atmospheric composition, allowing for uncertainty in both boundary conditions and model parameters. Current experiments, developed with the support of Microsoft Research, focus on three regions: the Western USA, Europe and Southern Africa. We initially simulate the period 1959-2010 to establish which variables are realistically simulated by the model and on what scales. Our next experiments are focussing on the Event Attribution problem, exploring how the probability of various types of extreme weather would have been different over the recent past in a world unaffected by human influence, following the design of Pall et al (2011), but extended to a longer period and higher spatial resolution. We will present the first results of this unique, global, participatory experiment and discuss the implications for the attribution of recent weather events to anthropogenic influence on climate.

  15. Recruitment of Older Adults: Success May Be in the Details

    PubMed Central

    McHenry, Judith C.; Insel, Kathleen C.; Einstein, Gilles O.; Vidrine, Amy N.; Koerner, Kari M.; Morrow, Daniel G.

    2015-01-01

    Purpose: Describe recruitment strategies used in a randomized clinical trial of a behavioral prospective memory intervention to improve medication adherence for older adults taking antihypertensive medication. Results: Recruitment strategies represent 4 themes: accessing an appropriate population, communication and trust-building, providing comfort and security, and expressing gratitude. Recruitment activities resulted in 276 participants with a mean age of 76.32 years, and study enrollment included 207 women, 69 men, and 54 persons representing ethnic minorities. Recruitment success was linked to cultivating relationships with community-based organizations, face-to-face contact with potential study participants, and providing service (e.g., blood pressure checks) as an access point to eligible participants. Seventy-two percent of potential participants who completed a follow-up call and met eligibility criteria were enrolled in the study. The attrition rate was 14.34%. Implications: The projected increase in the number of older adults intensifies the need to study interventions that improve health outcomes. The challenge is to recruit sufficient numbers of participants who are also representative of older adults to test these interventions. Failing to recruit a sufficient and representative sample can compromise statistical power and the generalizability of study findings. PMID:22899424

  16. Evaluation of the Association between Persistent Organic ...

    EPA Pesticide Factsheets

    Background: Diabetes is a major threat to public health in the United States and worldwide. Understanding the role of environmental chemicals in the development or progression of diabetes is an emerging issue in environmental health. Objective: We assessed the epidemiologic literature for evidence of associations between persistent organic pollutants (POPs) and type 2 diabetes. Methods: Using a PubMed search and reference lists from relevant studies or review articles, we identified 72 epidemiological studies that investigated associations of persistent organic pollutants (POPs) with diabetes. We evaluated these studies for consistency, strengths and weaknesses of study design (including power and statistical methods), clinical diagnosis, exposure assessment, study population characteristics, and identification of data gaps and areas for future research. Conclusions: Heterogeneity of the studies precluded conducting a meta-analysis, but the overall evidence is sufficient for a positive association of some organochlorine POPs with type 2 diabetes. Collectively, these data are not sufficient to establish causality. Initial data mining revealed that the strongest positive correlation of diabetes with POPs occurred with organochlorine compounds, such as trans-nonachlor, dichlorodiphenyldichloroethylene (DDE), polychlorinated biphenyls (PCBs), and dioxins and dioxin-like chemicals. There is less indication of an association between other nonorganochlorine POPs, such as

  17. Space station integrated wall damage and penetration damage control. Task 5: Space debris measurement, mapping and characterization system

    NASA Technical Reports Server (NTRS)

    Lempriere, B. M.

    1987-01-01

    The procedures and results of a study of a conceptual system for measuring the debris environment on the space station is discussed. The study was conducted in two phases: the first consisted of experiments aimed at evaluating location of impact through panel response data collected from acoustic emission sensors; the second analyzed the available statistical description of the environment to determine the probability of the measurement system producing useful data, and analyzed the results of the previous tests to evaluate the accuracy of location and the feasibility of extracting impactor characteristics from the panel response. The conclusions were that for one panel the system would not be exposed to any event, but that the entire Logistics Module would provide a modest amount of data. The use of sensors with higher sensitivity than those used in the tests could be advantageous. The impact location could be found with sufficient accuracy from panel response data. The waveforms of the response were shown to contain information on the impact characteristics, but the data set did not span a sufficient range of the variables necessary to evaluate the feasibility of extracting the information.

  18. Community reintegration in rehabilitated South Indian persons with spinal cord injury.

    PubMed

    Samuelkamaleshkumar, Selvaraj; Radhika, Somasundaram; Cherian, Binu; Elango, Aarumugam; Winrose, Windsor; Suhany, Baby T; Prakash, M Henry

    2010-07-01

    To explore community reintegration in rehabilitated South Indian persons with spinal cord injury (SCI) and to compare the level of community reintegration based on demographic variables. Survey. Rehabilitation center of a tertiary care university teaching hospital. Community-dwelling persons with SCI (N=104). Not applicable. Craig Handicap Assessment and Reporting Technique (CHART). The mean scores for each CHART domain were physical independence 98+/-5, social integration 96+/-11, cognitive independence 92+/-17, occupation 70+/-34, mobility 65+/-18, and economic self-sufficiency 53+/-40. Demographic variables showed no statistically significant difference with any of the CHART domains except for age and mobility, level of education, and social integration. Persons with SCI in rural South India who have completed comprehensive, mostly self-financed, rehabilitation with an emphasis on achieving functional ambulation, family support, and self-employment and who attend a regular annual follow-up show a high level of community reintegration in physical independence, social integration, and cognitive independence. CHART scores in the domains of occupation, mobility, and economic self-sufficiency showed lower levels of community reintegration. Copyright 2010 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  19. Sufficient Condition for Finite-Time Singularity in a High-Symmetry Euler Flow

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, A.; Ng, C. S.

    1997-11-01

    The possibility of a finite-time singularity (FTS) with a smooth initial condition is considered in a high-symmetry Euler flow (the Kida flow). It has been shown recently [C. S. Ng and A. Bhattacharjee, Phys. Rev. E 54, 1530 (1996)] that there must be a FTS if the fourth-order pressure derivative (p_xxxx) is always positive within a finite range X on the x-axis around the origin. This sufficient condition is now extended to the case when the range X is itself time-dependent. It is shown that a FTS must still exist even when X → 0, provided the value of p_xxxx at the origin grows faster than X^-2. It is tested statistically that p_xxxx at the origin is most probably positive for a Kida flow with random Fourier amplitudes and that it generally grows as energy cascades to Fourier modes with higher wavenumbers k. The condition that p_xxxx grows faster than X^-2 is found to be satisfied when the spectral index ν of the energy spectrum E(k) ∝ k^-ν of the random flow is less than 3.

  20. Silicon drift detectors as a tool for time-resolved fluorescence XAFS on low-concentrated samples in catalysis.

    PubMed

    Kappen, Peter; Tröger, Larc; Materlik, Gerhard; Reckleben, Christian; Hansen, Karsten; Grunwaldt, Jan-Dierk; Clausen, Bjerne S

    2002-07-01

    A silicon drift detector (SDD) was used for ex situ and time-resolved in situ fluorescence X-ray absorption fine structure (XAFS) on low-concentrated catalyst samples. For a single-element and a seven-element SDD the energy resolution and the peak-to-background ratio were verified at high count rates, sufficient for fluorescence XAFS. An experimental set-up including the seven-element SDD without any cooling and an in situ cell with gas supply and on-line gas analysis was developed. With this set-up the reduction and oxidation of a zeolite supported catalyst containing 0.3 wt% platinum was followed by fluorescence near-edge scans with a time resolution of 10 min each. From ex situ experiments on low-concentrated platinum- and gold-based catalysts fluorescence XAFS scans could be obtained with sufficient statistical quality for a quantitative analysis. Structural information on the gold and platinum particles could be extracted by both the Fourier transforms and the near-edge region of the XAFS spectra. Moreover, it was found that with the seven-element SDD concentrations of the element of interest as low as 100 ppm can be examined by fluorescence XAFS.

  1. No independent association between insufficient sleep and childhood obesity in the National Survey of Children's Health.

    PubMed

    Hassan, Fauziya; Davis, Matthew M; Chervin, Ronald D

    2011-04-15

    Prior studies have supported an association between insufficient sleep and childhood obesity, but most have not examined nationally representative samples or considered potential sociodemographic confounders. The main objective of this study was to use a large, nationally representative dataset to examine the possibility that insufficient sleep is associated with obesity in children, independent of sociodemographic factors. The National Survey of Children's Health is a national survey of U.S. households contacted by random digit dialing. In 2003, caregivers of 102,353 US children were surveyed. Age- and sex-specific body mass index (BMI), based on parental report of child height and weight, was available for 81,390 children aged 6-17 years. Caregivers were asked, "How many nights of sufficient sleep did your child have in the past week?" The odds of obesity (BMI ≥ 95th percentile) versus healthy weight (BMI 5th-84th percentile) were regressed on reported nights of sufficient sleep per week (categorized as 0-2, 3-5, or 6-7). Sociodemographic variables included gender, race, household education, and family income. Analyses incorporated sampling weights to derive nationally representative estimates for a 2003 population of 34 million youth. Unadjusted bivariate analyses indicated that children aged 6-11 years with 0-2 nights of sufficient sleep, in comparison to those with 6-7 nights, were more likely to be obese (OR = 1.7, 95% CI [1.2-2.3]). Among children aged 12-17 years, odds of obesity were lower among children with 3-5 nights of sufficient sleep in comparison to those with 6-7 nights (OR = 0.8, 95% CI [0.7-0.9]). However, in both age groups, adjustment for race/ethnicity, gender, family income, and household education left no remaining statistical significance for the association between sufficient nights of sleep and BMI. In this national sample, insufficient sleep, as judged by parents, is inconsistently associated with obesity in bivariate analyses, and not associated with obesity after adjustment for sociodemographic variables. These findings from a nationally representative sample are necessarily subject to parental perceptions, but nonetheless serve as an important reminder that the role of insufficient sleep in the childhood obesity epidemic remains unproven.
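    A schematic of the adjusted analysis, on entirely synthetic data and without the NSCH survey design: obesity is regressed on dummy variables for the sufficient-sleep category plus a sociodemographic covariate, with the sampling weights passed as per-observation weights.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
sleep_cat = rng.integers(0, 3, size=n)            # 0: 0-2, 1: 3-5, 2: 6-7 sufficient nights (synthetic)
income_z = rng.normal(size=n)                     # synthetic sociodemographic covariate
weights = rng.uniform(0.5, 2.0, size=n)           # synthetic sampling weights

# Synthetic outcome in which the covariate, not sleep, drives obesity
p = 1.0 / (1.0 + np.exp(1.5 - 0.4 * income_z))
obese = rng.binomial(1, p)

X = np.column_stack([(sleep_cat == 0).astype(float),    # dummy: 0-2 nights (reference: 6-7)
                     (sleep_cat == 1).astype(float),    # dummy: 3-5 nights
                     income_z])
model = LogisticRegression().fit(X, obese, sample_weight=weights)
print("adjusted log-odds (0-2 nights, 3-5 nights, covariate):", np.round(model.coef_[0], 2))
```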

  2. Feature-Based Statistical Analysis of Combustion Simulation Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, J; Krishnamoorthy, V; Liu, S

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion science; however, it is applicable to many other science domains.

  3. A knowledge-based T2-statistic to perform pathway analysis for quantitative proteomic data

    PubMed Central

    Chen, Yi-Hau

    2017-01-01

    Approaches to identify significant pathways from high-throughput quantitative data have been developed in recent years. Still, the analysis of proteomic data remains difficult because of limited sample size. This limitation also leads to the common practice of using a competitive null, which fundamentally treats genes or proteins as independent units. The independence assumption ignores the associations among biomolecules with similar functions or cellular localization, as well as the interactions among them manifested as changes in expression ratios. Consequently, these methods often underestimate the associations among biomolecules and cause false positives in practice. Some studies incorporate the sample covariance matrix into the calculation to address this issue. However, sample covariance may not be a precise estimation if the sample size is very limited, which is usually the case for the data produced by mass spectrometry. In this study, we introduce a multivariate test under a self-contained null to perform pathway analysis for quantitative proteomic data. The covariance matrix used in the test statistic is constructed from the confidence scores retrieved from the STRING database or the HitPredict database. We also design an integrating procedure to retain pathways of sufficient evidence as a pathway group. The performance of the proposed T2-statistic is demonstrated using five published experimental datasets: the T-cell activation, the cAMP/PKA signaling, the myoblast differentiation, and the effect of dasatinib on the BCR-ABL pathway are proteomic datasets produced by mass spectrometry; and the protective effect of myocilin via the MAPK signaling pathway is a gene expression dataset of limited sample size. Compared with other popular statistics, the proposed T2-statistic yields more accurate descriptions in agreement with the discussion of the original publication. We implemented the T2-statistic into an R package T2GA, which is available at https://github.com/roqe/T2GA. PMID:28622336
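    The general form of such a knowledge-based quadratic-form statistic can be sketched as follows; this is an illustration of the idea, not the exact construction in the paper or the T2GA package. The per-protein changes and the confidence-derived association matrix are placeholders standing in for STRING/HitPredict-derived values.

```python
import numpy as np
from scipy.stats import chi2

# Standardized log-ratio changes for the proteins of one pathway (made-up values)
z = np.array([1.8, 2.1, 0.4, 1.2, 1.6])

# "Knowledge-based" association matrix: diagonal 1, off-diagonals placeholders for
# interaction confidence scores (diagonally dominant, hence positive definite)
conf = np.array([[1.0, 0.4, 0.1, 0.2, 0.2],
                 [0.4, 1.0, 0.2, 0.1, 0.1],
                 [0.1, 0.2, 1.0, 0.1, 0.1],
                 [0.2, 0.1, 0.1, 1.0, 0.3],
                 [0.2, 0.1, 0.1, 0.3, 1.0]])

# Quadratic-form statistic under a self-contained null; a small ridge keeps the
# matrix safely invertible even if the scores are noisy.
R = conf + 1e-6 * np.eye(len(z))
t2 = z @ np.linalg.solve(R, z)
p_value = chi2.sf(t2, df=len(z))
print(f"T2 = {t2:.2f}, p = {p_value:.4f}")
```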

  4. A knowledge-based T2-statistic to perform pathway analysis for quantitative proteomic data.

    PubMed

    Lai, En-Yu; Chen, Yi-Hau; Wu, Kun-Pin

    2017-06-01

    Approaches to identify significant pathways from high-throughput quantitative data have been developed in recent years. Still, the analysis of proteomic data remains difficult because of limited sample size. This limitation also leads to the common practice of using a competitive null, which fundamentally treats genes or proteins as independent units. The independence assumption ignores the associations among biomolecules with similar functions or cellular localization, as well as the interactions among them manifested as changes in expression ratios. Consequently, these methods often underestimate the associations among biomolecules and cause false positives in practice. Some studies incorporate the sample covariance matrix into the calculation to address this issue. However, sample covariance may not be a precise estimation if the sample size is very limited, which is usually the case for the data produced by mass spectrometry. In this study, we introduce a multivariate test under a self-contained null to perform pathway analysis for quantitative proteomic data. The covariance matrix used in the test statistic is constructed from the confidence scores retrieved from the STRING database or the HitPredict database. We also design an integrating procedure to retain pathways of sufficient evidence as a pathway group. The performance of the proposed T2-statistic is demonstrated using five published experimental datasets: the T-cell activation, the cAMP/PKA signaling, the myoblast differentiation, and the effect of dasatinib on the BCR-ABL pathway are proteomic datasets produced by mass spectrometry; and the protective effect of myocilin via the MAPK signaling pathway is a gene expression dataset of limited sample size. Compared with other popular statistics, the proposed T2-statistic yields more accurate descriptions in agreement with the discussion of the original publication. We implemented the T2-statistic into an R package T2GA, which is available at https://github.com/roqe/T2GA.

  5. Statistical approach for selection of biologically informative genes.

    PubMed

    Das, Samarendra; Rai, Anil; Mishra, D C; Rai, Shesh N

    2018-05-20

    Selection of informative genes from high-dimensional gene expression data has emerged as an important research area in genomics. Many gene selection techniques proposed so far are based on either a relevancy or a redundancy measure, and their performance has typically been judged through post-selection classification accuracy computed by a classifier using the selected genes. This performance metric may be statistically sound but may not be biologically relevant. A statistical approach, Boot-MRMR, was proposed based on a composite measure of maximum relevance and minimum redundancy, which is both statistically sound and biologically relevant for informative gene selection. For comparative evaluation of the proposed approach, we developed two biologically grounded criteria, i.e. Gene Set Enrichment with QTL (GSEQ) and a biological similarity score based on Gene Ontology (GO). Further, a systematic and rigorous evaluation of the proposed technique against 12 existing gene selection techniques was carried out using five gene expression datasets. This evaluation was based on a broad spectrum of statistically sound (e.g. subject classification) and biologically relevant (based on QTL and GO) criteria under a multiple-criteria decision-making framework. The performance analysis showed that the proposed technique selects informative genes which are more biologically relevant. The proposed technique is also quite competitive with the existing techniques with respect to subject classification and computational time. Our results also showed that, under the multiple-criteria decision-making setup, the proposed technique is best for informative gene selection over the available alternatives. Based on the proposed approach, an R package, BootMRMR, has been developed and is available at https://cran.r-project.org/web/packages/BootMRMR. This study will provide a practical guide for selecting statistical techniques for identifying informative genes from high-dimensional expression data for breeding and systems biology studies. Published by Elsevier B.V.
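    The relevance-redundancy trade-off at the core of such methods can be illustrated with a bare-bones greedy mRMR pass (no bootstrap and none of the QTL/GO criteria), using F-statistics for relevance and absolute correlation for redundancy; everything below is synthetic.

```python
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(0)
n_samples, n_genes, n_select = 80, 500, 10
X = rng.normal(size=(n_samples, n_genes))          # synthetic expression matrix
y = rng.integers(0, 2, size=n_samples)             # synthetic class labels (e.g. trait groups)

F = f_classif(X, y)[0]                              # F-statistic of each gene vs. the class
relevance = (F - F.min()) / (F.max() - F.min())     # rescale so the two terms are comparable
corr = np.abs(np.corrcoef(X, rowvar=False))         # gene-gene redundancy

selected = [int(np.argmax(relevance))]              # start with the most relevant gene
while len(selected) < n_select:
    candidates = [g for g in range(n_genes) if g not in selected]
    # mRMR score: relevance minus mean absolute correlation with already-selected genes
    scores = [relevance[g] - corr[g, selected].mean() for g in candidates]
    selected.append(candidates[int(np.argmax(scores))])

print("selected gene indices:", selected)
```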

  6. Absolute mass scale calibration in the inverse problem of the physical theory of fireballs.

    NASA Astrophysics Data System (ADS)

    Kalenichenko, V. V.

    A method of absolute mass scale calibration is suggested for solving the inverse problem of the physical theory of fireballs. The method is based on data on the masses of fallen meteorites whose fireballs have been photographed in flight. The method may be applied to those fireballs whose bodies have not experienced considerable fragmentation during their destruction in the atmosphere and have kept their form well enough. Statistical analysis of the inverse problem solution for a sufficiently representative sample makes it possible to separate a subsample of such fireballs. The data on the Lost City and Innisfree meteorites are used to obtain calibration coefficients.

  7. NURE aerial gamma-ray and magnetic reconnaissance survey of portions of New Mexico, Arizona and Texas. Volume II. New Mexico-Roswell NI 13-8 quadrangle. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The results of a high-sensitivity, aerial gamma-ray spectrometer and magnetometer survey of the Roswell two degree quadrangle, New Mexico, are presented. Instrumentation and methods are described in Volume I of this final report. The work was done by Carson Helicopters, Inc., and Carson Helicopters was assisted in the interpretation by International Exploration, Inc. The work was performed for the US Department of Energy - National Uranium Resource Evaluation (NURE) program. Analysis of this radiometric data yielded 238 statistically significant eU anomalies. Of this number, seventy-four were considered to be of sufficient strength to warrant further investigation.

  8. An absolute chronology for early Egypt using radiocarbon dating and Bayesian statistical modelling

    PubMed Central

    Dee, Michael; Wengrow, David; Shortland, Andrew; Stevenson, Alice; Brock, Fiona; Girdland Flink, Linus; Bronk Ramsey, Christopher

    2013-01-01

    The Egyptian state was formed prior to the existence of verifiable historical records. Conventional dates for its formation are based on the relative ordering of artefacts. This approach is no longer considered sufficient for cogent historical analysis. Here, we produce an absolute chronology for Early Egypt by combining radiocarbon and archaeological evidence within a Bayesian paradigm. Our data cover the full trajectory of Egyptian state formation and indicate that the process occurred more rapidly than previously thought. We provide a timeline for the First Dynasty of Egypt of generational-scale resolution that concurs with prevailing archaeological analysis and produce a chronometric date for the foundation of Egypt that distinguishes between historical estimates. PMID:24204188

  9. Refraction of coastal ocean waves

    NASA Technical Reports Server (NTRS)

    Shuchman, R. A.; Kasischke, E. S.

    1981-01-01

    Refraction of gravity waves in the coastal area off Cape Hatteras, NC as documented by synthetic aperture radar (SAR) imagery from Seasat orbit 974 (collected on September 3, 1978) is discussed. An analysis of optical Fourier transforms (OFTs) from more than 70 geographical positions yields estimates of wavelength and wave direction for each position. In addition, independent estimates of the same two quantities are calculated using two simple theoretical wave-refraction models. The OFT results are then compared with the theoretical results. A statistical analysis shows a significant degree of linear correlation between the data sets. This is considered to indicate that the Seasat SAR produces imagery whose clarity is sufficient to show the refraction of gravity waves in shallow water.

  10. Branching-ratio approximation for the self-exciting Hawkes process

    NASA Astrophysics Data System (ADS)

    Hardiman, Stephen J.; Bouchaud, Jean-Philippe

    2014-12-01

    We introduce a model-independent approximation for the branching ratio of Hawkes self-exciting point processes. Our estimator requires knowing only the mean and variance of the event count in a sufficiently large time window, statistics that are readily obtained from empirical data. The method we propose greatly simplifies the estimation of the Hawkes branching ratio, recently proposed as a proxy for market endogeneity and formerly estimated using numerical likelihood maximization. We employ our method to support recent theoretical and experimental results indicating that the best fitting Hawkes model to describe S&P futures price changes is in fact critical (now and in the recent past) in light of the long memory of financial market activity.
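
    The estimator rests on the fact that, for a stationary Hawkes process observed in windows much longer than the kernel's memory, the variance-to-mean ratio of the event count approaches 1/(1 - n)^2, so the branching ratio can be read off as n = 1 - sqrt(mean/variance). The sketch below is a minimal illustration of that relation on simulated data; the simulation parameters and window length are arbitrary choices, and real applications would use empirical event times.

      import numpy as np

      def branching_ratio(event_times, window):
          """Estimate the Hawkes branching ratio from counts in large windows,
          using the asymptotic relation Var(N_W)/E[N_W] ~ 1/(1 - n)^2."""
          edges = np.arange(0.0, event_times.max(), window)
          counts, _ = np.histogram(event_times, bins=edges)
          fano = counts.var(ddof=1) / counts.mean()
          return 1.0 - 1.0 / np.sqrt(fano)

      def simulate_hawkes(mu, n, tau, t_max, rng):
          """Simulate a Hawkes process via its branching (cluster) representation:
          immigrants arrive at rate mu; each event spawns Poisson(n) children
          with exponential(tau) delays, recursively."""
          gen = [t for t in np.cumsum(rng.exponential(1.0 / mu, size=int(2 * mu * t_max)))
                 if t < t_max]
          events = []
          while gen:
              events.extend(gen)
              nxt = []
              for t in gen:
                  for d in rng.exponential(tau, size=rng.poisson(n)):
                      if t + d < t_max:
                          nxt.append(t + d)
              gen = nxt
          return np.sort(np.array(events))

      rng = np.random.default_rng(2)
      events = simulate_hawkes(mu=1.0, n=0.7, tau=0.1, t_max=20000.0, rng=rng)
      print(branching_ratio(events, window=200.0))   # typically close to 0.7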

  11. Vocational students' learning preferences: the interpretability of ipsative data.

    PubMed

    Smith, P J

    2000-02-01

    A number of researchers have argued that ipsative data are not suitable for statistical procedures designed for normative data. Others have argued that the interpretability of such analyses of ipsative data is little affected where the number of variables and the sample size are sufficiently large. The research reported here represents a factor analysis of the scores on the Canfield Learning Styles Inventory for 1,252 students in vocational education. The results of the factor analysis of these ipsative data were examined in the context of existing theory and research on vocational students and lend support to the argument that the factor analysis of ipsative data can provide sensibly interpretable results.

  12. Results from the MWA EoR Experiment

    NASA Astrophysics Data System (ADS)

    Webster, Rachel L.; MWA EoR Collaboration

    2018-05-01

    The MWA EoR experiment is one of a small handful of experiments designed to detect the statistical signal from the Epoch of Reionisation. Each of these experiments has reached a level of maturity where the challenges, in particular of foreground removal, are being more fully understood. Over the past decade, the MWA EoR Collaboration has developed expertise and an understanding of the elements of the telescope array, the end-to-end pipelines, ionospheric conditions, and the foreground emissions. Sufficient data have been collected to detect the theoretically predicted EoR signal. Limits have been published regularly; however, we are still several orders of magnitude from a possible detection. This paper outlines recent progress and indicates directions for future efforts.

  13. Possible Explanation for Cancer in Rats due to Cell Phone Radio Frequency Radiation

    NASA Astrophysics Data System (ADS)

    Feldman, Bernard J.

    Very recently, the National Toxicology Program reported a correlation between exposure to whole-body 900 MHz radio frequency radiation and cancer in the brains and hearts of Sprague Dawley male rats. Assuming that the National Toxicology Program result is statistically significant, I propose the following explanation for these findings. The neurons around the brain and heart form closed electrical circuits and, following Faraday's Law, 900 MHz radio frequency radiation induces 900 MHz electrical currents in these neural circuits. In turn, these 900 MHz currents in the neural circuits generate sufficient localized heat in the neural cells to shift the equilibrium concentration of carcinogenic radicals to higher levels and thus to higher incidences of cancer.

  14. An Adaptive Technique for a Redundant-Sensor Navigation System. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chien, T. T.

    1972-01-01

    An on-line adaptive technique is developed to provide a self-contained redundant-sensor navigation system with a capability to utilize its full potential in reliability and performance. The gyro navigation system is modeled as a Gauss-Markov process, with degradation modes defined as changes in characteristics specified by parameters associated with the model. The adaptive system is formulated as a multistage stochastic process: (1) a detection system, (2) an identification system and (3) a compensation system. It is shown that the sufficient statistic for the partially observable process in the detection and identification systems is the posterior measure of the state of degradation, conditioned on the measurement history.

  15. Rule of Thumb Proposing the Size of Aperture Expected to be Sufficient to Resolve Double Stars with Given Parameters

    NASA Astrophysics Data System (ADS)

    Knapp, Wilfried

    2018-01-01

    Visual observation of double stars is an anachronistic passion especially attractive for amateurs looking for sky objects suitable for visual observation even in light-polluted areas. Session planning then requires a basic idea of which objects might be suitable for a given piece of equipment; this question is a long-term issue for visual double star observers and is obviously not easy to answer, especially for components of unequal brightness. Based on a reasonably large database of limited-aperture observations (done with variable-aperture equipment such as iris diaphragms or aperture masks), a heuristic approach is used to derive a statistically well-founded Rule of Thumb formula.

  16. Computer-assisted detection of epileptiform focuses on SPECT images

    NASA Astrophysics Data System (ADS)

    Grzegorczyk, Dawid; Dunin-Wąsowicz, Dorota; Mulawka, Jan J.

    2010-09-01

    Epilepsy is a common nervous system disease, often related to consciousness disturbances and muscular spasm, which affects about 1% of the human population. Despite major technological advances in medicine in recent years, there has been insufficient progress towards overcoming it. Application of advanced statistical methods and computer image analysis offers hope for accurate detection and later removal of epileptiform focuses which are the cause of some types of epilepsy. The aim of this work was to create a computer system that would help to find and diagnose disorders of blood circulation in the brain. This may be helpful for diagnosing the onset of epileptic seizures in the brain.

  17. Efficient kinetic Monte Carlo method for reaction-diffusion problems with spatially varying annihilation rates

    NASA Astrophysics Data System (ADS)

    Schwarz, Karsten; Rieger, Heiko

    2013-03-01

    We present an efficient Monte Carlo method to simulate reaction-diffusion processes with spatially varying particle annihilation or transformation rates as it occurs for instance in the context of motor-driven intracellular transport. Like Green's function reaction dynamics and first-passage time methods, our algorithm avoids small diffusive hops by propagating sufficiently distant particles in large hops to the boundaries of protective domains. Since for spatially varying annihilation or transformation rates the single particle diffusion propagator is not known analytically, we present an algorithm that generates efficiently either particle displacements or annihilations with the correct statistics, as we prove rigorously. The numerical efficiency of the algorithm is demonstrated with an illustrative example.

  18. MEMS reliability: The challenge and the promise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, W.M.; Tanner, D.M.; Miller, S.L.

    1998-05-01

    MicroElectroMechanical Systems (MEMS) that think, sense, act and communicate will open up a broad new array of cost-effective solutions only if they prove to be sufficiently reliable. A valid reliability assessment of MEMS has three prerequisites: (1) statistical significance; (2) a technique for accelerating fundamental failure mechanisms; and (3) valid physical models to allow prediction of failures during actual use. These already exist for the microelectronics portion of such integrated systems. The challenge lies in the less well understood micromachine portions and their synergistic effects with microelectronics. This paper presents a methodology addressing these prerequisites and a description of the underlying physics of reliability for micromachines.

  19. Protein abundances can distinguish between naturally-occurring and laboratory strains of Yersinia pestis, the causative agent of plague

    DOE PAGES

    Merkley, Eric D.; Sego, Landon H.; Lin, Andy; ...

    2017-08-30

    Adaptive processes in bacterial species can occur rapidly in laboratory culture, leading to genetic divergence between naturally occurring and laboratory-adapted strains. Differentiating wild and closely-related laboratory strains is clearly important for biodefense and bioforensics; however, DNA sequence data alone has thus far not provided a clear signature, perhaps due to lack of understanding of how diverse genome changes lead to adapted phenotypes. Protein abundance profiles from mass spectrometry-based proteomics analyses are a molecular measure of phenotype. Proteomics data contains sufficient information that powerful statistical methods can uncover signatures that distinguish wild strains of Yersinia pestis from laboratory-adapted strains.

  20. Mandate-Based Health Reform and the Labor Market: Evidence from the Massachusetts Reform*

    PubMed Central

    Kolstad, Jonathan T.; Kowalski, Amanda E.

    2016-01-01

    We model the labor market impact of the key provisions of the national and Massachusetts “mandate-based” health reforms: individual mandates, employer mandates, and subsidies. We characterize the compensating differential for employer-sponsored health insurance (ESHI) and the welfare impact of reform in terms of “sufficient statistics.” We compare welfare under mandate-based reform to welfare in a counterfactual world where individuals do not value ESHI. Relying on the Massachusetts reform, we find that jobs with ESHI pay $2,812 less annually, somewhat less than the cost of ESHI to employers. Accordingly, the deadweight loss of mandate-based health reform was approximately 8 percent of its potential size. PMID:27037897

  1. Investigation of Structure in the Light Curves of a Sample of Newly Discovered Long Period Variable Stars

    NASA Astrophysics Data System (ADS)

    Craine, E. R.; Culver, R. B.; Eykholt, R.; Flurchick, K. M.; Kraus, A. L.; Tucker, R. A.; Walker, D. K.

    2015-09-01

    Long period variable stars exhibit hump structures, and possibly flares, in their light curves. While the existence of humps is not controversial, the presence of flaring activity is less clear. Mining of a sky survey database of new variable star discoveries (the first MOTESS-GNAT Variable Star Catalog (MG1-VSC)) has led to identification of 47 such stars for which there are sufficient data to explore the presence of anomalous light curve features. We find a number of hump structures, and see one possible flare, suggesting that they are rare events. We present light curves and measured parameters for these stars, and a population statistical analysis.

  2. Conservative classical and quantum resolution limits for incoherent imaging

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    2018-06-01

    I propose classical and quantum limits to the statistical resolution of two incoherent optical point sources from the perspective of minimax parameter estimation. Unlike earlier results based on the Cramér-Rao bound (CRB), the limits proposed here, based on the worst-case error criterion and a Bayesian version of the CRB, are valid for any biased or unbiased estimator and obey photon-number scalings that are consistent with the behaviours of actual estimators. These results prove that, from the minimax perspective, the spatial-mode demultiplexing measurement scheme recently proposed by Tsang, Nair, and Lu [Phys. Rev. X 6, 031033 (2016)] remains superior to direct imaging for sufficiently high photon numbers.

  3. Sharp predictions from eternal inflation patches in D-brane inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertog, Thomas; Janssen, Oliver, E-mail: thomas.hertog@fys.kuleuven.be, E-mail: opj202@nyu.edu

    We numerically generate the six-dimensional landscape of D3-brane inflation and identify patches of eternal inflation near sufficiently flat inflection points of the potential. We show that reasonable measures that select patches of eternal inflation in the landscape yield sharp predictions for the spectral properties of primordial perturbations on observable scales. These include a scalar tilt of 0.936, a running of the scalar tilt −0.00103, undetectably small tensors and non-Gaussianity, and no observable spatial curvature. Our results explicitly demonstrate that precision cosmology probes the combination of the statistical properties of the string landscape and the measure implied by the universe's quantum state.

  4. Note: A pure-sampling quantum Monte Carlo algorithm with independent Metropolis.

    PubMed

    Vrbik, Jan; Ospadov, Egor; Rothstein, Stuart M

    2016-07-14

    Recently, Ospadov and Rothstein published a pure-sampling quantum Monte Carlo algorithm (PSQMC) that features an auxiliary Path Z that connects the midpoints of the current and proposed Paths X and Y, respectively. When sufficiently long, Path Z provides statistical independence of Paths X and Y. Under those conditions, the Metropolis decision used in PSQMC is done without any approximation, i.e., not requiring microscopic reversibility and without having to introduce any G(x → x'; τ) factors into its decision function. This is a unique feature that contrasts with all competing reptation algorithms in the literature. An example illustrates that dependence of Paths X and Y has adverse consequences for pure sampling.

  5. Household participation in recycling programs: a case study from Turkey.

    PubMed

    Budak, Fuat; Oguz, Burcu

    2008-11-01

    This study investigates the underlying factors that motivate households to participate in a pilot source separation and recycling program in Turkey. The data for this research were collected from randomly selected households in the program area via face-to-face interviews based on an inclusive questionnaire. The results of logistic regression analysis show that having sufficient knowledge regarding recycling and the recycling program is the most statistically significant factor in determining whether a household will participate in recycling. The results also imply that some of the socio-economic and demographic characteristics of households hypothesized to affect the household decision to participate in recycling, in the research framework, are not significant.

  6. Note: A pure-sampling quantum Monte Carlo algorithm with independent Metropolis

    NASA Astrophysics Data System (ADS)

    Vrbik, Jan; Ospadov, Egor; Rothstein, Stuart M.

    2016-07-01

    Recently, Ospadov and Rothstein published a pure-sampling quantum Monte Carlo algorithm (PSQMC) that features an auxiliary Path Z that connects the midpoints of the current and proposed Paths X and Y, respectively. When sufficiently long, Path Z provides statistical independence of Paths X and Y. Under those conditions, the Metropolis decision used in PSQMC is done without any approximation, i.e., not requiring microscopic reversibility and without having to introduce any G(x → x'; τ) factors into its decision function. This is a unique feature that contrasts with all competing reptation algorithms in the literature. An example illustrates that dependence of Paths X and Y has adverse consequences for pure sampling.

  7. Effects of different centrifugation conditions on clinical chemistry and Immunology test results.

    PubMed

    Minder, Elisabeth I; Schibli, Adrian; Mahrer, Dagmar; Nesic, Predrag; Plüer, Kathrin

    2011-05-10

    The effect of centrifugation time of heparinized blood samples on clinical chemistry and immunology results has rarely been studied. The WHO guideline proposed a 15 min centrifugation time without citing any scientific publications. The centrifugation time has a considerable impact on the turn-around time. We investigated 74 parameters in samples from 44 patients on a Roche Cobas 6000 system to see whether there was a statistically significant difference in the test results among specimens centrifuged at 2180 g for 15 min, at 2180 g for 10 min, or at 1870 g for 7 min, respectively. Two tubes with different plasma separators (both Greiner Bio-One) were used for each centrifugation condition. Statistical comparisons were made by Deming fit. Tubes with different separators showed identical results for all parameters. Likewise, excellent correlations were found among tubes to which different centrifugation conditions were applied. Fifty percent of the slopes lay between 0.99 and 1.01. Only 3.6 percent of the statistical test results fell outside the significance level of p < 0.05, which was less than the expected 5%. This suggests that the outliers are the result of random variation and the large number of statistical tests performed. Further, we found that our data are sufficient to keep the probability of missing a biased test (beta error) between 0.10 and 0.05 for most parameters. A centrifugation time of either 7 or 10 min provided identical test results compared to the time of 15 min proposed by WHO under the conditions used in our study.

  8. Effects of different centrifugation conditions on clinical chemistry and Immunology test results

    PubMed Central

    2011-01-01

    Background: The effect of centrifugation time of heparinized blood samples on clinical chemistry and immunology results has rarely been studied. The WHO guideline proposed a 15 min centrifugation time without citing any scientific publications. The centrifugation time has a considerable impact on the turn-around time. Methods: We investigated 74 parameters in samples from 44 patients on a Roche Cobas 6000 system to see whether there was a statistically significant difference in the test results among specimens centrifuged at 2180 g for 15 min, at 2180 g for 10 min, or at 1870 g for 7 min, respectively. Two tubes with different plasma separators (both Greiner Bio-One) were used for each centrifugation condition. Statistical comparisons were made by Deming fit. Results: Tubes with different separators showed identical results for all parameters. Likewise, excellent correlations were found among tubes to which different centrifugation conditions were applied. Fifty percent of the slopes lay between 0.99 and 1.01. Only 3.6 percent of the statistical test results fell outside the significance level of p < 0.05, which was less than the expected 5%. This suggests that the outliers are the result of random variation and the large number of statistical tests performed. Further, we found that our data are sufficient to keep the probability of missing a biased test (beta error) between 0.10 and 0.05 for most parameters. Conclusion: A centrifugation time of either 7 or 10 min provided identical test results compared to the time of 15 min proposed by WHO under the conditions used in our study. PMID:21569233

  9. Enhanced secondary analysis of survival data: reconstructing the data from published Kaplan-Meier survival curves.

    PubMed

    Guyot, Patricia; Ades, A E; Ouwens, Mario J N M; Welton, Nicky J

    2012-02-01

    The results of randomized controlled trials (RCTs) on time-to-event outcomes are usually reported as the median time to event and the Cox hazard ratio. These do not constitute the sufficient statistics required for meta-analysis or cost-effectiveness analysis, and their use in secondary analyses requires strong assumptions that may not have been adequately tested. In order to enhance the quality of secondary data analyses, we propose a method which derives from the published Kaplan-Meier survival curves a close approximation to the original individual patient time-to-event data from which they were generated. We develop an algorithm that maps from digitised curves back to KM data by finding numerical solutions to the inverted KM equations, using, where available, information on the number of events and numbers at risk. The reproducibility and accuracy of survival probabilities, median survival times and hazard ratios based on reconstructed KM data was assessed by comparing published statistics (survival probabilities, medians and hazard ratios) with statistics based on repeated reconstructions by multiple observers. The validation exercise established that there was no material systematic error and that there was a high degree of reproducibility for all statistics. Accuracy was excellent for survival probabilities and medians; for hazard ratios, reasonable accuracy can only be obtained if at least the numbers at risk or the total number of events are reported. The algorithm is a reliable tool for meta-analysis and cost-effectiveness analyses of RCTs reporting time-to-event data. It is recommended that all RCTs report information on numbers at risk and total number of events alongside KM curves.
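
    The core of the inversion can be illustrated in a few lines: the Kaplan-Meier recursion S_i = S_{i-1}(1 - d_i/n_i) is solved for the number of events d_i in each interval, given digitised survival probabilities and reported numbers at risk. The sketch below is a much-simplified, hedged illustration of that single step, not the full published algorithm (which also distributes censoring within intervals and iterates to match reported totals); the example numbers are invented.

      import numpy as np

      def events_from_km(surv, n_risk):
          """Recover approximate event counts from a digitised KM curve.

          surv   : survival probabilities S_0=1, S_1, ..., S_m at interval ends
          n_risk : numbers at risk at the start of each of the m intervals
          Solves S_i = S_{i-1} * (1 - d_i / n_i) for d_i and rounds to integers.
          """
          surv = np.asarray(surv, dtype=float)
          n_risk = np.asarray(n_risk, dtype=float)
          drops = 1.0 - surv[1:] / surv[:-1]
          return np.rint(n_risk * drops).astype(int)

      # digitised curve: survival falls from 1.0 to 0.4 over four intervals
      surv = [1.0, 0.90, 0.75, 0.55, 0.40]
      n_risk = [100, 88, 70, 48]            # reported numbers at risk
      print(events_from_km(surv, n_risk))   # approximate events per interval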

  10. Characterisation of seasonal flood types according to timescales in mixed probability distributions

    NASA Astrophysics Data System (ADS)

    Fischer, Svenja; Schumann, Andreas; Schulte, Markus

    2016-08-01

    When flood statistics are based on annual maximum series (AMS), the sample often contains flood peaks, which differ in their genesis. If the ratios among event types change over the range of observations, the extrapolation of a probability distribution function (pdf) can be dominated by a majority of events that belong to a certain flood type. If this type is not typical for extraordinarily large extremes, such an extrapolation of the pdf is misleading. To avoid this breach of the assumption of homogeneity, seasonal models were developed that distinguish between winter and summer floods. We show that a distinction between summer and winter floods is not always sufficient if seasonal series include events with different geneses. Here, we differentiate floods by their timescales into groups of long and short events. A statistical method for such a distinction of events is presented. To demonstrate their applicability, timescales for winter and summer floods in a German river basin were estimated. It is shown that summer floods can be separated into two main groups, but in our study region, the sample of winter floods consists of at least three different flood types. The pdfs of the two groups of summer floods are combined via a new mixing model. This model accounts for the fact that information about parallel events based only on their maximum values is incomplete, because some of the realisations are overlaid. A statistical method resulting in an amendment of statistical parameters is proposed. The application in a German case study demonstrates the advantages of the new model, with specific emphasis on flood types.

  11. Does reviewing lead to better learning and decision making? Answers from a randomized stock market experiment.

    PubMed

    Wessa, Patrick; Holliday, Ian E

    2012-01-01

    The literature is not univocal about the effects of Peer Review (PR) within the context of constructivist learning. Due to the predominant focus on using PR as an assessment tool, rather than as a constructivist learning activity, and because most studies implicitly assume that the benefits of PR are limited to the reviewee, little is known about the effects upon students who are required to review their peers. Much of the theoretical debate in the literature is focused on explaining how and why constructivist learning is beneficial. At the same time these discussions are marked by an underlying presupposition of a causal relationship between reviewing and deep learning. The purpose of the study is to investigate whether the writing of PR feedback causes students to benefit in terms of: perceived utility about statistics, actual use of statistics, better understanding of statistical concepts and associated methods, changed attitudes towards market risks, and outcomes of decisions that were made. We conducted a randomized experiment, assigning students randomly to receive PR or non-PR treatments and used two cohorts with a different time span. The paper discusses the experimental design and all the software components that we used to support the learning process: Reproducible Computing technology which allows students to reproduce or re-use statistical results from peers, Collaborative PR, and an AI-enhanced Stock Market Engine. The results establish that the writing of PR feedback messages causes students to experience benefits in terms of Behavior, Non-Rote Learning, and Attitudes, provided the sequence of PR activities is maintained for a sufficiently long period.

  12. The relative effects of habitat loss and fragmentation on population genetic variation in the red-cockaded woodpecker (Picoides borealis).

    PubMed

    Bruggeman, Douglas J; Wiegand, Thorsten; Fernández, Néstor

    2010-09-01

    The relative influence of habitat loss, fragmentation and matrix heterogeneity on the viability of populations is a critical area of conservation research that remains unresolved. Using simulation modelling, we provide an analysis of the influence both patch size and patch isolation have on abundance, effective population size (N(e)) and F(ST). An individual-based, spatially explicit population model based on 15 years of field work on the red-cockaded woodpecker (Picoides borealis) was applied to different landscape configurations. The variation in landscape patterns was summarized using spatial statistics based on O-ring statistics. By regressing demographic and genetic attributes that emerged across the landscape treatments against the proportion of total habitat and O-ring statistics, we show that O-ring statistics provide an explicit link between population processes, habitat area, and critical thresholds of fragmentation that affect those processes. Spatial distances among land cover classes that affect biological processes translated into critical scales at which the measures of landscape structure correlated best with genetic indices. Therefore our study infers pattern from process, which contrasts with past studies of landscape genetics. We found that population genetic structure was more strongly affected by fragmentation than population size, which suggests that examining only population size may limit recognition of fragmentation effects that erode genetic variation. If effective population size is used to set recovery goals for endangered species, then habitat fragmentation effects may be sufficiently strong to prevent evaluation of recovery based on the ratio of census:effective population size alone.

  13. Corpus-based Statistical Screening for Phrase Identification

    PubMed Central

    Kim, Won; Wilbur, W. John

    2000-01-01

    Purpose: The authors study the extraction of useful phrases from a natural language database by statistical methods. The aim is to leverage human effort by providing preprocessed phrase lists with a high percentage of useful material. Method: The approach is to develop six different scoring methods that are based on different aspects of phrase occurrence. The emphasis here is not on lexical information or syntactic structure but rather on the statistical properties of word pairs and triples that can be obtained from a large database. Measurements: The Unified Medical Language System (UMLS) incorporates a large list of humanly acceptable phrases in the medical field as a part of its structure. The authors use this list of phrases as a gold standard for validating their methods. A good method is one that ranks the UMLS phrases high among all phrases studied. Measurements are 11-point average precision values and precision-recall curves based on the rankings. Result: The authors find that each of the six scoring methods proves effective in identifying UMLS-quality phrases in a large subset of MEDLINE. These methods are applicable both to word pairs and word triples. All six methods are optimally combined to produce composite scoring methods that are more effective than any single method. The quality of the composite methods appears sufficient to support the automatic placement of hyperlinks in text at the site of highly ranked phrases. Conclusion: Statistical scoring methods provide a promising approach to the extraction of useful phrases from a natural language database for the purpose of indexing or providing hyperlinks in text. PMID:10984469
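
    One of the simplest statistics of this kind, offered here only as a generic illustration (the paper's six scores are not reproduced), is pointwise mutual information for adjacent word pairs: pairs that co-occur far more often than chance would suggest are promoted as candidate phrases. The toy text and threshold are invented.

      import math
      from collections import Counter

      def pmi_bigrams(tokens, min_count=2):
          """Score adjacent word pairs by pointwise mutual information."""
          unigrams = Counter(tokens)
          bigrams = Counter(zip(tokens, tokens[1:]))
          n_uni = sum(unigrams.values())
          n_bi = sum(bigrams.values())
          scores = {}
          for (w1, w2), c in bigrams.items():
              if c < min_count:
                  continue
              p12 = c / n_bi
              p1, p2 = unigrams[w1] / n_uni, unigrams[w2] / n_uni
              scores[(w1, w2)] = math.log(p12 / (p1 * p2))
          return sorted(scores.items(), key=lambda kv: -kv[1])

      text = ("myocardial infarction is treated early myocardial infarction "
              "risk rises with blood pressure and blood pressure control").split()
      print(pmi_bigrams(text))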

  14. Socio-Demographic and Clinical Characteristics are Not Clinically Useful Predictors of Refill Adherence in Patients with Hypertension

    PubMed Central

    Steiner, John F.; Ho, P. Michael; Beaty, Brenda L.; Dickinson, L. Miriam; Hanratty, Rebecca; Zeng, Chan; Tavel, Heather M.; Havranek, Edward P.; Davidson, Arthur J.; Magid, David J.; Estacio, Raymond O.

    2009-01-01

    Background Although many studies have identified patient characteristics or chronic diseases associated with medication adherence, the clinical utility of such predictors has rarely been assessed. We attempted to develop clinical prediction rules for adherence with antihypertensive medications in two health care delivery systems. Methods and Results Retrospective cohort studies of hypertension registries in an inner-city health care delivery system (N = 17176) and a health maintenance organization (N = 94297) in Denver, Colorado. Adherence was defined by acquisition of 80% or more of antihypertensive medications. A multivariable model in the inner-city system found that adherent patients (36.3% of the total) were more likely than non-adherent patients to be older, white, married, and acculturated in US society, to have diabetes or cerebrovascular disease, not to abuse alcohol or controlled substances, and to be prescribed less than three antihypertensive medications. Although statistically significant, all multivariate odds ratios were 1.7 or less, and the model did not accurately discriminate adherent from non-adherent patients (C-statistic = 0.606). In the health maintenance organization, where 72.1% of patients were adherent, significant but weak associations existed between adherence and older age, white race, the lack of alcohol abuse, and fewer antihypertensive medications. The multivariate model again failed to accurately discriminate adherent from non-adherent individuals (C-statistic = 0.576). Conclusions Although certain socio-demographic characteristics or clinical diagnoses are statistically associated with adherence to refills of antihypertensive medications, a combination of these characteristics is not sufficiently accurate to allow clinicians to predict whether their patients will be adherent with treatment. PMID:20031876

  15. Statistical learning theory for high dimensional prediction: Application to criterion-keyed scale development.

    PubMed

    Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R

    2016-12-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how 3 common SLT algorithms-supervised principal components, regularization, and boosting-can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach-or perhaps because of them-SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
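
    As a hedged sketch of the regularization-plus-cross-validation recipe described here (using scikit-learn rather than the authors' code, with an entirely synthetic item pool and outcome), an L1-penalized logistic regression selects and weights items so that out-of-fold prediction error, rather than in-sample likelihood, drives the scale; every parameter below is an illustrative assumption.

      import numpy as np
      from sklearn.linear_model import LogisticRegressionCV

      rng = np.random.default_rng(3)
      n_subjects, n_items = 500, 120          # synthetic personality item pool
      items = rng.normal(size=(n_subjects, n_items))
      # only a handful of items truly carry signal about the (binary) outcome
      true_beta = np.zeros(n_items)
      true_beta[:8] = 0.8
      logit = items @ true_beta - 1.0
      died = rng.random(n_subjects) < 1.0 / (1.0 + np.exp(-logit))

      # L1 penalty + cross-validation: model complexity is chosen to minimize
      # out-of-fold prediction error (the EPE idea), not in-sample fit.
      model = LogisticRegressionCV(penalty="l1", solver="liblinear",
                                   Cs=10, cv=5, scoring="neg_log_loss")
      model.fit(items, died)
      kept = np.flatnonzero(model.coef_[0])
      print(f"{kept.size} items kept for the criterion-keyed scale:", kept[:10])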

  16. The transfer of analytical procedures.

    PubMed

    Ermer, J; Limberger, M; Lis, K; Wätzig, H

    2013-11-01

    Analytical method transfers are certainly among the most discussed topics in the GMP regulated sector. However, they are surprisingly little regulated in detail. General information is provided by USP, WHO, and ISPE in particular. Most recently, the EU emphasized the importance of analytical transfer by including it in their draft of the revised GMP Guideline. In this article, an overview and comparison of these guidelines is provided. The key to success for method transfers is excellent communication between the sending and receiving units. In order to facilitate this communication, procedures, flow charts and checklists for responsibilities, success factors, transfer categories, the transfer plan and report, strategies in case of failed transfers, and tables with acceptance limits are provided here, together with a comprehensive glossary. Potential pitfalls are described such that they can be avoided. In order to assure an efficient and sustainable transfer of analytical procedures, a practically relevant and scientifically sound evaluation with corresponding acceptance criteria is crucial. Various strategies and statistical tools such as significance tests, absolute acceptance criteria, and equivalence tests are thoroughly described and compared in detail, with examples. Significance tests should be avoided. The success criterion is not statistical significance, but rather analytical relevance. Depending on a risk assessment of the analytical procedure in question, statistical equivalence tests are recommended, because they include both a practically relevant acceptance limit and a direct control of the statistical risks. However, for lower risk procedures, a simple comparison of the transfer performance parameters to absolute limits is also regarded as sufficient. Copyright © 2013 Elsevier B.V. All rights reserved.
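
    To make the contrast with significance testing concrete, here is a hedged sketch of a two one-sided tests (TOST) equivalence comparison between a sending and a receiving laboratory; the acceptance limit of ±2% and the example recovery data are invented, and in practice the limit must come from the risk assessment the article describes.

      import numpy as np
      from scipy import stats

      def tost_equivalence(x, y, limit):
          """Two one-sided t-tests: are the lab means equivalent within +/- limit?"""
          nx, ny = len(x), len(y)
          diff = np.mean(x) - np.mean(y)
          sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
          se = np.sqrt(sp2 * (1 / nx + 1 / ny))
          df = nx + ny - 2
          t_lower = (diff + limit) / se      # H0: diff <= -limit
          t_upper = (diff - limit) / se      # H0: diff >= +limit
          p_lower = stats.t.sf(t_lower, df)
          p_upper = stats.t.cdf(t_upper, df)
          return max(p_lower, p_upper)       # equivalence if this is < alpha

      sending = np.array([99.2, 100.1, 99.8, 100.4, 99.6, 100.0])    # % recovery
      receiving = np.array([99.0, 99.7, 100.2, 99.5, 99.9, 100.3])
      p = tost_equivalence(sending, receiving, limit=2.0)
      print(f"TOST p-value = {p:.4f} (equivalent at alpha=0.05: {p < 0.05})")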

  17. The effect of coenzyme Q10 in statin myopathy.

    PubMed

    Zlatohlavek, Lukas; Vrablik, Michal; Grauova, Barbora; Motykova, Eva; Ceska, Richard

    2012-01-01

    Statins significantly reduce CV morbidity and mortality. Unfortunately, one of the side effects of statins is myopathy, because of which statins cannot be administered in sufficient doses or administered at all. The aim of this study was to demonstrate the effect of coenzyme Q10 in patients with statin myopathy. Twenty-eight patients aged 60.6±10.7 years (18 women and 10 men) were monitored and treated with different types and doses of statin. Muscle weakness and pain were monitored using a scale of one to ten, on which patients expressed the degree of their inconvenience. Examination of muscle problems was performed prior to administration of CoQ10 and after 3 and 6 months of dosing. Statistical analysis was performed using the Friedman test, ANOVA, and Student's t-test. Pain decreased on average by 53.8% (p<0.0001), muscle weakness by 44.4% (p<0.0001). The CoQ10 levels were increased by more than 194% (from 0.903 μg/ml to 2.66 μg/ml; p<0.0001). After a six-month administration of coenzyme Q10, muscle pain and sensitivity had statistically significantly decreased.

  18. Breaking time reversal in a simple smooth chaotic system.

    PubMed

    Tomsovic, Steven; Ullmo, Denis; Nagano, Tatsuro

    2003-06-01

    Within random matrix theory, the statistics of the eigensolutions depend fundamentally on the presence (or absence) of time reversal symmetry. Accepting the Bohigas-Giannoni-Schmit conjecture, this statement extends to quantum systems with chaotic classical analogs. For practical reasons, much of the supporting numerical studies of symmetry breaking have been done with billiards or maps, and little with simple, smooth systems. There are two main difficulties in attempting to break time reversal invariance in a continuous time system with a smooth potential. The first is avoiding false time reversal breaking. The second is locating a parameter regime in which the symmetry breaking is strong enough to transform the fluctuation properties fully to the broken symmetry case, and yet remain weak enough so as not to regularize the dynamics sufficiently that the system is no longer chaotic. We give an example of a system of two coupled quartic oscillators whose energy level statistics closely match with those of the Gaussian unitary ensemble, and which possesses only a minor proportion of regular motion in its phase space.

  19. The accurate assessment of small-angle X-ray scattering data

    DOE PAGES

    Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; ...

    2015-01-23

    Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. Studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.

  20. Observation of prethermalization in long-range interacting spin chains

    PubMed Central

    Neyenhuis, Brian; Zhang, Jiehang; Hess, Paul W.; Smith, Jacob; Lee, Aaron C.; Richerme, Phil; Gong, Zhe-Xuan; Gorshkov, Alexey V.; Monroe, Christopher

    2017-01-01

    Although statistical mechanics describes thermal equilibrium states, these states may or may not emerge dynamically for a subsystem of an isolated quantum many-body system. For instance, quantum systems that are near-integrable usually fail to thermalize in an experimentally realistic time scale, and instead relax to quasi-stationary prethermal states that can be described by statistical mechanics, when approximately conserved quantities are included in a generalized Gibbs ensemble (GGE). We experimentally study the relaxation dynamics of a chain of up to 22 spins evolving under a long-range transverse-field Ising Hamiltonian following a sudden quench. For sufficiently long-range interactions, the system relaxes to a new type of prethermal state that retains a strong memory of the initial conditions. However, the prethermal state in this case cannot be described by a standard GGE; it rather arises from an emergent double-well potential felt by the spin excitations. This result shows that prethermalization occurs in a broader context than previously thought, and reveals new challenges for a generic understanding of the thermalization of quantum systems, particularly in the presence of long-range interactions. PMID:28875166

  1. Direct Breakthrough Curve Prediction From Statistics of Heterogeneous Conductivity Fields

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Haslauer, Claus P.; Cirpka, Olaf A.; Vesselinov, Velimir V.

    2018-01-01

    This paper presents a methodology to predict the shape of solute breakthrough curves in heterogeneous aquifers at early times and/or under high degrees of heterogeneity, both cases in which the classical macrodispersion theory may not be applicable. The methodology relies on the observation that breakthrough curves in heterogeneous media are generally well described by lognormal distributions, and mean breakthrough times can be predicted analytically. The log-variance of solute arrival is thus sufficient to completely specify the breakthrough curves, and this is calibrated as a function of aquifer heterogeneity and dimensionless distance from a source plane by means of Monte Carlo analysis and statistical regression. Using the ensemble of simulated groundwater flow and solute transport realizations employed to calibrate the predictive regression, reliability estimates for the prediction are also developed. Additional theoretical contributions include heuristics for the time until an effective macrodispersion coefficient becomes applicable, and also an expression for its magnitude that applies in highly heterogeneous systems. It is seen that the results here represent a way to derive continuous time random walk transition distributions from physical considerations rather than from empirical field calibration.
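
    In practice the resulting prediction is easy to write down: once the mean arrival time is available analytically and the log-variance is taken from the calibrated regression, the breakthrough curve is simply the corresponding lognormal density. A small hedged sketch follows; the mean arrival time and log-variance values are placeholders for whatever the analytical solution and the calibrated regression would return.

      import numpy as np
      from scipy import stats

      def lognormal_btc(t, mean_arrival, log_variance):
          """Breakthrough curve as a lognormal arrival-time density.

          A lognormal with parameters (mu, sigma) has mean exp(mu + sigma^2/2),
          so mu is fixed by the (analytically predicted) mean arrival time and
          sigma^2 by the regression-calibrated log-variance.
          """
          sigma = np.sqrt(log_variance)
          mu = np.log(mean_arrival) - 0.5 * log_variance
          return stats.lognorm.pdf(t, s=sigma, scale=np.exp(mu))

      t = np.linspace(1.0, 500.0, 200)            # days
      c = lognormal_btc(t, mean_arrival=120.0, log_variance=0.35)
      print("peak arrival near day", t[np.argmax(c)])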

  2. Alluvial substrate mapping by automated texture segmentation of recreational-grade side scan sonar imagery

    PubMed Central

    Buscombe, Daniel; Wheaton, Joseph M.

    2018-01-01

    Side scan sonar in low-cost ‘fishfinder’ systems has become popular in aquatic ecology and sedimentology for imaging submerged riverbed sediment at coverages and resolutions sufficient to relate bed texture to grain-size. Traditional methods to map bed texture (i.e. physical samples) are relatively high-cost and low spatial coverage compared to sonar, which can continuously image several kilometers of channel in a few hours. Towards a goal of automating the classification of bed habitat features, we investigate relationships between substrates and statistical descriptors of bed textures in side scan sonar echograms of alluvial deposits. We develop a method for automated segmentation of bed textures into between two to five grain-size classes. Second-order texture statistics are used in conjunction with a Gaussian Mixture Model to classify the heterogeneous bed into small homogeneous patches of sand, gravel, and boulders with an average accuracy of 80%, 49%, and 61%, respectively. Reach-averaged proportions of these sediment types were within 3% compared to similar maps derived from multibeam sonar. PMID:29538449
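
    A hedged sketch of the general recipe (local texture statistics fed to a Gaussian Mixture Model) is shown below; it uses simple windowed mean/variance features rather than the paper's second-order (co-occurrence) statistics, and the synthetic "echogram", the window size and the class count are illustrative assumptions only.

      import numpy as np
      from scipy.ndimage import uniform_filter
      from sklearn.mixture import GaussianMixture

      def texture_classes(img, win=9, n_classes=3):
          """Segment an echogram into texture classes with a GMM (sketch).

          Features per pixel: local mean and local variance in a win x win box.
          """
          local_mean = uniform_filter(img, size=win)
          local_var = uniform_filter(img ** 2, size=win) - local_mean ** 2
          feats = np.column_stack([local_mean.ravel(), local_var.ravel()])
          gmm = GaussianMixture(n_components=n_classes, random_state=0)
          labels = gmm.fit_predict(feats)
          return labels.reshape(img.shape)

      # synthetic "echogram": smooth sand on the left, rough gravel on the right
      rng = np.random.default_rng(4)
      img = np.hstack([rng.normal(0.3, 0.02, (64, 64)),
                       rng.normal(0.5, 0.15, (64, 64))])
      labels = texture_classes(img, n_classes=2)
      print(np.bincount(labels.ravel()))   # pixels assigned to each texture class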

  3. Bi-PROF

    PubMed Central

    Gries, Jasmin; Schumacher, Dirk; Arand, Julia; Lutsik, Pavlo; Markelova, Maria Rivera; Fichtner, Iduna; Walter, Jörn; Sers, Christine; Tierling, Sascha

    2013-01-01

    The use of next-generation sequencing has expanded our view of whole mammalian methylome patterns. In particular, it provides a genome-wide insight into local DNA methylation diversity at the single-nucleotide level and enables the examination of single chromosome sequence sections with sufficient statistical power. We describe a bisulfite-based sequence profiling pipeline, Bi-PROF, based on the 454 GS-FLX Titanium technology, which allows up to one million sequence stretches to be obtained at single base pair resolution without laborious subcloning. To illustrate the performance of the experimental workflow connected to a bioinformatics program pipeline (BiQ Analyzer HT), we present a test analysis set of 68 different epigenetic marker regions (amplicons) in five individual patient-derived xenograft tissue samples of colorectal cancer and one healthy colon epithelium sample as a control. After the 454 GS-FLX Titanium run, sequence read processing and sample decoding, the obtained alignments are quality controlled and statistically evaluated. Comprehensive methylation pattern interpretation (profiling), assessed by analyzing 10^2-10^4 sequence reads per amplicon, allows an unprecedentedly deep view of pattern formation and methylation marker heterogeneity in tissues affected by complex diseases like cancer. PMID:23803588

  4. Energy Efficiency Optimization in Relay-Assisted MIMO Systems With Perfect and Statistical CSI

    NASA Astrophysics Data System (ADS)

    Zappone, Alessio; Cao, Pan; Jorswieck, Eduard A.

    2014-01-01

    A framework for energy-efficient resource allocation in a single-user, amplify-and-forward relay-assisted MIMO system is devised in this paper. Previous results in this area have focused on rate maximization or sum power minimization problems, whereas fewer results are available when bits/Joule energy efficiency (EE) optimization is the goal. The performance metric to optimize is the ratio between the system's achievable rate and the total consumed power. The optimization is carried out with respect to the source and relay precoding matrices, subject to QoS and power constraints. Such a challenging non-convex problem is tackled by means of fractional programming and alternating maximization algorithms, for various CSI assumptions at the source and relay. In particular, the scenarios of perfect CSI and of statistical CSI for either the source-relay or the relay-destination channel are addressed. Moreover, sufficient conditions for beamforming optimality are derived, which is useful in simplifying the system design. Numerical results are provided to corroborate the validity of the theoretical findings.
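
    The fractional-programming step can be illustrated generically with Dinkelbach's method, which turns the ratio maximization max f(x)/g(x) into a sequence of parametric subproblems max f(x) - lambda*g(x); the toy rate and power functions and the grid search below are stand-ins, not the paper's precoder optimization.

      import numpy as np

      def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
          """Maximize f(x)/g(x) over a finite candidate set via Dinkelbach's method."""
          lam = 0.0
          x_best = candidates[0]
          for _ in range(max_iter):
              vals = f(candidates) - lam * g(candidates)
              x_best = candidates[np.argmax(vals)]
              if vals.max() < tol:             # F(lambda) ~ 0  =>  optimum found
                  break
              lam = f(x_best) / g(x_best)      # update lambda to the current ratio
          return x_best, lam

      # toy energy-efficiency problem: rate(p) / power(p) over transmit powers p
      rate = lambda p: np.log2(1.0 + 5.0 * p)      # achievable rate
      power = lambda p: 1.0 + p                    # circuit + transmit power
      p_grid = np.linspace(0.01, 10.0, 2000)
      p_opt, ee = dinkelbach(rate, power, p_grid)
      print(f"optimal power ~ {p_opt:.3f}, energy efficiency ~ {ee:.3f}")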

  5. On base station cooperation using statistical CSI in jointly correlated MIMO downlink channels

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Jiang, Bin; Jin, Shi; Gao, Xiqi; Wong, Kai-Kit

    2012-12-01

    This article studies the transmission of a single cell-edge user's signal using statistical channel state information at cooperative base stations (BSs) with a general jointly correlated multiple-input multiple-output (MIMO) channel model. We first present an optimal scheme to maximize the ergodic sum capacity with per-BS power constraints, revealing that the transmitted signals of all BSs are mutually independent and the optimum transmit directions for each BS align with the eigenvectors of the BS's own transmit correlation matrix of the channel. Then, we employ matrix permanents to derive a closed-form tight upper bound for the ergodic sum capacity. Based on these results, we develop a low-complexity power allocation solution using convex optimization techniques and a simple iterative water-filling algorithm (IWFA) for power allocation. Finally, we derive a necessary and sufficient condition for which a beamforming approach achieves capacity for all BSs. Simulation results demonstrate that the upper bound of ergodic sum capacity is tight and the proposed cooperative transmission scheme increases the downlink system sum capacity considerably.

  6. Using high-resolution variant frequencies to empower clinical genome interpretation.

    PubMed

    Whiffin, Nicola; Minikel, Eric; Walsh, Roddy; O'Donnell-Luria, Anne H; Karczewski, Konrad; Ing, Alexander Y; Barton, Paul J R; Funke, Birgit; Cook, Stuart A; MacArthur, Daniel; Ware, James S

    2017-10-01

    Purpose: Whole-exome and whole-genome sequencing have transformed the discovery of genetic variants that cause human Mendelian disease, but discriminating pathogenic from benign variants remains a daunting challenge. Rarity is recognized as a necessary, although not sufficient, criterion for pathogenicity, but frequency cutoffs used in Mendelian analysis are often arbitrary and overly lenient. Recent very large reference datasets, such as the Exome Aggregation Consortium (ExAC), provide an unprecedented opportunity to obtain robust frequency estimates even for very rare variants. Methods: We present a statistical framework for the frequency-based filtering of candidate disease-causing variants, accounting for disease prevalence, genetic and allelic heterogeneity, inheritance mode, penetrance, and sampling variance in reference datasets. Results: Using the example of cardiomyopathy, we show that our approach reduces by two-thirds the number of candidate variants under consideration in the average exome, without removing true pathogenic variants (false-positive rate < 0.001). Conclusion: We outline a statistically robust framework for assessing whether a variant is "too common" to be causative for a Mendelian disorder of interest. We present precomputed allele frequency cutoffs for all variants in the ExAC dataset.

  7. Perception-based road hazard identification with Internet support.

    PubMed

    Tarko, Andrew P; DeSalle, Brian R

    2003-01-01

    One of the most important tasks faced by highway agencies is identifying road hazards. Agencies use crash statistics to detect road intersections and segments where the frequency of crashes is excessive. With the crash-based method, a dangerous intersection or segment can be pointed out only after a sufficient number of crashes occur. A more proactive method is needed, and motorist complaints may be able to assist agencies in detecting road hazards before crashes occur. This paper investigates the quality of safety information reported by motorists and the effectiveness of hazard identification based on motorist reports, which were collected with an experimental Internet website. It demonstrates that the intersections pointed out by motorists tended to have more crashes than other intersections. The safety information collected through the website was comparable to 2-3 months of crash data. It was concluded that although the Internet-based method could not substitute for the traditional crash-based methods, its joint use with crash statistics might be useful in detecting new hazards where crash data had been collected for a short time.

  8. Propagation of radially polarized multi-cosine Gaussian Schell-model beams in non-Kolmogorov turbulence

    NASA Astrophysics Data System (ADS)

    Tang, Miaomiao; Zhao, Daomu; Li, Xinzhong; Wang, Jingge

    2018-01-01

    Recently, we introduced a new class of radially polarized beams with multi-cosine Gaussian Schell-model (MCGSM) correlation function based on the partially coherent theory (Tang et al., 2017). In this manuscript, we extend the work to study the statistical properties such as the spectral density, the degree of coherence, the degree of polarization, and the state of polarization of the beam propagating in isotropic turbulence with a non-Kolmogorov power spectrum. Analytical formulas for the cross-spectral density matrix elements of a radially polarized MCGSM beam in non-Kolmogorov turbulence are derived. Numerical results show that lattice-like intensity pattern of the beam, which keeps propagation-invariant in free space, is destroyed by the turbulence when it passes at sufficiently large distances from the source. It is also shown that the polarization properties are mainly affected by the source correlation functions, and change in the turbulent statistics plays a relatively small effect. In addition, the polarization state exhibits self-splitting property and each beamlet evolves into radially polarized structure upon propagation.

  9. A simple formula for insertion loss prediction of large acoustical enclosures using statistical energy analysis method

    NASA Astrophysics Data System (ADS)

    Kim, Hyun-Sil; Kim, Jae-Seung; Lee, Seong-Hyun; Seo, Yun-Ho

    2014-12-01

    Insertion loss prediction of large acoustical enclosures using Statistical Energy Analysis (SEA) method is presented. The SEA model consists of three elements: sound field inside the enclosure, vibration energy of the enclosure panel, and sound field outside the enclosure. It is assumed that the space surrounding the enclosure is sufficiently large so that there is no energy flow from the outside to the wall panel or to air cavity inside the enclosure. The comparison of the predicted insertion loss to the measured data for typical large acoustical enclosures shows good agreements. It is found that if the critical frequency of the wall panel falls above the frequency region of interest, insertion loss is dominated by the sound transmission loss of the wall panel and averaged sound absorption coefficient inside the enclosure. However, if the critical frequency of the wall panel falls into the frequency region of interest, acoustic power from the sound radiation by the wall panel must be added to the acoustic power from transmission through the panel.

  10. Random ambience using high fidelity images

    NASA Astrophysics Data System (ADS)

    Abu, Nur Azman; Sahib, Shahrin

    2011-06-01

    Most secure communication nowadays mandates true random keys as an input. These operations are mostly designed and taken care of by the developers of the cryptosystem. Due to the nature of confidential crypto development today, pseudorandom keys are typically designed and still preferred by the developers of the cryptosystem. However, these pseudorandom keys are predictable, periodic and repeatable, and hence carry minimal entropy. True random keys are believed to be generated only via hardware random number generators. Careful statistical analysis is still required to have any confidence that the process and apparatus generate numbers that are sufficiently random for cryptographic use. In this research, each moment in life is considered unique in itself. The random key is unique to the moment at which the user generates it, whenever random keys are needed in practical secure communication. The ambience captured in a high-fidelity digital image shall be tested for its randomness according to the NIST Statistical Test Suite. A recommendation for generating simple random cryptographic keys live at 4 megabits per second shall be reported.
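
    As a concrete illustration of the kind of check applied by the NIST Statistical Test Suite, the sketch below implements the frequency (monobit) test in Python. The bit-extraction step from the image ambience is hypothetical and only hinted at in the comments; the test itself follows NIST SP 800-22.

      import math

      def monobit_frequency_test(bits):
          """Frequency (monobit) test in the spirit of NIST SP 800-22.

          bits: sequence of 0/1 values, e.g. (hypothetically) the least
          significant bits of pixels from a high-fidelity ambience image.
          Returns a p-value; values >= 0.01 are conventionally taken as
          consistent with randomness.
          """
          n = len(bits)
          s = sum(1 if b else -1 for b in bits)     # map 0 -> -1, 1 -> +1 and sum
          s_obs = abs(s) / math.sqrt(n)             # normalised partial sum
          return math.erfc(s_obs / math.sqrt(2.0))  # two-sided tail probability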

  11. Correlation and agreement: overview and clarification of competing concepts and measures.

    PubMed

    Liu, Jinyuan; Tang, Wan; Chen, Guanqin; Lu, Yin; Feng, Changyong; Tu, Xin M

    2016-04-25

    Agreement and correlation are widely-used concepts that assess the association between variables. Although similar and related, they represent completely different notions of association. Assessing agreement between variables assumes that the variables measure the same construct, while correlation of variables can be assessed for variables that measure completely different constructs. This conceptual difference requires the use of different statistical methods, and when assessing agreement or correlation, the statistical method may vary depending on the distribution of the data and the interest of the investigator. For example, the Pearson correlation, a popular measure of correlation between continuous variables, is only informative when applied to variables that have linear relationships; it may be non-informative or even misleading when applied to variables that are not linearly related. Likewise, the intraclass correlation, a popular measure of agreement between continuous variables, may not provide sufficient information for investigators if the nature of poor agreement is of interest. This report reviews the concepts of agreement and correlation and discusses differences in the application of several commonly used measures.
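
    A minimal numerical sketch, not taken from the report, makes the distinction concrete in Python: Pearson's r rewards any linear relationship, whereas a one-way intraclass correlation is penalised when two measurements of the same construct disagree in level.

      import numpy as np

      def pearson_r(x, y):
          # Linear association only; insensitive to additive or multiplicative bias.
          return np.corrcoef(x, y)[0, 1]

      def icc_oneway(ratings):
          """One-way random-effects ICC(1,1) for an n_subjects x k_raters array.

          Unlike Pearson's r, this agreement measure drops when raters differ
          systematically in level, even if their scores are perfectly correlated.
          """
          ratings = np.asarray(ratings, dtype=float)
          n, k = ratings.shape
          grand = ratings.mean()
          subj_means = ratings.mean(axis=1)
          ms_between = k * ((subj_means - grand) ** 2).sum() / (n - 1)
          ms_within = ((ratings - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
          return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    For example, two raters whose scores differ by a constant offset have r = 1 but an ICC below 1, by an amount that grows with the size of the offset relative to the between-subject spread.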

  12. Set-free Markov state model building

    NASA Astrophysics Data System (ADS)

    Weber, Marcus; Fackeldey, Konstantin; Schütte, Christof

    2017-03-01

    Molecular dynamics (MD) simulations face challenging problems since the time scales of interest often are much longer than what is possible to simulate; and even if sufficiently long simulations are possible the complex nature of the resulting simulation data makes interpretation difficult. Markov State Models (MSMs) help to overcome these problems by making experimentally relevant time scales accessible via coarse grained representations that also allow for convenient interpretation. However, standard set-based MSMs exhibit some caveats limiting their approximation quality and statistical significance. One of the main caveats results from the fact that typical MD trajectories repeatedly re-cross the boundary between the sets used to build the MSM which causes statistical bias in estimating the transition probabilities between these sets. In this article, we present a set-free approach to MSM building utilizing smooth overlapping ansatz functions instead of sets and an adaptive refinement approach. This kind of meshless discretization helps to overcome the recrossing problem and yields an adaptive refinement procedure that allows us to improve the quality of the model while exploring state space and inserting new ansatz functions into the MSM.
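
    The recrossing problem and its meshless remedy can be made concrete with a toy one-dimensional sketch, assuming Gaussian ansatz functions and a simple overlap estimator; this illustrates the general idea rather than the authors' algorithm.

      import numpy as np

      def soft_memberships(traj, centers, width):
          """Evaluate smooth Gaussian ansatz functions on a 1-D trajectory and
          normalise them so that the memberships of each frame sum to one."""
          traj = np.asarray(traj, dtype=float)
          centers = np.asarray(centers, dtype=float)
          w = np.exp(-(traj[:, None] - centers[None, :]) ** 2 / (2.0 * width ** 2))
          return w / w.sum(axis=1, keepdims=True)

      def soft_transition_matrix(traj, centers, width, lag):
          """Estimate transition probabilities between ansatz functions.

          Frames near a boundary carry fractional weight in several functions,
          so fast recrossings no longer register as full transitions."""
          chi = soft_memberships(traj, centers, width)
          counts = chi[:-lag].T @ chi[lag:]            # soft count matrix
          return counts / counts.sum(axis=1, keepdims=True)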

  13. Statistical Methodologies for the Optimization of Lipase and Biosurfactant by Ochrobactrum intermedium Strain MZV101 in an Identical Medium for Detergent Applications.

    PubMed

    Ebrahimipour, Gholamhossein; Sadeghi, Hossein; Zarinviarsagh, Mina

    2017-09-11

    The Plackett-Burman design and the Box-Behnken design, both statistical methodologies, were employed for the optimization of lipase and biosurfactant production by Ochrobactrum intermedium strain MZV101 in an identical broth medium for detergent applications. The environmental factor pH was determined to be the most significant variable shared by the two production processes. A high concentration of molasses at high temperature and pH has a negative effect on lipase and biosurfactant production by O. intermedium strain MZV101. The chosen mathematical method of medium optimization was sufficient for improving the industrial production of lipase and biosurfactant by the bacterium, which were increased 3.46- and 1.89-fold, respectively. The time to maximum production was shortened by 24 h, making the process fast and cost-saving. In conclusion, lipase and biosurfactant production by O. intermedium strain MZV101 in an identical culture medium at pH 10.5-11 and 50-60 °C, with 1 g/L of molasses, seemed to be economical, fast, and effective for enhancing the yield percentage for use in detergent applications.
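
    The analysis step that typically accompanies a Box-Behnken design is a second-order response-surface fit. The generic least-squares sketch below, written for invented inputs rather than the study's data, shows the model form on which such an optimization rests.

      import numpy as np

      def fit_quadratic_response_surface(X, y):
          """Ordinary least-squares fit of a second-order response surface,
          y ~ b0 + sum_i bi*xi + sum_i bii*xi^2 + sum_{i<j} bij*xi*xj,
          the model usually paired with a Box-Behnken design."""
          X = np.asarray(X, dtype=float)
          n, k = X.shape
          cols = [np.ones(n)]
          cols += [X[:, i] for i in range(k)]                       # linear terms
          cols += [X[:, i] ** 2 for i in range(k)]                  # quadratic terms
          cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
          design = np.column_stack(cols)
          coef, *_ = np.linalg.lstsq(design, np.asarray(y, dtype=float), rcond=None)
          return coef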

  14. Comparison of cosmology and seabed acoustics measurements using statistical inference from maximum entropy

    NASA Astrophysics Data System (ADS)

    Knobles, David; Stotts, Steven; Sagers, Jason

    2012-03-01

    Why can one obtain from similar measurements a greater amount of information about cosmological parameters than seabed parameters in ocean waveguides? The cosmological measurements are in the form of a power spectrum constructed from spatial correlations of temperature fluctuations within the microwave background radiation. The seabed acoustic measurements are in the form of spatial correlations along the length of a spatial aperture. This study explores the above question from the perspective of posterior probability distributions obtained from maximizing a relative entropy functional. An answer is in part that the seabed in shallow ocean environments generally has large temporal and spatial inhomogeneities, whereas the early universe was a nearly homogeneous cosmological soup with small but important fluctuations. Acoustic propagation models used in shallow water acoustics generally do not capture spatial and temporal variability sufficiently well, which leads to model error dominating the statistical inference problem. This is not the case in cosmology. Further, the physics of the acoustic modes in cosmology is that of a standing wave with simple initial conditions, whereas for underwater acoustics it is a traveling wave in a strongly inhomogeneous bounded medium.

  15. Fast and Accurate Protein False Discovery Rates on Large-Scale Proteomics Data Sets with Percolator 3.0

    NASA Astrophysics Data System (ADS)

    The, Matthew; MacCoss, Michael J.; Noble, William S.; Käll, Lukas

    2016-11-01

    Percolator is a widely used software tool that increases yield in shotgun proteomics experiments and assigns reliable statistical confidence measures, such as q values and posterior error probabilities, to peptides and peptide-spectrum matches (PSMs) from such experiments. Percolator's processing speed has been sufficient for typical data sets consisting of hundreds of thousands of PSMs. With our new scalable approach, we can now also analyze millions of PSMs in a matter of minutes on a commodity computer. Furthermore, with the increasing awareness for the need for reliable statistics on the protein level, we compared several easy-to-understand protein inference methods and implemented the best-performing method—grouping proteins by their corresponding sets of theoretical peptides and then considering only the best-scoring peptide for each protein—in the Percolator package. We used Percolator 3.0 to analyze the data from a recent study of the draft human proteome containing 25 million spectra (PM:24870542). The source code and Ubuntu, Windows, MacOS, and Fedora binary packages are available from http://percolator.ms/ under an Apache 2.0 license.
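
    The best-performing protein inference strategy described above can be sketched in a few lines of Python. This is a schematic re-implementation of the grouping-plus-best-peptide idea, not Percolator's actual code, and the input dictionaries are hypothetical.

      from collections import defaultdict

      def infer_protein_groups(psm_scores, peptide_to_proteins):
          """Group proteins by their sets of theoretical peptides and score each
          group by its single best-scoring observed peptide.

          psm_scores: dict peptide -> best PSM score for that peptide
          peptide_to_proteins: dict peptide -> iterable of protein accessions
          """
          # Build protein -> set of theoretical peptides.
          protein_peptides = defaultdict(set)
          for pep, prots in peptide_to_proteins.items():
              for prot in prots:
                  protein_peptides[prot].add(pep)

          # Proteins with identical peptide sets are indistinguishable: group them.
          groups = defaultdict(list)
          for prot, peps in protein_peptides.items():
              groups[frozenset(peps)].append(prot)

          # Score each group by its best-scoring observed peptide.
          scored = []
          for peps, prots in groups.items():
              observed = [psm_scores[p] for p in peps if p in psm_scores]
              if observed:
                  scored.append((max(observed), sorted(prots)))
          return sorted(scored, reverse=True)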

  16. The Complete Redistribution Approximation in Optically Thick Line-Driven Winds

    NASA Astrophysics Data System (ADS)

    Gayley, K. G.; Onifer, A. J.

    2001-05-01

    Wolf-Rayet winds are thought to exhibit large momentum fluxes, which has in part been explained by ionization stratification in the wind. However, it is the cause of high mass loss, not high momentum flux, that remains largely a mystery, because standard models fail to achieve sufficient acceleration near the surface where the mass-loss rate is set. We consider a radiative transfer approximation, called the complete redistribution approximation, that allows the dynamics of optically thick Wolf-Rayet winds to be modeled without detailed treatment of the radiation field. In this approximation, it is assumed that thermalization processes cause the photon frequencies to be completely randomized over the course of propagation through the wind, which allows the radiation field to be treated statistically rather than in detail. Thus the approach is similar to the statistical treatment of the line list used in the celebrated CAK approach. The results differ from the effectively gray treatment in that the radiation field is influenced by the line distribution, and the role of gaps in the line distribution is enhanced. The ramifications for the driving of large mass-loss rates are explored.

  17. A study of 2-20 KeV X-rays from the Cygnus region

    NASA Technical Reports Server (NTRS)

    Bleach, R. D.

    1972-01-01

    Two rocket-borne proportional counters, each with 650 sq cm net area and 1.8 x 7.1 deg FWHM rectangular mechanical collimation, surveyed the Cygnus region in the 2 to 20 keV energy range on two occasions. X-ray spectral data gathered on 21 September 1970 from discrete sources in Cygnus are presented. The data from Cyg X-1, Cyg X-2, and Cyg X-3 have sufficient statistical significance to indicate mutually exclusive spectral forms for the three. Upper limits are presented for X-ray intensities above 2 keV for Cyg X-4 and Cyg X-5 (Cygnus loop). A search was made on 9 August 1971 for a diffuse component of X-rays 1.5 keV associated with an interarm region of the galaxy at galactic longitudes in the vicinity of 60 degrees. A statistically significant excess associated with a narrow disk component was detected. Several possible emission models are discussed, with the most likely candidate being a population of unresolvable low luminosity discrete sources.

  18. Salicylate-induced changes in spontaneous activity of single units in the inferior colliculus of the guinea pig.

    PubMed

    Jastreboff, P J; Sasaki, C T

    1986-11-01

    Changes in spontaneous neuronal activity of the inferior colliculus in albino guinea pigs before and after administration of sodium salicylate were analyzed. Animals were anesthetized with pentobarbital, and two microelectrodes separated by a few hundred microns were driven through the inferior colliculus. After collecting a sufficiently large sample of cells, sodium salicylate (450 mg/kg) was injected i.p. and recordings again made 2 h after the injection. Comparison of spontaneous activity recorded before and after salicylate administration revealed highly statistically significant differences (p less than 0.001). After salicylate, the mean rate of the cell population increased from 29 to 83 Hz and the median from 26 to 74 Hz. Control experiments in which sodium salicylate was replaced by saline injection revealed no statistically significant differences in cell discharges. Recordings made during the same experiments from lobulus V of the cerebellar vermis revealed no changes in response to salicylate. The observed changes in single-unit activity due to salicylate administration may represent the first systematic evidence of a tinnituslike phenomenon in animals.

  19. Quality Assurance in Clinical Chemistry: A Touch of Statistics and A Lot of Common Sense

    PubMed Central

    2016-01-01

    Summary Working in laboratories of clinical chemistry, we risk feeling that our personal contribution to quality is small and that statistical models and manufacturers play the major roles. It is seldom sufficiently acknowledged that personal knowledge, skills and common sense are crucial for quality assurance in the interest of patients. The employees, environment and procedures inherent to the laboratory including its interactions with the clients are crucial for the overall result of the total testing chain. As the measurement systems, reagents and procedures are gradually improved, work on the preanalytical, postanalytical and clinical phases is likely to pay the most substantial dividends in accomplishing further quality improvements. This means changing attitudes and behaviour, especially of the users of the laboratory. It requires understanding people and how to engage them in joint improvement processes. We need to use our knowledge and common sense expanded with new skills e.g. from the humanities, management, business and change sciences in order to bring this about together with the users of the laboratory. PMID:28356868

  20. Correlations between human mobility and social interaction reveal general activity patterns.

    PubMed

    Mollgaard, Anders; Lehmann, Sune; Mathiesen, Joachim

    2017-01-01

    A day in the life of a person involves a broad range of activities which are common across many people. Going beyond diurnal cycles, a central question is: to what extent do individuals act according to patterns shared across an entire population? Here we investigate the interplay between different activity types, namely communication, motion, and physical proximity by analyzing data collected from smartphones distributed among 638 individuals. We explore two central questions: Which underlying principles govern the formation of the activity patterns? Are the patterns specific to each individual or shared across the entire population? We find that statistics of the entire population allows us to successfully predict 71% of the activity and 85% of the inactivity involved in communication, mobility, and physical proximity. Surprisingly, individual level statistics only result in marginally better predictions, indicating that a majority of activity patterns are shared across our sample population. Finally, we predict short-term activity patterns using a generalized linear model, which suggests that a simple linear description might be sufficient to explain a wide range of actions, whether they be of social or of physical character.
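
    As an illustration of the population-level generalized linear model mentioned in the closing sentence, the sketch below fits a logistic regression to synthetic activity data; the features, data, and effect sizes are all invented for the example.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical design: predict whether a person is active in the next time
      # bin from the hour of day and the current activity state, pooling all
      # individuals into one "population-level" model.
      rng = np.random.default_rng(0)
      hours = rng.integers(0, 24, size=5000)
      current = rng.integers(0, 2, size=5000)
      p_active = np.clip(0.15 + 0.5 * current + 0.2 * ((hours > 7) & (hours < 23)), 0, 1)
      next_active = rng.random(5000) < p_active

      X = np.column_stack([np.sin(2 * np.pi * hours / 24),   # smooth encoding of
                           np.cos(2 * np.pi * hours / 24),   # the diurnal cycle
                           current])
      model = LogisticRegression().fit(X, next_active)
      print("training accuracy:", model.score(X, next_active))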

  1. Fast and Accurate Protein False Discovery Rates on Large-Scale Proteomics Data Sets with Percolator 3.0.

    PubMed

    The, Matthew; MacCoss, Michael J; Noble, William S; Käll, Lukas

    2016-11-01

    Percolator is a widely used software tool that increases yield in shotgun proteomics experiments and assigns reliable statistical confidence measures, such as q values and posterior error probabilities, to peptides and peptide-spectrum matches (PSMs) from such experiments. Percolator's processing speed has been sufficient for typical data sets consisting of hundreds of thousands of PSMs. With our new scalable approach, we can now also analyze millions of PSMs in a matter of minutes on a commodity computer. Furthermore, with the increasing awareness for the need for reliable statistics on the protein level, we compared several easy-to-understand protein inference methods and implemented the best-performing method-grouping proteins by their corresponding sets of theoretical peptides and then considering only the best-scoring peptide for each protein-in the Percolator package. We used Percolator 3.0 to analyze the data from a recent study of the draft human proteome containing 25 million spectra (PM:24870542). The source code and Ubuntu, Windows, MacOS, and Fedora binary packages are available from http://percolator.ms/ under an Apache 2.0 license.

  2. Long-Term file activity patterns in a UNIX workstation environment

    NASA Technical Reports Server (NTRS)

    Gibson, Timothy J.; Miller, Ethan L.

    1998-01-01

    As mass storage technology becomes more affordable for sites smaller than supercomputer centers, understanding their file access patterns becomes crucial for developing systems to store rarely used data on tertiary storage devices such as tapes and optical disks. This paper presents a new way to collect and analyze file system statistics for UNIX-based file systems. The collection system runs in user-space and requires no modification of the operating system kernel. The statistics package provides details about file system operations at the file level: creations, deletions, modifications, etc. The paper analyzes four months of file system activity on a university file system. The results confirm previously published results gathered from supercomputer file systems, but differ in several important areas. Files in this study were considerably smaller than those at supercomputer centers, and they were accessed less frequently. Additionally, the long-term creation rate on workstation file systems is sufficiently low so that all data more than a day old could be cheaply saved on a mass storage device, allowing the integration of time travel into every file system.

  3. Size Matters: What Are the Characteristic Source Areas for Urban Planning Strategies?

    PubMed Central

    Fan, Chao; Myint, Soe W.; Wang, Chenghao

    2016-01-01

    Urban environmental measurements and observational statistics should reflect the properties generated over an adjacent area of adequate length where homogeneity is usually assumed. The determination of this characteristic source area that gives sufficient representation of the horizontal coverage of a sensing instrument or the fetch of transported quantities is of critical importance to guide the design and implementation of urban landscape planning strategies. In this study, we aim to unify two different methods for estimating source areas, viz. the statistical correlation method commonly used by geographers for landscape fragmentation and the mechanistic footprint model used by meteorologists for atmospheric measurements. Good agreement was found in the intercomparison of the source areas estimated by the two methods, based on 2-m air temperature measurements collected using a network of weather stations. The results can be extended to shed new light on urban planning strategies, such as the use of urban vegetation for heat mitigation. In general, a sizable patch of landscape is required in order to play an effective role in regulating the local environment, with its size proportional to the height at which stakeholders’ interests are mainly concerned. PMID:27832111

  4. Comparison of RF spectrum prediction methods for dynamic spectrum access

    NASA Astrophysics Data System (ADS)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural Networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.
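
    A plain first-order Markov-chain baseline, simpler than the HMM and neural-network models compared in the paper, already conveys the flavour of channel-state prediction. The sketch below is an illustration rather than the authors' implementation: it estimates a two-state transition matrix from a binary occupancy sequence (such as one obtained by energy detection) and predicts the most probable next state.

      import numpy as np

      def fit_transition_matrix(occupancy):
          """Estimate a 2x2 (idle/occupied) Markov transition matrix from a
          binary occupancy sequence, with Laplace smoothing to avoid zeros."""
          occupancy = np.asarray(occupancy, dtype=int)
          counts = np.ones((2, 2))                     # Laplace prior
          for a, b in zip(occupancy[:-1], occupancy[1:]):
              counts[a, b] += 1
          return counts / counts.sum(axis=1, keepdims=True)

      def predict_next_state(occupancy, transition):
          """One-step prediction: most probable successor of the current state."""
          return int(np.argmax(transition[occupancy[-1]]))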

  5. Assessing causal mechanistic interactions: a peril ratio index of synergy based on multiplicativity.

    PubMed

    Lee, Wen-Chung

    2013-01-01

    The assessments of interactions in epidemiology have traditionally been based on risk-ratio, odds-ratio or rate-ratio multiplicativity. However, many epidemiologists fail to recognize that this is mainly for statistical convenience and often will misinterpret a statistically significant interaction as a genuine mechanistic interaction. The author adopts an alternative metric system for risk, the 'peril'. A peril is an exponentiated cumulative rate, or simply, the inverse of a survival (risk complement) or one plus an odds. The author proposes a new index based on multiplicativity of peril ratios, the 'peril ratio index of synergy based on multiplicativity' (PRISM). Under the assumption of no redundancy, PRISM can be used to assess synergisms in the sufficient-cause sense, i.e., causal co-actions or causal mechanistic interactions. It has a less stringent threshold to detect a synergy as compared to a previous index, the 'relative excess risk due to interaction'. Using the new PRISM criterion, many situations in which there is no evidence of interaction as judged by the traditional indices in fact correspond to bona fide positive or negative synergisms.
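
    A short sketch of how the peril and a multiplicativity-based synergy index could be computed from stratum-specific risks. The peril follows the definition given in the abstract; the exact form of the PRISM index written here (joint-exposure peril ratio divided by the product of the single-exposure peril ratios) is an assumption for illustration and should be checked against the original paper.

      def peril(risk):
          """Peril as defined in the abstract: the inverse of the risk
          complement, equivalently exp(cumulative rate) or one plus the odds."""
          return 1.0 / (1.0 - risk)

      def prism_index(risk_00, risk_10, risk_01, risk_11):
          """Assumed form of a peril-ratio synergy index: values above 1 would
          suggest positive synergy under multiplicativity of peril ratios."""
          pr10 = peril(risk_10) / peril(risk_00)   # exposure A only
          pr01 = peril(risk_01) / peril(risk_00)   # exposure B only
          pr11 = peril(risk_11) / peril(risk_00)   # both exposures
          return pr11 / (pr10 * pr01)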

  6. Assessing Causal Mechanistic Interactions: A Peril Ratio Index of Synergy Based on Multiplicativity

    PubMed Central

    Lee, Wen-Chung

    2013-01-01

    The assessments of interactions in epidemiology have traditionally been based on risk-ratio, odds-ratio or rate-ratio multiplicativity. However, many epidemiologists fail to recognize that this is mainly for statistical convenience and often will misinterpret a statistically significant interaction as a genuine mechanistic interaction. The author adopts an alternative metric system for risk, the ‘peril’. A peril is an exponentiated cumulative rate, or simply, the inverse of a survival (risk complement) or one plus an odds. The author proposes a new index based on multiplicativity of peril ratios, the ‘peril ratio index of synergy based on multiplicativity’ (PRISM). Under the assumption of no redundancy, PRISM can be used to assess synergisms in the sufficient-cause sense, i.e., causal co-actions or causal mechanistic interactions. It has a less stringent threshold to detect a synergy as compared to a previous index, the ‘relative excess risk due to interaction’. Using the new PRISM criterion, many situations in which there is no evidence of interaction as judged by the traditional indices in fact correspond to bona fide positive or negative synergisms. PMID:23826299

  7. Child-Computer Interaction at the Beginner Stage of Music Learning: Effects of Reflexive Interaction on Children's Musical Improvisation.

    PubMed

    Addessi, Anna Rita; Anelli, Filomena; Benghi, Diber; Friberg, Anders

    2017-01-01

    In this article children's musical improvisation is investigated through the "reflexive interaction" paradigm. We used a particular system, the MIROR-Impro, implemented in the framework of the MIROR project (EC-FP7), which is able to reply to the child playing a keyboard by a "reflexive" output, mirroring (with repetitions and variations) her/his inputs. The study was conducted in a public primary school, with 47 children, aged 6-7. The experimental design used the convergence procedure, based on three sample groups allowing us to verify if the reflexive interaction using the MIROR-Impro is necessary and/or sufficient to improve the children's abilities to improvise. The following conditions were used as independent variables: to play only the keyboard, the keyboard with the MIROR-Impro but with not-reflexive reply, the keyboard with the MIROR-Impro with reflexive reply. As dependent variables we estimated the children's ability to improvise in solos, and in duets. Each child carried out a training program consisting of 5 weekly individual 12 min sessions. The control group played the complete package of independent variables; Experimental Group 1 played the keyboard and the keyboard with the MIROR-Impro with not-reflexive reply; Experimental Group 2 played only the keyboard with the reflexive system. One week later, the children were asked to improvise a musical piece on the keyboard alone (Solo task), and in pairs with a friend (Duet task). Three independent judges assessed the Solo and the Duet tasks by means of a grid based on the TAI-Test for Ability to Improvise rating scale. The EG2, which trained only with the reflexive system, reached the highest average results, and the difference from EG1, which did not use the reflexive system, is statistically significant when the children improvise in a duet. The results indicate that in the sample of participants the reflexive interaction alone could be sufficient to increase the improvisational skills, and necessary when they improvise in duets. However, these results are in general not statistically significant. The correlation between Reflexive Interaction and the ability to improvise is statistically significant. The results are discussed in the light of the recent literature in neuroscience and music education.

  8. Circulating Vitamin D and Colorectal Cancer Risk: An International Pooling Project of 17 Cohorts.

    PubMed

    McCullough, Marjorie L; Zoltick, Emilie S; Weinstein, Stephanie J; Fedirko, Veronika; Wang, Molin; Cook, Nancy R; Eliassen, A Heather; Zeleniuch-Jacquotte, Anne; Agnoli, Claudia; Albanes, Demetrius; Barnett, Matthew J; Buring, Julie E; Campbell, Peter T; Clendenen, Tess V; Freedman, Neal D; Gapstur, Susan M; Giovannucci, Edward L; Goodman, Gary G; Haiman, Christopher A; Ho, Gloria Y F; Horst, Ronald L; Hou, Tao; Huang, Wen-Yi; Jenab, Mazda; Jones, Michael E; Joshu, Corinne E; Krogh, Vittorio; Lee, I-Min; Lee, Jung Eun; Männistö, Satu; Le Marchand, Loic; Mondul, Alison M; Neuhouser, Marian L; Platz, Elizabeth A; Purdue, Mark P; Riboli, Elio; Robsahm, Trude Eid; Rohan, Thomas E; Sasazuki, Shizuka; Schoemaker, Minouk J; Sieri, Sabina; Stampfer, Meir J; Swerdlow, Anthony J; Thomson, Cynthia A; Tretli, Steinar; Tsugane, Schoichiro; Ursin, Giske; Visvanathan, Kala; White, Kami K; Wu, Kana; Yaun, Shiaw-Shyuan; Zhang, Xuehong; Willett, Walter C; Gail, Mitchel H; Ziegler, Regina G; Smith-Warner, Stephanie A

    2018-06-14

    Experimental and epidemiological studies suggest a protective role for vitamin D in colorectal carcinogenesis, but evidence is inconclusive. Circulating 25-hydroxyvitamin D (25(OH)D) concentrations that minimize risk are unknown. Current Institute of Medicine (IOM) vitamin D guidance is based solely on bone health. We pooled participant-level data from 17 cohorts, comprising 5706 colorectal cancer case participants and 7107 control participants with a wide range of circulating 25(OH)D concentrations. For 30.1% of participants, 25(OH)D was newly measured. Previously measured 25(OH)D was calibrated to the same assay to permit estimating risk by absolute concentrations. Study-specific relative risks (RRs) for prediagnostic season-standardized 25(OH)D concentrations were calculated using conditional logistic regression and pooled using random effects models. Compared with the lower range of sufficiency for bone health (50-<62.5 nmol/L), deficient 25(OH)D (<30 nmol/L) was associated with 31% higher colorectal cancer risk (RR = 1.31, 95% confidence interval [CI] = 1.05 to 1.62); 25(OH)D above sufficiency (75-<87.5 and 87.5-<100 nmol/L) was associated with 19% (RR = 0.81, 95% CI = 0.67 to 0.99) and 27% (RR = 0.73, 95% CI = 0.59 to 0.91) lower risk, respectively. At 25(OH)D of 100 nmol/L or greater, risk did not continue to decline and was not statistically significantly reduced (RR = 0.91, 95% CI = 0.67 to 1.24, 3.5% of control participants). Associations were minimally affected when adjusting for body mass index, physical activity, or other risk factors. For each 25 nmol/L increment in circulating 25(OH)D, colorectal cancer risk was 19% lower in women (RR = 0.81, 95% CI = 0.75 to 0.87) and 7% lower in men (RR = 0.93, 95% CI = 0.86 to 1.00) (two-sided Pheterogeneity by sex = .008). Associations were inverse in all subgroups, including colorectal subsite, geographic region, and season of blood collection. Higher circulating 25(OH)D was related to a statistically significant, substantially lower colorectal cancer risk in women and non-statistically significant lower risk in men. Optimal 25(OH)D concentrations for colorectal cancer risk reduction, 75-100 nmol/L, appear higher than current IOM recommendations.

  9. The Scientific Method, Diagnostic Bayes, and How to Detect Epistemic Errors

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2015-12-01

    In the past decades, Bayesian methods have found widespread application and use in environmental systems modeling. Bayes' theorem states that the posterior probability P(H|\hat{D}) of a hypothesis H is proportional to the product of the prior probability P(H) of this hypothesis and the likelihood L(H|\hat{D}) of the same hypothesis given the new/incoming observations \hat{D}. In science and engineering, H often constitutes some numerical simulation model, D = F(x,·), which summarizes, using algebraic, empirical, and differential equations, state variables and fluxes, all our theoretical and/or practical knowledge of the system of interest, and x are the d unknown parameters which are subject to inference using some data \hat{D} of the observed system response. The Bayesian approach is intimately related to the scientific method and uses an iterative cycle of hypothesis formulation (model), experimentation and data collection, and theory/hypothesis refinement to elucidate the rules that govern the natural world. Unfortunately, model refinement has proven to be very difficult, in large part because of the poor diagnostic power of residual-based likelihood functions (Gupta et al., 2008). This has inspired Vrugt and Sadegh (2013) to advocate the use of 'likelihood-free' inference using approximate Bayesian computation (ABC). This approach uses one or more summary statistics S(\hat{D}) of the original data \hat{D}, designed ideally to be sensitive only to one particular process in the model. Any mismatch between the observed and simulated summary metrics is then easily linked to a specific model component. A recurrent issue with the application of ABC is self-sufficiency of the summary statistics. In theory, S(·) should contain as much information as the original data itself, yet complex systems rarely admit sufficient statistics. In this article, we propose to combine the ideas of ABC and regular Bayesian inference to guarantee that no information is lost in diagnostic model evaluation. This hybrid approach, coined diagnostic Bayes, uses the summary metrics as the prior distribution and the original data in the likelihood function, or P(x|\hat{D}) ∝ P(x|S(\hat{D})) L(x|\hat{D}). A case study illustrates the ability of the proposed methodology to diagnose epistemic errors and provide guidance on model refinement.
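
    The hybrid posterior reduces to a one-line evaluation once a summary-based prior and a full-data likelihood are available; in the sketch below both callables are placeholders for problem-specific choices and are not taken from the paper.

      def log_diagnostic_posterior(x, data, log_summary_prior, log_likelihood):
          """Unnormalised log posterior of the 'diagnostic Bayes' form
          log P(x|D) = log P(x|S(D)) + log L(x|D) + const, where the posterior
          conditioned on the summary metrics S(D) acts as the prior and the
          original data D enter through the likelihood."""
          return log_summary_prior(x) + log_likelihood(x, data)

    In practice this function would simply be handed to a standard MCMC sampler in place of the usual log-posterior.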

  10. Child–Computer Interaction at the Beginner Stage of Music Learning: Effects of Reflexive Interaction on Children’s Musical Improvisation

    PubMed Central

    Addessi, Anna Rita; Anelli, Filomena; Benghi, Diber; Friberg, Anders

    2017-01-01

    In this article children’s musical improvisation is investigated through the “reflexive interaction” paradigm. We used a particular system, the MIROR-Impro, implemented in the framework of the MIROR project (EC-FP7), which is able to reply to the child playing a keyboard by a “reflexive” output, mirroring (with repetitions and variations) her/his inputs. The study was conducted in a public primary school, with 47 children, aged 6–7. The experimental design used the convergence procedure, based on three sample groups allowing us to verify if the reflexive interaction using the MIROR-Impro is necessary and/or sufficient to improve the children’s abilities to improvise. The following conditions were used as independent variables: to play only the keyboard, the keyboard with the MIROR-Impro but with not-reflexive reply, the keyboard with the MIROR-Impro with reflexive reply. As dependent variables we estimated the children’s ability to improvise in solos, and in duets. Each child carried out a training program consisting of 5 weekly individual 12 min sessions. The control group played the complete package of independent variables; Experimental Group 1 played the keyboard and the keyboard with the MIROR-Impro with not-reflexive reply; Experimental Group 2 played only the keyboard with the reflexive system. One week later, the children were asked to improvise a musical piece on the keyboard alone (Solo task), and in pairs with a friend (Duet task). Three independent judges assessed the Solo and the Duet tasks by means of a grid based on the TAI-Test for Ability to Improvise rating scale. The EG2, which trained only with the reflexive system, reached the highest average results, and the difference from EG1, which did not use the reflexive system, is statistically significant when the children improvise in a duet. The results indicate that in the sample of participants the reflexive interaction alone could be sufficient to increase the improvisational skills, and necessary when they improvise in duets. However, these results are in general not statistically significant. The correlation between Reflexive Interaction and the ability to improvise is statistically significant. The results are discussed in the light of the recent literature in neuroscience and music education. PMID:28184205

  11. Long Working Hours and Subsequent Use of Psychotropic Medicine: A Study Protocol

    PubMed Central

    Albertsen, Karen

    2014-01-01

    Background Mental ill health is the most frequent cause of long-term sickness absence and disability retirement in Denmark. Some instances of mental ill health might be due to long working hours. A recent large cross-sectional study of a general working population in Norway found that not only “very much overtime”, but also “moderate overtime” (41-48 work hours/week) was significantly associated with increased levels of both anxiety and depression. These findings have not been sufficiently confirmed in longitudinal studies. Objective The objective of the study is to give a detailed plan for a research project aimed at investigating the possibility of a prospective association between weekly working hours and use of psychotropic medicine in the general working population of Denmark. Methods People from the general working population of Denmark have been surveyed, at various occasions in the time period 1995-2010, and interviewed about their work environment. The present study will link interview data from these surveys to national registers covering all inhabitants of Denmark. The participants will be followed for the first occurrence of redeemed prescriptions for psychotropic medicine. Poisson regression will be used to analyze incidence rates as a function of weekly working hours (32-40; 41-48; > 48 hours/week). The analyses will be controlled for gender, age, sample, shift work, and socioeconomic status. According to our feasibility studies, the statistical power is sufficient and the exposure is stable enough to make the study worth the while. Results The publication of the present study protocol ends the design phase of the project. In the next phase, the questionnaire data will be forwarded to Statistics Denmark where they will be linked to data on deaths, migrations, socioeconomic status, and redeemed prescriptions for psychotropic medication. We expect the analysis to be completed by the end of 2014 and the results to be published mid 2015. Conclusions The proposed project will be free from hindsight bias, since all hypotheses and statistical models are completely defined, peer-reviewed, and published before we link the exposure data to the outcome data. The results of the project will indicate to what extent and in what direction the national burden of mental ill health in Denmark has been influenced by long working hours. PMID:25239125
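
    The planned Poisson regression of incidence rates on weekly working hours with a person-time offset might look as follows in Python with statsmodels; the data frame is invented and the covariate coding is an assumption based on the protocol text.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Hypothetical aggregated data: first redeemed prescriptions (events) and
      # person-years at risk per weekly-working-hours category and sex.
      df = pd.DataFrame({
          "hours":        ["32-40", "41-48", ">48"] * 2,
          "sex":          ["F"] * 3 + ["M"] * 3,
          "events":       [120, 45, 30, 90, 40, 28],
          "person_years": [15000, 5200, 3100, 14000, 5600, 3500],
      })

      # Incidence rates as a function of working hours, adjusted for sex, with a
      # log person-years offset (a sketch of the planned analysis, not its code).
      model = smf.glm("events ~ C(hours, Treatment(reference='32-40')) + sex",
                      data=df,
                      family=sm.families.Poisson(),
                      offset=np.log(df["person_years"])).fit()
      print(model.summary())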

  12. Long working hours and subsequent use of psychotropic medicine: a study protocol.

    PubMed

    Hannerz, Harald; Albertsen, Karen

    2014-09-19

    Mental ill health is the most frequent cause of long-term sickness absence and disability retirement in Denmark. Some instances of mental ill health might be due to long working hours. A recent large cross-sectional study of a general working population in Norway found that not only "very much overtime", but also "moderate overtime" (41-48 work hours/week) was significantly associated with increased levels of both anxiety and depression. These findings have not been sufficiently confirmed in longitudinal studies. The objective of the study is to give a detailed plan for a research project aimed at investigating the possibility of a prospective association between weekly working hours and use of psychotropic medicine in the general working population of Denmark. People from the general working population of Denmark have been surveyed, at various occasions in the time period 1995-2010, and interviewed about their work environment. The present study will link interview data from these surveys to national registers covering all inhabitants of Denmark. The participants will be followed for the first occurrence of redeemed prescriptions for psychotropic medicine. Poisson regression will be used to analyze incidence rates as a function of weekly working hours (32-40; 41-48; > 48 hours/week). The analyses will be controlled for gender, age, sample, shift work, and socioeconomic status. According to our feasibility studies, the statistical power is sufficient and the exposure is stable enough to make the study worth the while. The publication of the present study protocol ends the design phase of the project. In the next phase, the questionnaire data will be forwarded to Statistics Denmark where they will be linked to data on deaths, migrations, socioeconomic status, and redeemed prescriptions for psychotropic medication. We expect the analysis to be completed by the end of 2014 and the results to be published mid 2015. The proposed project will be free from hindsight bias, since all hypotheses and statistical models are completely defined, peer-reviewed, and published before we link the exposure data to the outcome data. The results of the project will indicate to what extent and in what direction the national burden of mental ill health in Denmark has been influenced by long working hours.

  13. Evaluation of adding item-response theory analysis for evaluation of the European Board of Ophthalmology Diploma examination.

    PubMed

    Mathysen, Danny G P; Aclimandos, Wagih; Roelant, Ella; Wouters, Kristien; Creuzot-Garcher, Catherine; Ringens, Peter J; Hawlina, Marko; Tassignon, Marie-José

    2013-11-01

    To investigate whether introduction of item-response theory (IRT) analysis, in parallel to the 'traditional' statistical analysis methods available for performance evaluation of multiple T/F items as used in the European Board of Ophthalmology Diploma (EBOD) examination, has proved beneficial, and secondly, to study whether the overall assessment performance of the current written part of EBOD is sufficiently high (KR-20 ≥ 0.90) to be kept as examination format in future EBOD editions. 'Traditional' analysis methods for individual MCQ item performance comprise P-statistics, Rit-statistics and item discrimination, while overall reliability is evaluated through KR-20 for multiple T/F items. The additional set of statistical analysis methods for the evaluation of EBOD comprises mainly IRT analysis. These analysis techniques are used to monitor whether the introduction of negative marking for incorrect answers (since EBOD 2010) has a positive influence on the statistical performance of EBOD as a whole and its individual test items in particular. Item-response theory analysis demonstrated that item performance parameters should not be evaluated individually, but should be related to one another. Before the introduction of negative marking, the overall EBOD reliability (KR-20) was good though with room for improvement (EBOD 2008: 0.81; EBOD 2009: 0.78). After the introduction of negative marking, the overall reliability of EBOD improved significantly (EBOD 2010: 0.92; EBOD 2011: 0.91; EBOD 2012: 0.91). Although many statistical performance parameters are available to evaluate individual items, our study demonstrates that the overall reliability assessment remains the only crucial parameter to be evaluated allowing comparison. While individual item performance analysis is worthwhile to undertake as secondary analysis, drawing final conclusions seems to be more difficult. Performance parameters need to be related, as shown by IRT analysis. Therefore, IRT analysis has proved beneficial for the statistical analysis of EBOD. Introduction of negative marking has led to a significant increase in the reliability (KR-20 > 0.90), indicating that the current examination format can be kept for future EBOD examinations. © 2013 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
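
    For reference, the classical KR-20 coefficient quoted above can be computed as follows for dichotomously scored items. With negative marking the item scores are no longer strictly 0/1, so this is an illustration of the statistic itself rather than of the EBOD scoring pipeline.

      import numpy as np

      def kr20(item_scores):
          """Kuder-Richardson formula 20 for a candidates x items array of 0/1 marks."""
          x = np.asarray(item_scores, dtype=float)
          n_items = x.shape[1]
          p = x.mean(axis=0)                       # item difficulty (proportion correct)
          q = 1.0 - p
          total_var = x.sum(axis=1).var(ddof=1)    # variance of candidates' total scores
          return (n_items / (n_items - 1.0)) * (1.0 - (p * q).sum() / total_var)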

  14. Rasch analysis on OSCE data : An illustrative example.

    PubMed

    Tor, E; Steketee, C

    2011-01-01

    The Objective Structured Clinical Examination (OSCE) is a widely used tool for the assessment of clinical competence in health professional education. The goal of the OSCE is to make reproducible decisions on pass/fail status as well as students' levels of clinical competence according to their demonstrated abilities based on the scores. This paper explores the use of the polytomous Rasch model in evaluating the psychometric properties of OSCE scores through a case study. The authors analysed an OSCE data set (comprising 11 stations) for 80 fourth year medical students based on the polytomous Rasch model in an effort to answer two research questions: (1) Do the clinical tasks assessed in the 11 OSCE stations map on to a common underlying construct, namely clinical competence? (2) What other insights can Rasch analysis offer in terms of scaling, item analysis and instrument validation over and above the conventional analysis based on classical test theory? The OSCE data set has demonstrated a sufficient degree of fit to the Rasch model (χ² = 17.060, DF = 22, p = 0.76), indicating that the 11 OSCE station scores have sufficient psychometric properties to form a measure for a common underlying construct, i.e. clinical competence. Individual OSCE station scores with good fit to the Rasch model (p > 0.1 for all χ² statistics) further corroborated the characteristic of unidimensionality of the OSCE scale for clinical competence. A Person Separation Index (PSI) of 0.704 indicates a sufficient level of reliability for the OSCE scores. Other useful findings from the Rasch analysis that provide insights, over and above the analysis based on classical test theory, are also exemplified using the data set. The polytomous Rasch model provides a useful and supplementary approach to the calibration and analysis of OSCE examination data.

  15. Report on cancer risks associated with the ingestion of asbestos. DHHS Committee to Coordinate Environmental and Related Programs.

    PubMed Central

    1987-01-01

    This report is an assessment of all available literature that pertains to the potential risk of cancer associated with ingestion of asbestos. It was compiled by a working group to assist policy makers in the Department of Health and Human Services determine if adequate information was available for a definitive risk assessment on this potential problem and evaluate if the weight of evidence was sufficient to prioritize this issue for new policy recommendations. The work group considered the basis for concern over this problem, the body of toxicology experiments, the individual epidemiologic studies which have attempted to investigate this issue, and the articles that discuss components of risk assessment pertaining to the ingestion of asbestos. In the report, the work group concluded: that no direct, definitive risk assessment can be conducted at this time; that further epidemiologic investigations will be very costly and only possess sufficient statistical power to detect relatively large excesses in cancers related to asbestos ingestion; and that probably the most pertinent toxicologic experiments relate to resolving the differences in how inhaled asbestos, which is eventually swallowed, is biologically processed by humans, compared to how ingested asbestos is processed. The work group believes that the cancer risk associated with asbestos ingestion should not be perceived as one of the most pressing potential public health hazards facing the nation. However, the work group does not believe that information was sufficient to assess the level of cancer risk associated with the ingestion and therefore, this potential hazard should not be discounted, and ingestion exposure to asbestos should be eliminated whenever possible. PMID:3304998

  16. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.

    PubMed

    Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model.

  17. Cross-Participant EEG-Based Assessment of Cognitive Workload Using Multi-Path Convolutional Recurrent Neural Networks.

    PubMed

    Hefron, Ryan; Borghetti, Brett; Schubert Kabban, Christine; Christensen, James; Estepp, Justin

    2018-04-26

    Applying deep learning methods to electroencephalograph (EEG) data for cognitive state assessment has yielded improvements over previous modeling methods. However, research focused on cross-participant cognitive workload modeling using these techniques is underrepresented. We study the problem of cross-participant state estimation in a non-stimulus-locked task environment, where a trained model is used to make workload estimates on a new participant who is not represented in the training set. Using experimental data from the Multi-Attribute Task Battery (MATB) environment, a variety of deep neural network models are evaluated in the trade-space of computational efficiency, model accuracy, variance and temporal specificity yielding three important contributions: (1) The performance of ensembles of individually-trained models is statistically indistinguishable from group-trained methods at most sequence lengths. These ensembles can be trained for a fraction of the computational cost compared to group-trained methods and enable simpler model updates. (2) While increasing temporal sequence length improves mean accuracy, it is not sufficient to overcome distributional dissimilarities between individuals’ EEG data, as it results in statistically significant increases in cross-participant variance. (3) Compared to all other networks evaluated, a novel convolutional-recurrent model using multi-path subnetworks and bi-directional, residual recurrent layers resulted in statistically significant increases in predictive accuracy and decreases in cross-participant variance.

  18. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems

    PubMed Central

    Timmis, Jon; Qwarnstrom, Eva E.

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model. PMID:27571414

  19. A Methodology for Determining Statistical Performance Compliance for Airborne Doppler Radar with Forward-Looking Turbulence Detection Capability

    NASA Technical Reports Server (NTRS)

    Bowles, Roland L.; Buck, Bill K.

    2009-01-01

    The objective of the research developed and presented in this document was to statistically assess turbulence hazard detection performance employing airborne pulse Doppler radar systems. The FAA certification methodology for forward looking airborne turbulence radars will require estimating the probabilities of missed and false hazard indications under operational conditions. Analytical approaches must be used due to the near impossibility of obtaining sufficient statistics experimentally. This report describes an end-to-end analytical technique for estimating these probabilities for Enhanced Turbulence (E-Turb) Radar systems under noise-limited conditions, for a variety of aircraft types, as defined in FAA TSO-C134. This technique provides for one means, but not the only means, by which an applicant can demonstrate compliance to the FAA directed ATDS Working Group performance requirements. Turbulence hazard algorithms were developed that derived predictive estimates of aircraft hazards from basic radar observables. These algorithms were designed to prevent false turbulence indications while accurately predicting areas of elevated turbulence risks to aircraft, passengers, and crew; and were successfully flight tested on a NASA B757-200 and a Delta Air Lines B737-800. Application of this defined methodology for calculating the probability of missed and false hazard indications, taking into account the effect of the various algorithms used, is demonstrated for representative transport aircraft and radar performance characteristics.

  20. Statistical downscaling rainfall using artificial neural network: significantly wetter Bangkok?

    NASA Astrophysics Data System (ADS)

    Vu, Minh Tue; Aribarg, Thannob; Supratid, Siriporn; Raghavan, Srivatsan V.; Liong, Shie-Yui

    2016-11-01

    Artificial neural network (ANN) is an established technique with a flexible mathematical structure that is capable of identifying complex nonlinear relationships between input and output data. The present study utilizes ANN as a method of statistically downscaling global climate models (GCMs) during the rainy season at meteorological site locations in Bangkok, Thailand. The study illustrates the applications of the feed forward back propagation using large-scale predictor variables derived from both the ERA-Interim reanalyses data and present day/future GCM data. The predictors are first selected over different grid boxes surrounding Bangkok region and then screened by using principal component analysis (PCA) to filter the best correlated predictors for ANN training. The reanalyses downscaled results of the present day climate show good agreement against station precipitation with a correlation coefficient of 0.8 and a Nash-Sutcliffe efficiency of 0.65. The final downscaled results for four GCMs show an increasing trend of precipitation for rainy season over Bangkok by the end of the twenty-first century. The extreme values of precipitation determined using statistical indices show strong increases of wetness. These findings will be useful for policy makers in pondering adaptation measures due to flooding such as whether the current drainage network system is sufficient to meet the changing climate and to plan for a range of related adaptation/mitigation measures.
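
    A compact sketch of the PCA-screening-plus-feed-forward-network workflow described above, written with scikit-learn on synthetic data; the predictor count, number of retained components, network size, and data are assumptions for illustration only.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Hypothetical arrays: rows are rainy-season days, columns are large-scale
      # predictors pooled from grid boxes around the region; y is station rainfall.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(3000, 40))
      y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=3000)

      # Screen predictors with PCA, then train a feed-forward network, loosely
      # mirroring the downscaling workflow described in the abstract.
      model = make_pipeline(StandardScaler(),
                            PCA(n_components=10),
                            MLPRegressor(hidden_layer_sizes=(16,),
                                         max_iter=2000, random_state=0))
      model.fit(X, y)
      print("in-sample R^2:", model.score(X, y))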

  1. Peripheral vascular effects on auscultatory blood pressure measurement.

    PubMed

    Rabbany, S Y; Drzewiecki, G M; Noordergraaf, A

    1993-01-01

    Experiments were conducted to examine the accuracy of the conventional auscultatory method of blood pressure measurement. The influence of the physiologic state of the vascular system in the forearm distal to the site of Korotkoff sound recording and its impact on the precision of the measured blood pressure is discussed. The peripheral resistance in the arm distal to the cuff was changed noninvasively by heating and cooling effects and by induction of reactive hyperemia. All interventions were preceded by an investigation of their effect on central blood pressure to distinguish local effects from changes in central blood pressure. These interventions were sufficiently moderate to make their effect on central blood pressure, recorded in the other arm, statistically insignificant (i.e., changes in systolic [p < 0.3] and diastolic [p < 0.02]). Nevertheless, such alterations were found to modify the amplitude of the Korotkoff sound, which can manifest itself as an apparent change in arterial blood pressure that is readily discerned by the human ear. The increase in diastolic pressure for the cooling experiments was statistically significant (p < 0.001). Moreover, both measured systolic (p < 0.004) and diastolic (p < 0.001) pressure decreases during the reactive hyperemia experiments were statistically significant. The findings demonstrate that alteration in vascular state generates perplexing changes in blood pressure, hence confirming experimental observations by earlier investigators as well as predictions by our model studies.

  2. Cross-Participant EEG-Based Assessment of Cognitive Workload Using Multi-Path Convolutional Recurrent Neural Networks

    PubMed Central

    Hefron, Ryan; Borghetti, Brett; Schubert Kabban, Christine; Christensen, James; Estepp, Justin

    2018-01-01

    Applying deep learning methods to electroencephalograph (EEG) data for cognitive state assessment has yielded improvements over previous modeling methods. However, research focused on cross-participant cognitive workload modeling using these techniques is underrepresented. We study the problem of cross-participant state estimation in a non-stimulus-locked task environment, where a trained model is used to make workload estimates on a new participant who is not represented in the training set. Using experimental data from the Multi-Attribute Task Battery (MATB) environment, a variety of deep neural network models are evaluated in the trade-space of computational efficiency, model accuracy, variance and temporal specificity yielding three important contributions: (1) The performance of ensembles of individually-trained models is statistically indistinguishable from group-trained methods at most sequence lengths. These ensembles can be trained for a fraction of the computational cost compared to group-trained methods and enable simpler model updates. (2) While increasing temporal sequence length improves mean accuracy, it is not sufficient to overcome distributional dissimilarities between individuals’ EEG data, as it results in statistically significant increases in cross-participant variance. (3) Compared to all other networks evaluated, a novel convolutional-recurrent model using multi-path subnetworks and bi-directional, residual recurrent layers resulted in statistically significant increases in predictive accuracy and decreases in cross-participant variance. PMID:29701668
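
    A minimal PyTorch sketch of a multi-path convolutional-recurrent architecture in the spirit described above (the channel count, kernel widths, GRU sizes and classification head are illustrative assumptions, not a reproduction of the paper's exact topology): parallel temporal convolution paths with different receptive fields feed bidirectional recurrent layers joined by a residual connection.

```python
import torch
import torch.nn as nn

class MultiPathConvRecurrent(nn.Module):
    """Parallel temporal conv paths + bidirectional, residual recurrent layers."""
    def __init__(self, n_channels=64, n_classes=2, hidden=64):
        super().__init__()
        # three convolution paths with different temporal receptive fields
        self.paths = nn.ModuleList([
            nn.Sequential(nn.Conv1d(n_channels, 32, k, padding=k // 2), nn.ReLU())
            for k in (3, 7, 15)
        ])
        self.rnn1 = nn.GRU(3 * 32, hidden, batch_first=True, bidirectional=True)
        self.rnn2 = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (batch, channels, time)
        feats = torch.cat([p(x) for p in self.paths], dim=1)
        seq = feats.permute(0, 2, 1)           # (batch, time, features)
        h1, _ = self.rnn1(seq)
        h2, _ = self.rnn2(h1)
        h = h1 + h2                            # residual connection between recurrent layers
        return self.head(h[:, -1, :])          # estimate from the last time step

# toy forward pass: batch of 8 EEG windows, 64 channels, 512 samples
model = MultiPathConvRecurrent()
print(model(torch.randn(8, 64, 512)).shape)    # torch.Size([8, 2])
```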

  3. Iron status and self-perceived health, well-being, and fatigue in female university students living in New Zealand.

    PubMed

    Beck, Kathryn L; Conlon, Cathryn A; Kruger, Rozanne; Heath, Anne-Louise M; Matthys, Christophe; Coad, Jane; Stonehouse, Welma

    2012-02-01

    To determine the relationship between iron depletion and self-perceived health, well-being, and fatigue in a female university student population living in New Zealand. A total of 233 women aged 18-44 years studying at Massey University, Auckland, were included in this cross-sectional analysis. Serum ferritin (SF), hemoglobin (Hb), and C-reactive protein (CRP) were analyzed from a venipuncture blood sample. Participants completed the SF-36v2 General Health Survey (SF-36) and the Multidimensional Fatigue Symptom Inventory-Short Form (MFSI-SF) questionnaire, and anthropometric measurements (height and weight) and data on demographics, lifestyle, and medical history were obtained. Characteristics of iron-sufficient (SF ≥ 20 μg/L, Hb ≥ 120 g/L) and iron-depleted (SF < 20 μg/L, Hb ≥ 120 g/L) participants were compared, and multiple regression analyses were carried out to determine predictors of health, well-being, and fatigue using a p value of <0.01 to indicate statistical significance because multiple comparisons were being made. There were no significant differences in self-perceived health and well-being determined using the SF-36 questionnaire between women who were iron sufficient and women who were iron depleted. Although MFSI-SF physical fatigue was significantly lower in those with iron depletion (p = 0.008), it was not predicted by current iron status in a multivariate model controlling for factors expected to be associated with iron status and fatigue (p = 0.037). However, smoking, a history of suboptimal iron status, and having a current medical condition were significant (negative) predictors of MFSI-SF physical fatigue, explaining 22.5% of the variance (p < 0.001). There were no significant differences in the other measures of fatigue determined using the MFSI-SF between women who were iron sufficient and those who were iron depleted. Women with iron depletion did not differ significantly from women who were iron sufficient with regard to self-perceived health, well-being, or fatigue. Future studies investigating fatigue should control for previous diagnosis of suboptimal iron status, smoking, and presence of a medical condition.

  4. Different doses of supplemental vitamin D maintain interleukin-5 without altering skeletal muscle strength: a randomized, double-blind, placebo-controlled study in vitamin D sufficient adults

    PubMed Central

    2012-01-01

Background: Supplemental vitamin D modulates inflammatory cytokines and skeletal muscle function, but results are inconsistent. It is unknown if these inconsistencies are dependent on the supplemental dose of vitamin D. Therefore, the purpose of this study was to identify the influence of different doses of supplemental vitamin D on inflammatory cytokines and muscular strength in young adults. Methods: Men (n = 15) and women (n = 15) received a daily placebo or vitamin D supplement (200 or 4000 IU) for 28 d during the winter. Serum 25-hydroxyvitamin D (25(OH)D), cytokine concentrations and muscular (leg) strength measurements were performed prior to and during supplementation. Statistical significance was assessed with a two-way (time, treatment) analysis of variance (ANOVA) with repeated measures, followed by a Tukey's Honestly Significant Difference test for multiple pairwise comparisons. Results: Upon enrollment, 63% of the subjects were vitamin D sufficient (serum 25(OH)D ≥ 30 ng/ml). Serum 25(OH)D and interleukin (IL)-5 decreased (P < 0.05) across time in the placebo group. Supplemental vitamin D at 200 IU maintained serum 25(OH)D concentrations and increased IL-5 (P < 0.05). Supplemental vitamin D at 4000 IU increased (P < 0.05) serum 25(OH)D without altering IL-5 concentrations. Although serum 25(OH)D concentrations correlated (P < 0.05) with muscle strength, muscle strength was not changed by supplemental vitamin D. Conclusion: In young adults who were vitamin D sufficient prior to supplementation, we conclude that a low daily dose of supplemental vitamin D prevents serum 25(OH)D and IL-5 concentration decreases, and that muscular strength does not parallel the 25(OH)D increase induced by a high daily dose of supplemental vitamin D. Considering that IL-5 protects against viruses and bacterial infections, these findings could have a broad physiological importance regarding the ability of vitamin D sufficiency to mediate the immune system's protection against infection. PMID:22405472

  5. Evaluation of oral hygiene products: science is true; don't be misled by the facts.

    PubMed

    Addy, M; Moran, J M

    1997-10-01

Most people in industrialized countries use oral hygiene products. When an oral health benefit is expected, it is important that sufficient scientific evidence exist to support such claims. Ideally, data should be cumulative, derived from studies in vitro and in vivo. The data should be available to the profession for evaluation by publication in refereed scientific journals. Terms and phrases require clarification, and claims made by implication or derived by inference must be avoided. Similarity in products is not necessarily proof per se of efficacy. Studies in vitro and in vivo should follow the basic principles of scientific research. Studies must be ethical, avoid bias and be suitably controlled. A choice of controls will vary depending on whether an agent or a whole product is evaluated and the development stage of a formulation. Where appropriate, new products should be compared with products already available and used by the general public. Conformity with the guidelines for good clinical practice appears to be a useful way of validating studies and a valuable guide to the profession. Studies should be designed with sufficient power to detect statistically significant differences if these exist. However, consideration must be given to the clinical significance of statistically significant differences between formulations since these are not necessarily the same. Studies in vitro provide supportive data but extrapolation to clinical effect is difficult and even misleading, and such data should not stand alone as proof of efficacy of a product. Short-term studies in vivo provide useful information, particularly at the development stage. Ideally, however, products should be proved effective when used in the circumstances for which they are developed. Nevertheless, a variety of variables influence the outcome of home-use studies, and their influence cannot usually be calculated. Although rarely considered, the cost-benefit ratio of some oral hygiene products needs to be considered.

  6. SU-F-J-197: A Novel Intra-Beam Range Detection and Adaptation Strategy for Particle Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, M; Jiang, S; Shao, Y

    2016-06-15

Purpose: In-vivo range detection/verification is crucial in particle therapy for effective and safe delivery. The state-of-art techniques are not sufficient for in-vivo on-line range verification due to conflicts among patient dose, signal statistics and imaging time. We propose a novel intra-beam range detection and adaptation strategy for particle therapy. Methods: This strategy uses the planned mid-range spots as probing beams without adding extra radiation to patients. Such choice of probing beams ensures the Bragg peaks to remain inside the tumor even with significant range variation from the plan. It offers sufficient signal statistics for in-beam positron emission tomography (PET) due to high positron activity of therapeutic dose. The probing beam signal can be acquired and reconstructed using in-beam PET that allows for delineation of the Bragg peaks and detection of range shift with ease of detection enabled by single-layered spots. If the detected range shift is within a pre-defined tolerance, the remaining spots will be delivered as the original plan. Otherwise, a fast re-optimization using range-shifted beamlets and accounting for the probing beam dose is applied to consider the tradeoffs posed by the online anatomy. Simulated planning and delivery studies were used to demonstrate the effectiveness of the proposed techniques. Results: Simulations with online range variations due to shifts of various foreign objects into the beam path showed successful delineation of the Bragg peaks as a result of delivering probing beams. Without on-line delivery adaptation, dose distribution was significantly distorted. In contrast, delivery adaptation incorporating detected range shift recovered well the planned dose. Conclusion: The proposed intra-beam range detection and adaptation utilizing the planned mid-range spots as probing beams, which illuminate the beam range with strong and accurate PET signals, is a safe, practical, yet effective approach to address range uncertainty issues in particle therapy.
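
    A minimal sketch of the intra-beam decision logic outlined above (the helper function, tolerance value and noise model are hypothetical stand-ins, not the authors' implementation): the probing-beam PET reconstruction is replaced by a noisy synthetic range estimate, and delivery either proceeds as planned or triggers a fast re-optimization with range-shifted beamlets.

```python
import numpy as np

def detect_range_shift(planned_range_mm, rng):
    """Stand-in for the in-beam PET delineation of the probing-beam Bragg peaks."""
    true_shift = 4.0                                   # assumed online anatomy change (mm)
    measured = planned_range_mm + true_shift + rng.normal(0.0, 0.5)
    return measured - planned_range_mm

def deliver(planned_range_mm=150.0, tolerance_mm=2.0, seed=0):
    rng = np.random.default_rng(seed)
    shift = detect_range_shift(planned_range_mm, rng)  # from the probing mid-range spots
    if abs(shift) <= tolerance_mm:
        return "deliver remaining spots as planned"
    # otherwise: fast re-optimization with range-shifted beamlets, accounting
    # for the dose already delivered by the probing spots
    return f"re-optimize remaining spots for a {shift:.1f} mm range shift"

print(deliver())
```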

  7. A cross-sectional study of vitamin D levels in a large cohort of patients with rheumatic diseases.

    PubMed

    Nikiphorou, Elena; Uksila, Jaakko; Sokka, Tuulikki

    2018-03-01

    The objective of this study is to examine 25-hydroxyvitamin D [25(OH)D] (D-25) levels and associations with patient- and disease-related factors in rheumatic diseases. This is a register-based study of D-25 levels in adult patients seen at the Central Finland Hospital rheumatology clinic (January 2011-April 2015). Demographic, clinical, laboratory, and patient-reported outcomes (PROs) were collected as part of the normal infrastructure of the outpatient clinic and examined for their association with D-25 level. Statistical analysis included descriptive statistics and univariable and multivariable regression analyses adjusting for age and gender. D-25 was measured in 3203 patients (age range 15-91 years, mean 54; 68% female) with diagnoses including RA (n = 1386), unspecified arthralgia/myalgia (n = 413), and connective tissues diseases (n = 213). The overall D-25 mean (SD) level was 78 (31) and median (IQR) 75 (55, 97). At baseline, 17.8% had D-25 deficiency, and only 1.6% severe deficiency  (< 25 nmol/l); 34%/49% had sufficient/optimal D-25 levels. Higher D-25 levels were associated with older age, lower BMI, and regular exercise (all p < 0.001) among other factors. In multivariable analyses, younger age, non-white background, higher BMI, smoking, less frequent exercise (p < 0.001), and first visit to the clinic (p = 0.033) remained significantly associated with D-25 deficiency. Among those with sub-optimal D-25 levels, 64% had improved to sufficient/optimal levels after a median (IQR) of 13 (7.8, 22) months. The proportion of patients with D-25 deficiency in this study was generally low. Older patients had considerably higher D-25 levels compared to younger patients. Lower physical exercise and higher BMI were associated with higher risk of deficiency. The study supports the benefit of strategies to help minimize the risk of D-25 deficiency.

  8. The wave-tide-river delta classification revisited: Introducing the effects of Humans on delta equilibrium

    NASA Astrophysics Data System (ADS)

    Besset, M.; Anthony, E.; Sabatier, F.

    2016-12-01

The influence of physical processes on river deltas has long been identified, mainly on the basis of delta morphology. A cuspate delta is considered as wave-dominated, a delta with finger-like extensions is characterized as river-dominated, and a delta with estuarine re-entrants is considered tide-dominated (Galloway, 1975). The need for a more quantitative classification is increasingly recognized, and is achievable through quantified combinations, a good example being Syvitski and Saito (2007) wherein the joint influence of marine power - wave and tides - is compared to that of river influence. This need is further justified as deltas become more and more vulnerable. Going forward from the Syvitski and Saito (2007) approach, we confront, from a large database on 60 river deltas, the maximum potential power of waves and the tidal range (both representing marine power), and the specific stream power and river sediment supply reflecting an increasingly human-impacted river influence. The results show that 45 deltas (75%) have levels of marine power that are significantly higher than those of specific stream power. Five deltas have sufficient stream power to counterbalance marine power but a present sediment supply inadequate for them to be statistically considered as river-dominated. Six others have a sufficient sediment supply but a specific stream power that is not high enough for them to be statistically river-dominated. A major manifestation of the interplay of these parameters is accelerated delta erosion worldwide, shifting the balance towards marine power domination. Deltas currently eroding are mainly influenced by marine power (93%), and small deltas (< 300 km² of deltaic protuberance) are the most vulnerable (82%). These high levels of erosion domination, compounded by accelerated subsidence, are related to human-induced sediment supply depletion and changes in water discharge in the face of the sediment-dispersive capacity of waves and currents.

  9. Discrete Element Modeling of the Mobilization of Coarse Gravel Beds by Finer Gravel Particles

    NASA Astrophysics Data System (ADS)

    Hill, K. M.; Tan, D.

    2012-12-01

Recent research has shown that the addition of fine gravel particles to a coarse bed will mobilize the coarser bed, and that the effect is sufficiently strong that a pulse of fine gravel particles can mobilize an impacted coarser bed. Recent flume experiments have demonstrated that the degree of bed mobilization by finer particles is primarily dependent on the particle size ratio of the coarse and fine particles, rather than absolute size of either particle, provided both particles are sufficiently large. However, the mechanism behind the mobilization is not understood. It has previously been proposed that the mechanism is driven by a combination of geometric effects and hydraulic effects. For example, it has been argued that smaller particles fill in gaps along the bed, resulting in a smoother bed over which the larger particles are less likely to be disentrained and a reduced near-bed flow velocity and subsequent increased drag on protruding particles. Altered near-bed turbulence has also been cited as playing an important role. We use the discrete element method with one-way fluid-solid coupling to simulate the mobilization of a gravel bed by fine gravel particles. By independently and artificially controlling average and fluctuating velocity profiles, we systematically investigate the relative role that may be played by particle-particle interactions, average near-bed velocity profiles, and near-bed turbulence statistics. The simulations indicate that the relative importance of these mechanisms changes with the degree of mobilization of the bed. For higher bed mobility similar to bed sheets, particle-particle interactions play a significant role in an apparent rheology of the bed sheets, not unlike that observed in a dense granular flow of particles of different sizes. For conditions closer to a critical shear stress for bedload transport, the near-bed velocity profiles and turbulence statistics become increasingly important.

  10. An evaluation of the costs and consequences of Children Community Nursing teams.

    PubMed

    Hinde, Sebastian; Allgar, Victoria; Richardson, Gerry; Spiers, Gemma; Parker, Gillian; Birks, Yvonne

    2017-08-01

Recent years have seen an increasing shift towards providing care in the community, epitomised by the role of Children's Community Nursing (CCN) teams. However, there have been few attempts to use robust evaluative methods to interrogate the impact of such services. This study sought to evaluate whether the reduction in secondary care costs, resulting from the introduction of 2 CCN teams, was sufficient to offset the additional cost of commissioning. Among the potential benefits of the CCN teams is a reduction in the burden placed on secondary care through the delivery of care at home; it is this potential reduction which is evaluated in this study via a 2-part analytical method. Firstly, an interrupted time series analysis used Hospital Episode Statistics data to interrogate any change in total paediatric bed days as a result of the introduction of 2 teams. Secondly, a costing analysis compared the cost savings from any reduction in total bed days with the cost of commissioning the teams. This study used a retrospective longitudinal study design as part of the transforming children's community services trial, which was conducted between June 2012 and June 2015. A reduction in hospital activity after introduction of the 2 nursing teams was found (9634 and 8969 fewer bed days), but this did not reach statistical significance. The resultant cost saving to the National Health Service was less than the cost of employing the teams. The study represents an important first step in understanding the role of such teams as a means of providing a high quality of paediatric care in an era of limited resource. While the cost saving from released paediatric bed days was not sufficient to demonstrate cost-effectiveness, the analysis does not incorporate wider measures of health care utilisation and nonmonetary benefits resulting from the CCN teams. © 2017 John Wiley & Sons, Ltd.

  11. Quality of reporting of multivariable logistic regression models in Chinese clinical medical journals.

    PubMed

    Zhang, Ying-Ying; Zhou, Xiao-Bin; Wang, Qiu-Zhen; Zhu, Xiao-Yan

    2017-05-01

Multivariable logistic regression (MLR) has been increasingly used in Chinese clinical medical research during the past few years. However, few evaluations of the quality of the reporting strategies in these studies are available. This study aimed to evaluate the reporting quality and model accuracy of MLR used in published work, and to provide related advice for authors, readers, reviewers, and editors. A total of 316 articles published in 5 leading Chinese clinical medical journals with high impact factor from January 2010 to July 2015 were selected for evaluation. Articles were evaluated according to 12 established criteria for proper use and reporting of MLR models. Among the articles, the highest quality score was 9, the lowest 1, and the median 5 (4-5). A total of 85.1% of the articles scored below 6. No significant differences were found among these journals with respect to quality score (χ² = 6.706, P = .15). More than 50% of the articles met the following 5 criteria: complete identification of the statistical software application that was used (97.2%), calculation of the odds ratio and its confidence interval (86.4%), description of sufficient events (>10) per variable, selection of variables, and fitting procedure (78.2%, 69.3%, and 58.5%, respectively). Less than 35% of the articles reported the coding of variables (18.7%). The remaining 5 criteria were not satisfied by a sufficient number of articles: goodness-of-fit (10.1%), interactions (3.8%), checking for outliers (3.2%), collinearity (1.9%), and participation of statisticians and epidemiologists (0.3%). The criterion of conformity with linear gradients was applicable to 186 articles; however, only 7 (3.8%) mentioned or tested it. The reporting quality and model accuracy of MLR in selected articles were not satisfactory. In fact, severe deficiencies were noted. Only 1 article scored 9. We recommend that authors, readers, reviewers, and editors consider MLR models more carefully and cooperate more closely with statisticians and epidemiologists. Journals should develop statistical reporting guidelines concerning MLR.
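
    A minimal sketch, on simulated data, of two of the reporting criteria assessed above (odds ratios with confidence intervals and events per variable); the variables and effect sizes are illustrative assumptions, and statsmodels stands in for whatever software the surveyed articles used.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "smoker": rng.integers(0, 2, n),
    "bmi": rng.normal(25, 4, n),
})
# simulate a binary outcome from a known logistic model
logit = -6 + 0.06 * df["age"] + 0.8 * df["smoker"] + 0.05 * df["bmi"]
df["outcome"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["age", "smoker", "bmi"]])
fit = sm.Logit(df["outcome"], X).fit(disp=0)

# report odds ratios with 95% confidence intervals, as the criteria require
report = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI 2.5%": np.exp(fit.conf_int()[0]),
    "CI 97.5%": np.exp(fit.conf_int()[1]),
    "p": fit.pvalues,
})
print(report.round(3))

# events-per-variable check: the criteria ask for >10 events per predictor
print("events per variable:", df["outcome"].sum() / 3)
```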

  12. Topographic relationships for design rainfalls over Australia

    NASA Astrophysics Data System (ADS)

    Johnson, F.; Hutchinson, M. F.; The, C.; Beesley, C.; Green, J.

    2016-02-01

    Design rainfall statistics are the primary inputs used to assess flood risk across river catchments. These statistics normally take the form of Intensity-Duration-Frequency (IDF) curves that are derived from extreme value probability distributions fitted to observed daily, and sub-daily, rainfall data. The design rainfall relationships are often required for catchments where there are limited rainfall records, particularly catchments in remote areas with high topographic relief and hence some form of interpolation is required to provide estimates in these areas. This paper assesses the topographic dependence of rainfall extremes by using elevation-dependent thin plate smoothing splines to interpolate the mean annual maximum rainfall, for periods from one to seven days, across Australia. The analyses confirm the important impact of topography in explaining the spatial patterns of these extreme rainfall statistics. Continent-wide residual and cross validation statistics are used to demonstrate the 100-fold impact of elevation in relation to horizontal coordinates in explaining the spatial patterns, consistent with previous rainfall scaling studies and observational evidence. The impact of the complexity of the fitted spline surfaces, as defined by the number of knots, and the impact of applying variance stabilising transformations to the data, were also assessed. It was found that a relatively large number of 3570 knots, suitably chosen from 8619 gauge locations, was required to minimise the summary error statistics. Square root and log data transformations were found to deliver marginally superior continent-wide cross validation statistics, in comparison to applying no data transformation, but detailed assessments of residuals in complex high rainfall regions with high topographic relief showed that no data transformation gave superior performance in these regions. These results are consistent with the understanding that in areas with modest topographic relief, as for most of the Australian continent, extreme rainfall is closely aligned with elevation, but in areas with high topographic relief the impacts of topography on rainfall extremes are more complex. The interpolated extreme rainfall statistics, using no data transformation, have been used by the Australian Bureau of Meteorology to produce new IDF data for the Australian continent. The comprehensive methods presented for the evaluation of gridded design rainfall statistics will be useful for similar studies, in particular the importance of balancing the need for a continentally-optimum solution that maintains sufficient definition at the local scale.
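
    A minimal sketch of an elevation-dependent interpolation of an extreme-rainfall statistic, using scipy's thin-plate-spline radial basis interpolator as a stand-in for the smoothing splines used in the study; the synthetic gauges, the vertical scaling factor of 100 (mirroring the roughly 100-fold weight of elevation reported above) and the hold-out split are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
n = 600
lon = rng.uniform(115, 153, n)
lat = rng.uniform(-43, -11, n)
elev_km = rng.uniform(0.0, 1.5, n)
# stand-in for the mean annual maximum 1-day rainfall (mm) at each gauge
rmax = 40 + 30 * elev_km + 5 * np.sin(np.radians(lat)) + rng.normal(0, 3, n)

# scale the elevation coordinate so it carries far more weight than lon/lat,
# mimicking an elevation-dependent spline
pts = np.column_stack([lon, lat, 100.0 * elev_km])
train, test = slice(100, None), slice(0, 100)

spline = RBFInterpolator(pts[train], rmax[train],
                         kernel="thin_plate_spline", smoothing=1.0)

# simple hold-out check in the spirit of the cross-validation statistics
resid = spline(pts[test]) - rmax[test]
print(f"hold-out RMSE: {np.sqrt(np.mean(resid ** 2)):.2f} mm")
```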

  13. Nest substrate reflects incubation style in extant archosaurs with implications for dinosaur nesting habits.

    PubMed

    Tanaka, Kohei; Zelenitsky, Darla K; Therrien, François; Kobayashi, Yoshitsugu

    2018-03-15

    Dinosaurs thrived and reproduced in various regions worldwide, including the Arctic. In order to understand their nesting in diverse or extreme environments, the relationships between nests, nesting environments, and incubation methods in extant archosaurs were investigated. Statistical analyses reveal that species of extant covered nesters (i.e., crocodylians and megapodes) preferentially select specific sediments/substrates as a function of their nesting style and incubation heat sources. Relationships between dinosaur eggs and the sediments in which they occur reveal that hadrosaurs and some sauropods (i.e., megaloolithid eggs) built organic-rich mound nests that relied on microbial decay for incubation, whereas other sauropods (i.e., faveoloolithid eggs) built sandy in-filled hole nests that relied on solar or potentially geothermal heat for incubation. Paleogeographic distribution of mound nests and sandy in-filled hole nests in dinosaurs reveals these nest types produced sufficient incubation heat to be successful up to mid latitudes (≤47°), 10° higher than covered nesters today. However, only mound nesting and likely brooding could have produced sufficient incubation heat for nesting above the polar circle (>66°). As a result, differences in nesting styles may have placed restrictions on the reproduction of dinosaurs and their dispersal at high latitudes.

  14. Recruitment of Older Adults: Success May Be in the Details.

    PubMed

    McHenry, Judith C; Insel, Kathleen C; Einstein, Gilles O; Vidrine, Amy N; Koerner, Kari M; Morrow, Daniel G

    2015-10-01

    Describe recruitment strategies used in a randomized clinical trial of a behavioral prospective memory intervention to improve medication adherence for older adults taking antihypertensive medication. Recruitment strategies represent 4 themes: accessing an appropriate population, communication and trust-building, providing comfort and security, and expressing gratitude. Recruitment activities resulted in 276 participants with a mean age of 76.32 years, and study enrollment included 207 women, 69 men, and 54 persons representing ethnic minorities. Recruitment success was linked to cultivating relationships with community-based organizations, face-to-face contact with potential study participants, and providing service (e.g., blood pressure checks) as an access point to eligible participants. Seventy-two percent of potential participants who completed a follow-up call and met eligibility criteria were enrolled in the study. The attrition rate was 14.34%. The projected increase in the number of older adults intensifies the need to study interventions that improve health outcomes. The challenge is to recruit sufficient numbers of participants who are also representative of older adults to test these interventions. Failing to recruit a sufficient and representative sample can compromise statistical power and the generalizability of study findings. © The Author 2012. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. A canonical theory of dynamic decision-making.

    PubMed

    Fox, John; Cooper, Richard P; Glasspool, David W

    2013-01-01

    Decision-making behavior is studied in many very different fields, from medicine and economics to psychology and neuroscience, with major contributions from mathematics and statistics, computer science, AI, and other technical disciplines. However the conceptualization of what decision-making is and methods for studying it vary greatly and this has resulted in fragmentation of the field. A theory that can accommodate various perspectives may facilitate interdisciplinary working. We present such a theory in which decision-making is articulated as a set of canonical functions that are sufficiently general to accommodate diverse viewpoints, yet sufficiently precise that they can be instantiated in different ways for specific theoretical or practical purposes. The canons cover the whole decision cycle, from the framing of a decision based on the goals, beliefs, and background knowledge of the decision-maker to the formulation of decision options, establishing preferences over them, and making commitments. Commitments can lead to the initiation of new decisions and any step in the cycle can incorporate reasoning about previous decisions and the rationales for them, and lead to revising or abandoning existing commitments. The theory situates decision-making with respect to other high-level cognitive capabilities like problem solving, planning, and collaborative decision-making. The canonical approach is assessed in three domains: cognitive and neuropsychology, artificial intelligence, and decision engineering.

  16. Contribution of Glottic Insufficiency to Perceived Breathiness in Classically Trained Singers.

    PubMed

    Graham, Ellen; Angadi, Vrushali; Sloggy, Joanna; Stemple, Joseph

    2016-09-01

    Breathiness in the singing voice is problematic for classical singers. Voice students and singing teachers typically attribute breathiness to breath management issues and breathing technique. The present study sought to determine whether glottic insufficiency may also contribute to breathiness in a singer's voice. Studies have revealed a relationship between insufficient vocal fold closure and inefficiency in the speaking voice. However, the effect of insufficient vocal fold closure on vocal efficiency in singers has yet to be determined. Two groups of voice students identified with and without breathiness issues underwent aerodynamic and acoustic voice assessment as well as laryngeal stroboscopy of the vocal folds to quantify the prevalence of insufficient vocal fold closure, also known as glottic insufficiency. These assessments revealed four groups: 1) those with glottic insufficiency and no perceived voice breathiness; 2) those with glottic sufficiency and perceived voice breathiness; 3) those with glottic insufficiency and perceived breathiness; and 4) those with glottic sufficiency and no perceived breathiness. Results suggest that previously undiscovered glottal insufficiency is common in young singers, particularly women, though the correlation with identified breathiness was not statistically significant. Acoustic and aerodynamic measures including noise-to-harmonics ratio, maximum phonation time, airflow rate, subglottal pressure, and laryngeal airway resistance were most sensitive to glottic insufficiency.

  17. Structured Ordinary Least Squares: A Sufficient Dimension Reduction approach for regressions with partitioned predictors and heterogeneous units.

    PubMed

    Liu, Yang; Chiaromonte, Francesca; Li, Bing

    2017-06-01

    In many scientific and engineering fields, advanced experimental and computing technologies are producing data that are not just high dimensional, but also internally structured. For instance, statistical units may have heterogeneous origins from distinct studies or subpopulations, and features may be naturally partitioned based on experimental platforms generating them, or on information available about their roles in a given phenomenon. In a regression analysis, exploiting this known structure in the predictor dimension reduction stage that precedes modeling can be an effective way to integrate diverse data. To pursue this, we propose a novel Sufficient Dimension Reduction (SDR) approach that we call structured Ordinary Least Squares (sOLS). This combines ideas from existing SDR literature to merge reductions performed within groups of samples and/or predictors. In particular, it leads to a version of OLS for grouped predictors that requires far less computation than recently proposed groupwise SDR procedures, and provides an informal yet effective variable selection tool in these settings. We demonstrate the performance of sOLS by simulation and present a first application to genomic data. The R package "sSDR," publicly available on CRAN, includes all procedures necessary to implement the sOLS approach. © 2016, The International Biometric Society.
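
    A minimal sketch of an OLS-based sufficient dimension reduction for partitioned predictors, in the spirit of sOLS (an illustrative reconstruction, not the sSDR package): an OLS direction is computed within each predictor group, the data are projected onto those directions, and a final OLS step merges the group-wise reductions.

```python
import numpy as np

def ols_direction(X, y):
    """Slope vector Cov(X)^{-1} Cov(X, y); spans one direction of the
    central subspace under a linear conditional-mean condition."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    return np.linalg.solve(Xc.T @ Xc / len(y), Xc.T @ yc / len(y))

rng = np.random.default_rng(3)
n = 2000
groups = [slice(0, 5), slice(5, 12), slice(12, 20)]   # assumed predictor partition
X = rng.normal(size=(n, 20))
y = np.tanh(X[:, 0] - X[:, 6]) + 0.5 * X[:, 14] + 0.1 * rng.normal(size=n)

# step 1: reduction within each predictor group
group_scores = np.column_stack([X[:, g] @ ols_direction(X[:, g], y) for g in groups])
# step 2: merge the group-wise reductions with a final OLS step
final_dir = ols_direction(group_scores, y)
reduced = group_scores @ final_dir        # one-dimensional sufficient predictor
print("correlation with response:", np.corrcoef(reduced, y)[0, 1].round(2))
```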

  18. A Canonical Theory of Dynamic Decision-Making

    PubMed Central

    Fox, John; Cooper, Richard P.; Glasspool, David W.

    2012-01-01

    Decision-making behavior is studied in many very different fields, from medicine and economics to psychology and neuroscience, with major contributions from mathematics and statistics, computer science, AI, and other technical disciplines. However the conceptualization of what decision-making is and methods for studying it vary greatly and this has resulted in fragmentation of the field. A theory that can accommodate various perspectives may facilitate interdisciplinary working. We present such a theory in which decision-making is articulated as a set of canonical functions that are sufficiently general to accommodate diverse viewpoints, yet sufficiently precise that they can be instantiated in different ways for specific theoretical or practical purposes. The canons cover the whole decision cycle, from the framing of a decision based on the goals, beliefs, and background knowledge of the decision-maker to the formulation of decision options, establishing preferences over them, and making commitments. Commitments can lead to the initiation of new decisions and any step in the cycle can incorporate reasoning about previous decisions and the rationales for them, and lead to revising or abandoning existing commitments. The theory situates decision-making with respect to other high-level cognitive capabilities like problem solving, planning, and collaborative decision-making. The canonical approach is assessed in three domains: cognitive and neuropsychology, artificial intelligence, and decision engineering. PMID:23565100

  19. Use of the FDA nozzle model to illustrate validation techniques in computational fluid dynamics (CFD) simulations

    PubMed Central

    Hariharan, Prasanna; D’Souza, Gavin A.; Horner, Marc; Morrison, Tina M.; Malinauskas, Richard A.; Myers, Matthew R.

    2017-01-01

A “credible” computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing “model credibility” is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a “threshold-based” validation approach that provides well-defined acceptance criteria, which are a function of how close the simulation and experimental results are to the safety threshold, for establishing model validity. The validation criteria developed following the threshold approach are not only a function of the Comparison Error, E (the difference between experiments and simulations), but also take into account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood-contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate whether the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results (“S”) of velocity and viscous shear stress were compared with inter-laboratory experimental measurements (“D”). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student’s t-test. However, following the threshold-based approach, a Student’s t-test comparing |S-D| and |Threshold-S| showed that, relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. However, for Re = 6500, at certain locations where the shear stress is close to the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in experiments, simulations, and the threshold or by increasing the sample size for the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and for evaluating medical devices. PMID:28594889

  20. Use of the FDA nozzle model to illustrate validation techniques in computational fluid dynamics (CFD) simulations.

    PubMed

    Hariharan, Prasanna; D'Souza, Gavin A; Horner, Marc; Morrison, Tina M; Malinauskas, Richard A; Myers, Matthew R

    2017-01-01

    A "credible" computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing "model credibility" is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a "threshold-based" validation approach that provides a well-defined acceptance criteria, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing the model validity. The validation criteria developed following the threshold approach is not only a function of Comparison Error, E (which is the difference between experiments and simulations) but also takes in to account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate if the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results ("S") of velocity and viscous shear stress were compared with inter-laboratory experimental measurements ("D"). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student's t-test. However, following the threshold-based approach, a Student's t-test comparing |S-D| and |Threshold-S| showed that relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. However, for Re = 6500, at certain locations where the shear stress is close the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in experiments, simulations, and the threshold or by increasing the sample size for the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and for evaluating medical devices.

  1. Alignment of adherence and risk for HIV acquisition in a demonstration project of pre-exposure prophylaxis among HIV serodiscordant couples in Kenya and Uganda: a prospective analysis of prevention-effective adherence.

    PubMed

    Haberer, Jessica E; Kidoguchi, Lara; Heffron, Renee; Mugo, Nelly; Bukusi, Elizabeth; Katabira, Elly; Asiimwe, Stephen; Thomas, Katherine K; Celum, Connie; Baeten, Jared M

    2017-07-25

Adherence is essential for pre-exposure prophylaxis (PrEP) to protect against HIV acquisition, but PrEP use need not be life-long. PrEP is most efficient when its use is aligned with periods of risk - a concept termed prevention-effective adherence. The objective of this paper is to describe prevention-effective adherence and predictors of adherence within an open-label delivery project of integrated PrEP and antiretroviral therapy (ART) among HIV serodiscordant couples in Kenya and Uganda (the Partners Demonstration Project). We offered PrEP to HIV-uninfected participants until the partner living with HIV had taken ART for ≥6 months (a strategy known as "PrEP as a bridge to ART"). The level of adherence sufficient to protect against HIV was estimated in two ways: ≥4 and ≥6 doses/week (per electronic monitoring). Risk for HIV acquisition was considered high if the couple reported sex with <100% condom use before six months of ART, low if they reported sex but had 100% condom use and/or six months of ART and very low if no sex was reported. We assessed prevention-effective adherence by cross-tabulating PrEP use with HIV risk and used multivariable regression models to assess predictors of ≥4 and ≥6 doses/week. A total of 985 HIV-uninfected participants initiated PrEP; 67% were male, median age was twenty-nine years, and 67% reported condomless sex in the month before enrolment. An average of ≥4 doses and ≥6 doses/week were taken in 81% and 67% of participant-visits, respectively. Adherence sufficient to protect against HIV acquisition was achieved in 75-88% of participant-visits with high HIV risk. The strongest predictor of achieving sufficient adherence was reporting sex with the study partner who was living with HIV; other statistically significant predictors included no concerns about daily PrEP, pregnancy or pregnancy intention, females aged >25 years, older male partners and desire for relationship success. Predictors of not achieving sufficient adherence were no longer being a couple, delayed PrEP initiation, >6 months of follow-up, ART use >6 months by the partner living with HIV and problem alcohol use. Over three-quarters of participant-visits by HIV-uninfected partners in serodiscordant couples achieved prevention-effective adherence with PrEP. Greater adherence was observed during months with HIV risk and the strongest predictor of achieving sufficient adherence was sexual activity.

  2. Vitamin D Deficiency Among Professional Basketball Players.

    PubMed

    Fishman, Matthew P; Lombardo, Stephen J; Kharrazi, F Daniel

    2016-07-01

    Vitamin D plays an important role in several systems of the human body. Various studies have linked vitamin D deficiency to stress and insufficiency fractures, muscle recovery and function, and athletic performance. The prevalence of vitamin D deficiency in the elite athletic population has not been extensively studied, and very few reports exist among professional athletes. There is a high prevalence of vitamin D deficiency or insufficiency among players attending the National Basketball Association (NBA) Combine. Cross-sectional study; Level of evidence, 3. This is a retrospective review of data previously collected as part of the routine medical evaluation of players in the NBA Combines from 2009 through 2013. Player parameters evaluated were height, weight, body mass index (BMI), and vitamin D level. Statistical analysis using t tests and analysis of variance was used to detect any correlation between the player parameters and vitamin D level. Vitamin D levels were categorized as deficient (<20 ng/mL), insufficient (20-32 ng/mL), and sufficient (>32 ng/mL). After institutional review board approval was submitted to the NBA, the NBA released deidentified data on 279 players who participated in the combines from 2009 through 2013. There were 90 players (32.3%) who were deficient, 131 players (47.0%) who were insufficient, and 58 players (20.8%) who were sufficient. A total of 221 players (79.3%) were either vitamin D deficient or insufficient. Among all players included, the average vitamin D level was 25.6 ± 10.2 ng/mL. Among the players who were deficient, insufficient, and sufficient, the average vitamin D levels were 16.1 ± 2.1 ng/mL, 25.0 ± 3.4 ng/mL, and 41.6 ± 8.6 ng/mL, respectively. Player height and weight were significantly increased in vitamin D-sufficient players compared with players who were not sufficient (P = .0008 and .009, respectively). Player age and BMI did not significantly differ depending on vitamin D status (P = .15 and .77, respectively). There is a high prevalence of vitamin D deficiency or insufficiency among participants in the NBA Combines. As a result, there should be a high suspicion for this metabolic abnormality among elite basketball players. Vitamin D level has been linked to bone health, muscle recovery and function, and athletic performance. Because of the high prevalence of vitamin D deficiency in the NBA Combines, clinicians should maintain a high suspicion for vitamin D abnormalities among elite basketball players.

  3. Star-triangle and star-star relations in statistical mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baxter, R.J.

    1997-01-20

The homogeneous three-layer Zamolodchikov model is equivalent to a four-state model on the checkerboard lattice which closely resembles the four-state critical Potts model, but with some of its Boltzmann weights negated. Here the author shows that it satisfies a star-to-reverse-star (or simply star-star) relation, even though they know of no star-triangle relation for this model. For any nearest-neighbor checkerboard model, they show that this star-star relation is sufficient to ensure that the decimated model (where half the spins have been summed over) satisfies a twisted Yang-Baxter relation. This ensures that the transfer matrices of the original model commute in pairs, which is an adequate condition for solvability.

  4. Assessment of economic factors affecting the satellite power system. Volume 2: The systems implications of rectenna siting issues

    NASA Technical Reports Server (NTRS)

    Chapman, P. K.; Bugos, B. J.; Csigi, K. I.; Glaser, P. E.; Schimke, G. R.; Thomas, R. G.

    1979-01-01

The feasibility of finding potential sites in the continental United States for Solar Power Satellite (SPS) receiving antennas (rectennas) was evaluated, in sufficient numbers to permit the SPS to make a major contribution to U.S. generating facilities and to give statistical validity to an assessment of the characteristics of such sites and their implications for the design of the SPS system. It is found that the cost-optimum power output of the SPS does not depend on the particular value assigned to the cost per unit area of a rectenna and its site, as long as it is independent of rectenna area. Many characteristics of the sites chosen affect the optimum design of the rectenna itself.

  5. Direct U-Pb dating of Cretaceous and Paleocene dinosaur bones, San Juan Basin, New Mexico: COMMENT

    USGS Publications Warehouse

    Koenig, Alan E.; Lucas, Spencer G.; Neymark, Leonid A.; Heckert, Andrew B.; Sullivan, Robert M.; Jasinski, Steven E.; Fowler, Denver W.

    2012-01-01

    Based on U-Pb dating of two dinosaur bones from the San Juan Basin of New Mexico (United States), Fassett et al. (2011) claim to provide the first successful direct dating of fossil bones and to establish the presence of Paleocene dinosaurs. Fassett et al. ignore previously published work that directly questions their stratigraphic interpretations (Lucas et al., 2009), and fail to provide sufficient descriptions of instrumental, geochronological, and statistical treatments of the data to allow evaluation of the potentially complex diagenetic and recrystallization history of bone. These shortcomings lead us to question the validity of the U-Pb dates published by Fassett et al. and their conclusions regarding the existence of Paleocene dinosaurs.

  6. Conducting-insulating transition in adiabatic memristive networks

    NASA Astrophysics Data System (ADS)

    Sheldon, Forrest C.; Di Ventra, Massimiliano

    2017-01-01

    The development of neuromorphic systems based on memristive elements—resistors with memory—requires a fundamental understanding of their collective dynamics when organized in networks. Here, we study an experimentally inspired model of two-dimensional disordered memristive networks subject to a slowly ramped voltage and show that they undergo a discontinuous transition in the conductivity for sufficiently high values of memory, as quantified by the memristive ON-OFF ratio. We investigate the consequences of this transition for the memristive current-voltage characteristics both through simulation and theory, and demonstrate the role of current-voltage duality in relating forward and reverse switching processes. Our work sheds considerable light on the statistical properties of memristive networks that are presently studied both for unconventional computing and as models of neural networks.

  7. Enhancement of the Triple Alpha Rate in a Hot Dense Medium

    NASA Astrophysics Data System (ADS)

    Beard, Mary; Austin, Sam M.; Cyburt, Richard

    2017-09-01

In a sufficiently hot and dense astrophysical environment the rate of the triple-alpha (3α) reaction can increase greatly over the value appropriate for helium burning stars owing to hadronically induced deexcitation of the Hoyle state. In this Letter we use a statistical model to evaluate the enhancement as a function of temperature and density. For a density of 10⁶ g cm⁻³ enhancements can exceed a factor of 100. In high temperature or density situations, the enhanced 3α rate is a better estimate of this rate and should be used in these circumstances. We then examine the effect of these enhancements on production of ¹²C in the neutrino wind following a supernova explosion and in an x-ray burster.

  8. Cosmic variance in inflation with two light scalars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonga, Béatrice; Brahma, Suddhasattwa; Deutsch, Anne-Sylvie

We examine the squeezed limit of the bispectrum when a light scalar with arbitrary non-derivative self-interactions is coupled to the inflaton. We find that when the hidden sector scalar is sufficiently light (m ≲ 0.1 H), the coupling between long and short wavelength modes from the series of higher order correlation functions (from arbitrary order contact diagrams) causes the statistics of the fluctuations to vary in sub-volumes. This means that observations of primordial non-Gaussianity cannot be used to uniquely reconstruct the potential of the hidden field. However, the local bispectrum induced by mode-coupling from these diagrams always has the same squeezed limit, so the field's locally determined mass is not affected by this cosmic variance.

  9. A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr

We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.
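
    A minimal sketch, on a synthetic surrogate signal rather than a Nek5000 flow field, of the idea above: the mean of one long realization (ergodic time averaging) is compared with the mean over many short, independently perturbed realizations (ensemble averaging) that could run in parallel, which is what allows the time to solution to shrink beyond the strong-scaling limit.

```python
import numpy as np

def realization(n_steps, seed, corr=0.95):
    """Surrogate 'turbulent' signal: AR(1) fluctuations around a true mean of 1.0."""
    r = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = 1.0
    for i in range(1, n_steps):
        x[i] = 1.0 + corr * (x[i - 1] - 1.0) + 0.3 * r.standard_normal()
    return x

# ergodic time average over one long run vs. ensemble average over 16 short,
# independently perturbed runs that could execute concurrently
long_run = realization(16000, seed=0)
short_runs = [realization(1000, seed=s) for s in range(1, 17)]

print(f"time average:     {long_run.mean():.3f}")
print(f"ensemble average: {np.mean([r.mean() for r in short_runs]):.3f}")
```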

  10. Next-generation prognostic assessment for diffuse large B-cell lymphoma

    PubMed Central

Staton, Ashley D; Koff, Jean L; Chen, Qiushi; Ayer, Turgay; Flowers, Christopher R

    2015-01-01

    Current standard of care therapy for diffuse large B-cell lymphoma (DLBCL) cures a majority of patients with additional benefit in salvage therapy and autologous stem cell transplant for patients who relapse. The next generation of prognostic models for DLBCL aims to more accurately stratify patients for novel therapies and risk-adapted treatment strategies. This review discusses the significance of host genetic and tumor genomic alterations seen in DLBCL, clinical and epidemiologic factors, and how each can be integrated into risk stratification algorithms. In the future, treatment prediction and prognostic model development and subsequent validation will require data from a large number of DLBCL patients to establish sufficient statistical power to correctly predict outcome. Novel modeling approaches can augment these efforts. PMID:26289217

  11. Next-generation prognostic assessment for diffuse large B-cell lymphoma.

    PubMed

    Staton, Ashley D; Koff, Jean L; Chen, Qiushi; Ayer, Turgay; Flowers, Christopher R

    2015-01-01

    Current standard of care therapy for diffuse large B-cell lymphoma (DLBCL) cures a majority of patients with additional benefit in salvage therapy and autologous stem cell transplant for patients who relapse. The next generation of prognostic models for DLBCL aims to more accurately stratify patients for novel therapies and risk-adapted treatment strategies. This review discusses the significance of host genetic and tumor genomic alterations seen in DLBCL, clinical and epidemiologic factors, and how each can be integrated into risk stratification algorithms. In the future, treatment prediction and prognostic model development and subsequent validation will require data from a large number of DLBCL patients to establish sufficient statistical power to correctly predict outcome. Novel modeling approaches can augment these efforts.

  12. Validity criteria for Fermi's golden rule scattering rates applied to metallic nanowires.

    PubMed

    Moors, Kristof; Sorée, Bart; Magnus, Wim

    2016-09-14

    Fermi's golden rule underpins the investigation of mobile carriers propagating through various solids, being a standard tool to calculate their scattering rates. As such, it provides a perturbative estimate under the implicit assumption that the effect of the interaction Hamiltonian which causes the scattering events is sufficiently small. To check the validity of this assumption, we present a general framework to derive simple validity criteria in order to assess whether the scattering rates can be trusted for the system under consideration, given its statistical properties such as average size, electron density, impurity density et cetera. We derive concrete validity criteria for metallic nanowires with conduction electrons populating a single parabolic band subjected to different elastic scattering mechanisms: impurities, grain boundaries and surface roughness.

  13. Optimized multisectioned acoustic liners

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.

    1979-01-01

    New calculations show that segmenting is most efficient at high frequencies with relatively long duct lengths where the attenuation is low for both uniform and segmented liners. Statistical considerations indicate little advantage in using optimized liners with more than two segments while the bandwidth of an optimized two-segment liner is shown to be nearly equal to that of a uniform liner. Multielement liner calculations show a large degradation in performance due to changes in assumed input modal structure. Computer programs are used to generate theoretical attenuations for a number of liner configurations for liners in a rectangular duct with no mean flow. Overall, the use of optimized multisectioned liners fails to offer sufficient advantage over a uniform liner to warrant their use except in low frequency single mode application.

  14. A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit

    DOE PAGES

    Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr; ...

    2017-06-07

    We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.
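
    A toy NumPy sketch of the idea follows; it is not Nek5000 and not an LES. An Ornstein-Uhlenbeck process stands in for a correlated turbulence statistic, and all parameters are illustrative: M shorter realizations averaged together estimate the mean about as accurately as a single run of the full length, while being trivially parallelizable.

    ```python
    # Ensemble averaging vs. time averaging on a toy correlated signal.
    import numpy as np

    rng = np.random.default_rng(0)
    dt, tau = 0.01, 1.0                     # time step and correlation time

    def ou_path(n_steps, x0=0.0):
        """Ornstein-Uhlenbeck path: dx = -(x/tau) dt + sqrt(2/tau) dW."""
        x = np.empty(n_steps)
        x[0] = x0
        for k in range(1, n_steps):
            x[k] = x[k-1] - (x[k-1] / tau) * dt \
                   + np.sqrt(2.0 * dt / tau) * rng.standard_normal()
        return x

    T, M = 2000.0, 16                       # total averaging time, ensemble size
    n_total = int(T / dt)

    time_avg = ou_path(n_total).mean()                  # one long run
    ens_avg = np.mean([ou_path(n_total // M).mean()     # M short runs, which
                       for _ in range(M)])              # could run in parallel

    print(f"time average     : {time_avg:+.4f}")        # both estimate the
    print(f"ensemble average : {ens_avg:+.4f}")         # true mean, 0
    ```

    In the actual methodology each realization is de-correlated by slightly perturbing the initial conditions, and an initial transient would be discarded before statistics are accumulated.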

  15. Precision constraints on the top-quark effective field theory at future lepton colliders

    NASA Astrophysics Data System (ADS)

    Durieux, G.

    We examine the constraints that future lepton colliders would impose on the effective field theory describing modifications of top-quark interactions beyond the standard model, through measurements of the $e^+e^- \to bW^+ \bar{b}W^-$ process. Statistically optimal observables are exploited to constrain simultaneously and efficiently all relevant operators. Their constraining power is sufficient for quadratic effective-field-theory contributions to have a negligible impact on limits, which are therefore basis independent. This is contrasted with measurements of cross sections and forward-backward asymmetries. An overall measure of constraint strength, the global determinant parameter, is used to determine which run parameters impose the strongest restriction on the multidimensional effective-field-theory parameter space.
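
    The generic construction behind "statistically optimal observables" is standard in the literature; the notation below is ours, and the paper's global determinant parameter may use a different normalization. For a differential cross section linearized in the EFT coefficients $c_i$,

    ```latex
    \begin{align}
      \frac{d\sigma}{d\Phi} &= S_0(\Phi) + \sum_i c_i\, S_i(\Phi) + \mathcal{O}(c^2),
      &
      \mathcal{O}_i(\Phi) &\equiv \frac{S_i(\Phi)}{S_0(\Phi)},
    \end{align}
    % and for an integrated luminosity L the expected covariance of the c_i is
    % the inverse of the matrix
    \begin{equation}
      I_{ij} = L \int d\Phi\, \frac{S_i(\Phi)\, S_j(\Phi)}{S_0(\Phi)},
    \end{equation}
    ```

    which saturates the Cramér-Rao bound. A single figure of merit for the joint constraints on $n$ coefficients can then be built from a determinant, for example $(\det I)^{-1/(2n)}$, with smaller values indicating stronger global constraints.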

  16. Serum levels of 25-hydroxy vitamin D in psoriatic patients*

    PubMed Central

    Zuchi, Manuela Ferrasso; Azevedo, Paula de Oliveira; Tanaka, Anber Ancel; Schmitt, Juliano Vilaverde; Martins, Luis Eduardo Agner Machado

    2015-01-01

    Studies have shown a relationship between vitamin D and psoriasis. We compared serum levels of vitamin D in 20 psoriasis patients and 20 controls. The median vitamin D level was 22.80 ± 4.60 ng/ml; the median in cases was 23.55 ± 7.6 ng/ml and in controls 22.35 ± 3.10 ng/ml (p = 0.73). Only 2 cases and 4 controls had sufficient levels of vitamin D, a difference that was not statistically significant (p = 0.608). Levels were lower in women with psoriasis than in male patients (20.85 ± 6.70 vs. 25.35 ± 2.90 ng/ml, p = 0.03), a finding that was not observed among controls. PMID:26131882
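
    For readers who want to reproduce this kind of comparison, the sketch below runs a nonparametric two-group test on synthetic serum values. The data are generated to loosely mimic the reported summary statistics and are not the study's measurements, and the choice of the Mann-Whitney U test is ours.

    ```python
    # Compare 25(OH)D levels between two small groups (synthetic data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    women_psoriasis = rng.normal(20.85, 6.70, size=10)   # ng/ml, synthetic
    men_psoriasis   = rng.normal(25.35, 2.90, size=10)

    u, p = stats.mannwhitneyu(women_psoriasis, men_psoriasis,
                              alternative='two-sided')
    print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
    ```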

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Protat, A; Young, S

    The objective of this field campaign was to evaluate the performance of the new Leosphere R-MAN 510 lidar, procured by the Australian Bureau of Meteorology, by testing it against the MicroPulse Lidar (MPL) and Raman lidars, at the Darwin Atmospheric Radiation Measurement (ARM) site. This lidar is an eye-safe (355 nm), turn-key mini Raman lidar, which allows for the detection of aerosols and cloud properties, and the retrieval of particulate extinction profiles. To accomplish this evaluation, the R-MAN 510 lidar has been operated at the Darwin ARM site, next to the MPL, Raman lidar, and Vaisala ceilometer (VCEIL) for three months (from 20 January 2013 to 20 April 2013) in order to collect a sufficient sample size for statistical comparisons.

  18. Adaptive web sampling.

    PubMed

    Thompson, Steven K

    2006-12-01

    A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
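
    A toy sketch of the selection mechanism described above follows. The graph, the 0.7 link-tracing probability, and the function name are illustrative, and the design-based inference step (Markov chain resampling of sample paths consistent with the minimal sufficient statistic) is not reproduced here.

    ```python
    # Adaptive web sampling, toy version: at each step, with probability p_link
    # follow a link out of the current sample; otherwise draw a unit at random.
    import random

    edges = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3], 5: [], 6: [5]}

    def adaptive_web_sample(n_sample, p_link=0.7, seed=42):
        rng = random.Random(seed)
        sample = []
        while len(sample) < n_sample:
            # links from already-sampled units to not-yet-sampled units
            frontier = [v for u in sample for v in edges[u] if v not in sample]
            if frontier and rng.random() < p_link:
                unit = rng.choice(frontier)     # adaptive (link-tracing) draw
            else:
                remaining = [u for u in edges if u not in sample]
                unit = rng.choice(remaining)    # conventional random draw
            sample.append(unit)
        return sample

    print(adaptive_web_sample(5))
    ```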

  19. Recurrence Density Enhanced Complex Networks for Nonlinear Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Costa, Diego G. De B.; Reis, Barbara M. Da F.; Zou, Yong; Quiles, Marcos G.; Macau, Elbert E. N.

    We introduce a new method, entitled Recurrence Density Enhanced Complex Network (RDE-CN), to properly analyze nonlinear time series. Our method first transforms a recurrence plot into a figure with a reduced number of points that nevertheless preserves the main and fundamental recurrence properties of the original plot. This resulting figure is then reinterpreted as a complex network, which is further characterized by network statistical measures. We illustrate the computational power of the RDE-CN approach with time series from both the logistic map and experimental fluid flows, which show that our method distinguishes different dynamics as well as traditional recurrence analysis does. The proposed methodology therefore characterizes the recurrence matrix adequately while using a reduced set of points from the original recurrence plots.
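
    A simplified sketch of the recurrence-plot-to-network pipeline on which RDE-CN builds is given below; the density-reduction step that defines RDE-CN is not reproduced, and the threshold, map parameters, and use of networkx are our illustrative choices.

    ```python
    # Recurrence network from a logistic-map time series.
    import numpy as np
    import networkx as nx

    r, n = 3.9, 500                          # chaotic regime, series length
    x = np.empty(n)
    x[0] = 0.4
    for k in range(1, n):
        x[k] = r * x[k-1] * (1.0 - x[k-1])

    eps = 0.1 * np.std(x)                    # recurrence threshold
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distances
    R = (dist < eps).astype(int)             # recurrence matrix
    np.fill_diagonal(R, 0)                   # drop self-recurrences

    G = nx.from_numpy_array(R)               # recurrence matrix as adjacency
    print(f"mean degree        : {np.mean([d for _, d in G.degree()]):.2f}")
    print(f"average clustering : {nx.average_clustering(G):.3f}")
    ```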

  20. The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Finelli, George B.

    1991-01-01

    This paper affirms that the quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The classical methods of estimating reliability are shown to lead to exorbitant amounts of testing when applied to life-critical software. Reliability growth models are examined and also shown to be incapable of overcoming the need for excessive amounts of testing. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultrareliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. The implications of recent multiversion software experiments also support this affirmation.
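
    To make the "exorbitant amounts of testing" concrete, a standard zero-failure demonstration argument runs as follows; the numbers are generic, not reproduced from the paper.

    ```latex
    % Time on test needed to demonstrate a failure rate below \lambda with
    % confidence 1 - \alpha, given zero observed failures in time t:
    \begin{equation}
      e^{-\lambda t} \le \alpha
      \quad\Longrightarrow\quad
      t \ge \frac{-\ln\alpha}{\lambda}.
    \end{equation}
    ```

    For an ultrareliability target of $\lambda = 10^{-9}$ per hour at 95% confidence this gives $t \gtrsim 3\times 10^{9}$ failure-free hours, on the order of 340,000 years of continuous testing of a single system, which is the kind of infeasible requirement the paper quantifies.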
