Sample records for provide sufficient statistical

  1. Constructing and Modifying Sequence Statistics for relevent Using informR in R

    PubMed Central

    Marcum, Christopher Steven; Butts, Carter T.

    2015-01-01

    The informR package greatly simplifies the analysis of complex event histories in R by providing user-friendly tools to build sufficient statistics for the relevent package. Historically, building sufficient statistics to model event sequences (of the form a→b) using the egocentric generalization of Butts’ (2008) relational event framework for modeling social action has been cumbersome. The informR package simplifies the construction of the complex list of arrays needed by the rem() model-fitting routine for a variety of cases involving egocentric event data, multiple event types, and/or support constraints. This paper introduces these tools using examples from real data extracted from the American Time Use Survey. PMID:26185488

  2. Characterizations of linear sufficient statistics

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Redner, R.; Decell, H. P., Jr.

    1977-01-01

    Conditions were characterized under which a surjective bounded linear operator T from a Banach space X to a Banach space Y is a sufficient statistic for a dominated family of probability measures defined on the Borel sets of X. These results were applied to characterize linear sufficient statistics for families of the exponential type, including as special cases the Wishart and multivariate normal distributions. The latter result was used to establish precisely which procedures for sampling from a normal population have the property that the sample mean is a sufficient statistic.
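    For context, the finite-dimensional idea behind such characterizations is the factorization criterion: a statistic T is sufficient for a dominated family with densities f(· | θ) exactly when
    \[
    f(x \mid \theta) = g\bigl(T(x), \theta\bigr)\, h(x),
    \]
    and for an i.i.d. sample x_1, …, x_n from N(μ, σ²) with σ² known the likelihood factors as
    \[
    \prod_{i=1}^{n} f(x_i \mid \mu) \;\propto\; \exp\!\Bigl(-\frac{n(\bar{x}-\mu)^2}{2\sigma^2}\Bigr)\exp\!\Bigl(-\frac{\sum_i (x_i-\bar{x})^2}{2\sigma^2}\Bigr),
    \]
    so the sample mean is sufficient for μ; this textbook case is the one generalized by the Banach-space characterization summarized above.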

  3. Role of sufficient statistics in stochastic thermodynamics and its implication to sensory adaptation

    NASA Astrophysics Data System (ADS)

    Matsumoto, Takumi; Sagawa, Takahiro

    2018-04-01

    A sufficient statistic is a significant concept in statistics: a random variable that carries all of the information required for an inference task. We investigate the roles of sufficient statistics and related quantities in stochastic thermodynamics. Specifically, we prove that for general continuous-time bipartite networks, the existence of a sufficient statistic implies that an informational quantity called the sensory capacity attains its maximum. Since maximal sensory capacity imposes the constraint that the energetic efficiency cannot exceed one-half, our result implies that the existence of a sufficient statistic is inevitably accompanied by energetic dissipation. We also show that, in a particular parameter region of linear Langevin systems, there exists an optimal noise intensity at which the sensory capacity, the information-thermodynamic efficiency, and the total entropy production are simultaneously optimized. We apply our general result to a model of sensory adaptation of E. coli and find that the sensory capacity is nearly maximal with experimentally realistic parameters.

  4. Determinants of Whether or not Mixtures of Disinfection By-products are Similar

    EPA Science Inventory

    This project summary and its related publications provide information on the development of chemical, toxicological and statistical criteria for determining the sufficient similarity of complex chemical mixtures.

  5. Toward Self Sufficiency: Social Issues in the Nineties. Proceedings of the National Association for Welfare Research and Statistics (33rd, Scottsdale, Arizona, August 7-11, 1993).

    ERIC Educational Resources Information Center

    National Association for Welfare Research and Statistics, Olympia, WA.

    The presentations compiled in these proceedings on welfare and self-sufficiency reflect much of the current research in areas of housing, health, employment and training, welfare and reform, nutrition, child support, child care, and youth. The first section provides information on the conference and on the National Association for Welfare Research…

  6. Statistical Approaches to Assess Biosimilarity from Analytical Data.

    PubMed

    Burdick, Richard; Coffey, Todd; Gutka, Hiten; Gratzl, Gyöngyi; Conlon, Hugh D; Huang, Chi-Ting; Boyne, Michael; Kuehne, Henriette

    2017-01-01

    Protein therapeutics have unique critical quality attributes (CQAs) that define their purity, potency, and safety. The analytical methods used to assess CQAs must be able to distinguish clinically meaningful differences in comparator products, and the most important CQAs should be evaluated with the most statistical rigor. High-risk CQA measurements assess the most important attributes that directly impact the clinical mechanism of action or have known implications for safety, while the moderate- to low-risk characteristics may have a lower direct impact and thereby may have a broader range to establish similarity. Statistical equivalence testing is applied for high-risk CQA measurements to establish the degree of similarity (e.g., highly similar fingerprint, highly similar, or similar) of selected attributes. Notably, some high-risk CQAs (e.g., primary sequence or disulfide bonding) are qualitative (e.g., the same as the originator or not the same) and therefore not amenable to equivalence testing. For biosimilars, an important step is the acquisition of a sufficient number of unique originator drug product lots to measure the variability in the originator drug manufacturing process and provide sufficient statistical power for the analytical data comparisons. Together, these analytical evaluations, along with PK/PD and safety data (immunogenicity), provide the data necessary to determine if the totality of the evidence warrants a designation of biosimilarity and subsequent licensure for marketing in the USA. In this paper, a case study approach is used to provide examples of analytical similarity exercises and the appropriateness of statistical approaches for the example data.
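    As a minimal sketch of the equivalence-testing step described above (two one-sided tests on a quantitative CQA; the margin, lot values, and pooled-variance t-test are illustrative assumptions, not the procedure of any particular submission):

    ```python
    # Minimal TOST (two one-sided tests) equivalence sketch for a quantitative CQA.
    # The equivalence margin, lot values, and pooled-variance t-test are illustrative
    # assumptions; real biosimilarity margins are product- and attribute-specific.
    import numpy as np
    from scipy import stats

    def tost_equivalence(x, y, delta, alpha=0.05):
        """Two one-sided t-tests of |mean(x) - mean(y)| < delta."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        nx, ny = len(x), len(y)
        diff = x.mean() - y.mean()
        sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
        se = np.sqrt(sp2 * (1 / nx + 1 / ny))           # SE of the mean difference
        df = nx + ny - 2
        p_lower = stats.t.sf((diff + delta) / se, df)   # H0: diff <= -delta
        p_upper = stats.t.cdf((diff - delta) / se, df)  # H0: diff >= +delta
        return diff, max(p_lower, p_upper)              # equivalence if p < alpha

    rng = np.random.default_rng(0)
    originator = rng.normal(100.0, 3.0, size=12)   # hypothetical originator lot values
    biosimilar = rng.normal(101.0, 3.0, size=10)   # hypothetical biosimilar lot values
    diff, p = tost_equivalence(biosimilar, originator, delta=1.5 * 3.0)
    print(f"mean difference = {diff:.2f}, TOST p = {p:.4f}")
    ```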

  7. Why Current Statistics of Complementary Alternative Medicine Clinical Trials is Invalid.

    PubMed

    Pandolfi, Maurizio; Carreras, Giulia

    2018-06-07

    It is not sufficiently known that frequentist statistics cannot provide direct information on the probability that the research hypothesis tested is correct. The error resulting from this misunderstanding is compounded when the hypotheses under scrutiny have precarious scientific bases, as those of complementary alternative medicine (CAM) generally do. In such cases, it is mandatory to use inferential statistics that take into account the prior probability that the hypothesis tested is true, such as Bayesian statistics. The authors show that, under such circumstances, no real statistical significance can be achieved in CAM clinical trials. In this respect, CAM trials involving human material are also hardly defensible from an ethical viewpoint.
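    A short worked illustration of the point about prior probability (power, alpha, and the priors are assumed for illustration): by Bayes' theorem, the probability that the tested hypothesis is true given a significant result collapses when the prior is low.

    ```python
    # Post-study probability that the tested hypothesis is true, given a significant
    # result, by Bayes' theorem. Power, alpha, and the priors are illustrative only.
    def prob_h1_given_significant(prior, power=0.8, alpha=0.05):
        true_pos = power * prior           # significant and H1 true
        false_pos = alpha * (1 - prior)    # significant but H1 false
        return true_pos / (true_pos + false_pos)

    for prior in (0.5, 0.1, 0.01):         # 0.01: a scientifically precarious hypothesis
        print(f"prior = {prior:5.2f}  ->  P(H1 | p < alpha) = "
              f"{prob_h1_given_significant(prior):.2f}")
    ```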

  8. Purpose Restrictions on Information Use

    DTIC Science & Technology

    2013-06-03

    Employees are authorized to access Customer Information for business purposes only.” [5]. The HIPAA Privacy Rule requires that healthcare providers in the...outcomes can be probabilistic since the network does not know what ad will be best for each visitor but does have statistical information about various...beliefs as such beliefs are a sufficient statistic. Thus, the agent need only consider for each possible belief β it can have, what action it would

  9. The Performance of a PN Spread Spectrum Receiver Preceded by an Adaptive Interference Suppression Filter.

    DTIC Science & Technology

    1982-12-01

    Sequence dj Estimate of the Desired Signal DEL Sampling Time Interval DS Direct Sequence c Sufficient Statistic E/T Signal Power Erfc Complementary Error...Namely, a white Gaussian noise (WGN) generator was added. Also, a statistical subroutine was added in order to assess performance improvement at the...reference code and then passed through a correlation detector whose output is the sufficient statistic, e. Using a threshold device and the sufficient

  10. Minimal sufficient positive-operator valued measure on a separable Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuramochi, Yui, E-mail: kuramochi.yui.22c@st.kyoto-u.ac.jp

    We introduce a concept of a minimal sufficient positive-operator valued measure (POVM), which is the least redundant POVM among the POVMs that have the equivalent information about the measured quantum system. Assuming the system Hilbert space to be separable, we show that for a given POVM, a sufficient statistic called a Lehmann-Scheffé-Bahadur statistic induces a minimal sufficient POVM. We also show that every POVM has an equivalent minimal sufficient POVM and that such a minimal sufficient POVM is unique up to relabeling neglecting null sets. We apply these results to discrete POVMs and information conservation conditions proposed by the author.

  11. On sufficient statistics of least-squares superposition of vector sets.

    PubMed

    Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M

    2015-06-01

    The problem of superposition of two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant time) computation of superpositions (and sufficient statistics) of vector sets that are composed from constituent vector sets under addition or deletion operations, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of the methods that commonly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
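    A sketch of the idea of additive sufficient statistics for superposition (Kabsch-style quantities; this illustrates the principle, not the article's exact derivation): two previously summarized fragment pairs can be merged and re-superposed in constant time.

    ```python
    # Additive sufficient statistics for least-squares superposition of corresponding
    # 3D vector sets (Kabsch-style quantities). Illustrates the constant-time merge;
    # not the article's exact formulation.
    import numpy as np

    def summarize(X, Y):
        """Sufficient statistics of a set of correspondences X[i] <-> Y[i]."""
        X, Y = np.asarray(X, float), np.asarray(Y, float)
        return {"n": len(X),
                "sx": X.sum(axis=0), "sy": Y.sum(axis=0),   # coordinate sums
                "xx": (X * X).sum(), "yy": (Y * Y).sum(),   # summed squared norms
                "C": Y.T @ X}                               # 3x3 cross-correlation

    def merge(a, b):
        """Combine the statistics of two disjoint sets in constant time."""
        return {k: a[k] + b[k] for k in a}

    def rmsd_from_stats(s):
        """Minimal RMSD after optimal rotation and translation, from the statistics alone."""
        n = s["n"]
        cx, cy = s["sx"] / n, s["sy"] / n
        C = s["C"] - n * np.outer(cy, cx)                   # centered cross-correlation
        xx = s["xx"] - n * cx @ cx
        yy = s["yy"] - n * cy @ cy
        U, sig, Vt = np.linalg.svd(C)
        d = np.sign(np.linalg.det(U @ Vt))                  # guard against reflection
        e = xx + yy - 2.0 * (sig[0] + sig[1] + d * sig[2])
        return np.sqrt(max(e, 0.0) / n)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(40, 3))
    Q = np.linalg.qr(rng.normal(size=(3, 3)))[0]
    Q *= np.sign(np.linalg.det(Q))                          # make Q a proper rotation
    Y = X @ Q + 0.5
    s = merge(summarize(X[:25], Y[:25]), summarize(X[25:], Y[25:]))
    print(rmsd_from_stats(s))                               # ~0: merged stats recover the fit
    ```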

  12. Information trimming: Sufficient statistics, mutual information, and predictability from effective channel states

    NASA Astrophysics Data System (ADS)

    James, Ryan G.; Mahoney, John R.; Crutchfield, James P.

    2017-06-01

    One of the most basic characterizations of the relationship between two random variables, X and Y, is the value of their mutual information. Unfortunately, calculating it analytically and estimating it empirically are often stymied by the extremely large dimension of the variables. One might hope to replace such a high-dimensional variable by a smaller one that preserves its relationship with the other. It is well known that either X (or Y) can be replaced by its minimal sufficient statistic about Y (or X) while preserving the mutual information. While intuitively reasonable, it is not obvious or straightforward that both variables can be replaced simultaneously. We demonstrate that this is in fact possible: the information X's minimal sufficient statistic preserves about Y is exactly the information that Y's minimal sufficient statistic preserves about X. We call this procedure information trimming. As an important corollary, we consider the case where one variable is a stochastic process' past and the other its future. In this case, the mutual information is the channel transmission rate between the channel's effective states. That is, the past-future mutual information (the excess entropy) is the amount of information about the future that can be predicted using the past. Translating our result about minimal sufficient statistics, this is equivalent to the mutual information between the forward- and reverse-time causal states of computational mechanics. We close by discussing multivariate extensions to this use of minimal sufficient statistics.
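    In symbols, writing X_σ for the minimal sufficient statistic of X about Y, and Y_σ for that of Y about X, the trimming result summarized above can be written
    \[
    I[X;Y] \;=\; I[X_\sigma ; Y] \;=\; I[X ; Y_\sigma] \;=\; I[X_\sigma ; Y_\sigma].
    \]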

  13. Powerful Statistical Inference for Nested Data Using Sufficient Summary Statistics

    PubMed Central

    Dowding, Irene; Haufe, Stefan

    2018-01-01

    Hierarchically-organized data arise naturally in many psychology and neuroscience studies. As the standard assumption of independent and identically distributed samples does not hold for such data, two important problems are to accurately estimate group-level effect sizes, and to obtain powerful statistical tests against group-level null hypotheses. A common approach is to summarize subject-level data by a single quantity per subject, which is often the mean or the difference between class means, and treat these as samples in a group-level t-test. This “naive” approach is, however, suboptimal in terms of statistical power, as it ignores information about the intra-subject variance. To address this issue, we review several approaches to deal with nested data, with a focus on methods that are easy to implement. With what we call the sufficient-summary-statistic approach, we highlight a computationally efficient technique that can improve statistical power by taking into account within-subject variances, and we provide step-by-step instructions on how to apply this approach to a number of frequently-used measures of effect size. The properties of the reviewed approaches and the potential benefits over a group-level t-test are quantitatively assessed on simulated data and demonstrated on EEG data from a simulated-driving experiment. PMID:29615885
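    A minimal sketch of a sufficient-summary-statistic estimate of a group-level effect (fixed-effect, inverse-variance weighting is assumed here as one common way to carry within-subject variance into the group-level test; the reviewed estimators may differ in detail):

    ```python
    # Inverse-variance (precision) weighting of per-subject effects: a minimal sketch
    # of carrying within-subject uncertainty into the group-level test. Fixed-effect
    # weighting is assumed here; the reviewed estimators may differ in detail.
    import numpy as np
    from scipy import stats

    def weighted_group_effect(subject_means, subject_sems):
        w = 1.0 / np.asarray(subject_sems, float) ** 2   # precision weights
        est = np.sum(w * np.asarray(subject_means)) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))                    # SE of the weighted mean
        z = est / se                                     # test against a zero effect
        return est, se, 2 * stats.norm.sf(abs(z))

    means = [0.8, 1.1, 0.4, 0.9, 0.2]   # per-subject effect estimates (hypothetical)
    sems  = [0.3, 0.9, 0.2, 0.5, 0.4]   # their within-subject standard errors
    print(weighted_group_effect(means, sems))
    ```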

  14. 77 FR 75196 - Proposed Collection, Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-19

    ... from the States' UI accounting files are sufficient for statistical purposes. However, such data are... BLS also has been working very closely with firms providing payroll and tax filing services for... [excerpt continues with a flattened respondent-burden table for form BLS 3020 (MWR): 133,191 ... 532,764 ... 22.2 ...]

  15. Statistical significance test for transition matrices of atmospheric Markov chains

    NASA Technical Reports Server (NTRS)

    Vautard, Robert; Mo, Kingtse C.; Ghil, Michael

    1990-01-01

    Low-frequency variability of large-scale atmospheric dynamics can be represented schematically by a Markov chain of multiple flow regimes. This Markov chain contains useful information for the long-range forecaster, provided that the statistical significance of the associated transition matrix can be reliably tested. Monte Carlo simulation yields a very reliable significance test for the elements of this matrix. The results of this test agree with previously used empirical formulae when each cluster of maps identified as a distinct flow regime is sufficiently large and when they all contain a comparable number of maps. Monte Carlo simulation provides a more reliable way to test the statistical significance of transitions to and from small clusters. It can determine the most likely transitions, as well as the most unlikely ones, with a prescribed level of statistical significance.
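    A toy sketch of a Monte Carlo significance test of this kind (the null model here is a simple permutation of the regime sequence, which preserves regime frequencies while destroying temporal order; the paper's null model and test statistic may differ):

    ```python
    # Monte Carlo significance test for the transition counts of a regime sequence:
    # permute the sequence (regime frequencies preserved, temporal order destroyed)
    # and compare each observed count with its permutation distribution.
    import numpy as np

    def transition_counts(seq, k):
        counts = np.zeros((k, k), int)
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
        return counts

    def mc_transition_pvalues(seq, k, n_sims=2000, seed=0):
        rng = np.random.default_rng(seed)
        obs = transition_counts(seq, k)
        exceed = np.zeros((k, k))
        for _ in range(n_sims):
            exceed += transition_counts(rng.permutation(seq), k) >= obs
        return obs, (exceed + 1) / (n_sims + 1)     # one-sided (upper-tail) p-values

    rng = np.random.default_rng(3)
    seq = [0]                                       # toy persistent 3-regime sequence
    for _ in range(499):
        seq.append(seq[-1] if rng.random() < 0.7 else int(rng.integers(0, 3)))
    obs, pvals = mc_transition_pvalues(np.array(seq), 3)
    print(obs)
    print(pvals.round(3))                           # self-transitions come out significant
    ```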

  16. Stochastic and Historical Resonances of the Unit in Physics and Psychometrics

    ERIC Educational Resources Information Center

    Fisher, William P., Jr.

    2011-01-01

    Humphry's article, "The Role of the Unit in Physics and Psychometrics," offers fundamental clarifications of measurement concepts that Fisher hopes will find a wide audience. In particular, parameterizing discrimination while preserving statistical sufficiency will indeed provide greater flexibility in accounting "for the effects of empirical…

  17. Is Neurofeedback an Efficacious Treatment for ADHD? A Randomised Controlled Clinical Trial

    ERIC Educational Resources Information Center

    Gevensleben, Holger; Holl, Birgit; Albrecht, Bjorn; Vogel, Claudia; Schlamp, Dieter; Kratz, Oliver; Studer, Petra; Rothenberger, Aribert; Moll, Gunther H.; Heinrich, Hartmut

    2009-01-01

    Background: For children with attention deficit/hyperactivity disorder (ADHD), a reduction of inattention, impulsivity and hyperactivity by neurofeedback (NF) has been reported in several studies. But so far, unspecific training effects have not been adequately controlled for and/or studies do not provide sufficient statistical power. To overcome…

  18. A Methodological Review of Structural Equation Modelling in Higher Education Research

    ERIC Educational Resources Information Center

    Green, Teegan

    2016-01-01

    Despite increases in the number of articles published in higher education journals using structural equation modelling (SEM), research addressing their statistical sufficiency, methodological appropriateness and quantitative rigour is sparse. In response, this article provides a census of all covariance-based SEM articles published up until 2013…

  19. Estimating Common Parameters of Lognormally Distributed Environmental and Biomonitoring Data: Harmonizing Disparate Statistics from Publications

    EPA Science Inventory

    The progression of science is driven by the accumulation of knowledge and builds upon published work of others. Another important feature is to place current results into the context of previous observations. The published literature, however, often does not provide sufficient di...

  20. Statistically rigorous calculations do not support common input and long-term synchronization of motor-unit firings

    PubMed Central

    Kline, Joshua C.

    2014-01-01

    Over the past four decades, various methods have been implemented to measure synchronization of motor-unit firings. In this work, we provide evidence that prior reports of the existence of universal common inputs to all motoneurons and the presence of long-term synchronization are misleading, because they did not use sufficiently rigorous statistical tests to detect synchronization. We developed a statistically based method (SigMax) for computing synchronization and tested it with data from 17,736 motor-unit pairs containing 1,035,225 firing instances from the first dorsal interosseous and vastus lateralis muscles—a data set one order of magnitude greater than that reported in previous studies. Only firing data, obtained from surface electromyographic signal decomposition with >95% accuracy, were used in the study. The data were not subjectively selected in any manner. Because of the size of our data set and the statistical rigor inherent to SigMax, we have confidence that the synchronization values that we calculated provide an improved estimate of physiologically driven synchronization. Compared with three other commonly used techniques, ours revealed three types of discrepancies that result from failing to use sufficient statistical tests necessary to detect synchronization. 1) On average, the z-score method falsely detected synchronization at 16 separate latencies in each motor-unit pair. 2) The cumulative sum method missed one out of every four synchronization identifications found by SigMax. 3) The common input assumption method identified synchronization from 100% of motor-unit pairs studied. SigMax revealed that only 50% of motor-unit pairs actually manifested synchronization. PMID:25210152

  1. Methods for the evaluation of alternative disaster warning systems

    NASA Technical Reports Server (NTRS)

    Agnew, C. E.; Anderson, R. J., Jr.; Lanen, W. N.

    1977-01-01

    For each of the methods identified, a theoretical basis is provided and an illustrative example is described. The example includes sufficient realism and detail to enable an analyst to conduct an evaluation of other systems. The methods discussed in the study include equal capability cost analysis, consumers' surplus, and statistical decision theory.

  2. Accessible Information Without Disturbing Partially Known Quantum States on a von Neumann Algebra

    NASA Astrophysics Data System (ADS)

    Kuramochi, Yui

    2018-04-01

    This paper addresses the problem of how much information we can extract without disturbing a statistical experiment, which is a family of partially known normal states on a von Neumann algebra. We define the classical part of a statistical experiment as the restriction of the equivalent minimal sufficient statistical experiment to the center of the outcome space, which, in the case of density operators on a Hilbert space, corresponds to the classical probability distributions appearing in the maximal decomposition by Koashi and Imoto (Phys. Rev. A 66, 022318, 2002). We show that we can access by a Schwarz or completely positive channel at most the classical part of a statistical experiment if we do not disturb the states. We apply this result to the broadcasting problem of a statistical experiment. We also show that the classical part of the direct product of statistical experiments is the direct product of the classical parts of the statistical experiments. The proof of the latter result is based on the theorem that the direct product of minimal sufficient statistical experiments is also minimal sufficient.

  3. UNCERTAINTY ON RADIATION DOSES ESTIMATED BY BIOLOGICAL AND RETROSPECTIVE PHYSICAL METHODS.

    PubMed

    Ainsbury, Elizabeth A; Samaga, Daniel; Della Monaca, Sara; Marrale, Maurizio; Bassinet, Celine; Burbidge, Christopher I; Correcher, Virgilio; Discher, Michael; Eakins, Jon; Fattibene, Paola; Güçlü, Inci; Higueras, Manuel; Lund, Eva; Maltar-Strmecki, Nadica; McKeever, Stephen; Rääf, Christopher L; Sholom, Sergey; Veronese, Ivan; Wieser, Albrecht; Woda, Clemens; Trompier, Francois

    2018-03-01

    Biological and physical retrospective dosimetry are recognised as key techniques to provide individual estimates of dose following unplanned exposures to ionising radiation. Whilst there has been a relatively large amount of recent development in the biological and physical procedures, development of statistical analysis techniques has failed to keep pace. The aim of this paper is to review the current state of the art in uncertainty analysis techniques across the 'EURADOS Working Group 10-Retrospective dosimetry' members, to give concrete examples of implementation of the techniques recommended in the international standards, and to further promote the use of Monte Carlo techniques to support characterisation of uncertainties. It is concluded that sufficient techniques are available and in use by most laboratories for acute, whole body exposures to highly penetrating radiation, but further work will be required to ensure that statistical analysis is always wholly sufficient for the more complex exposure scenarios.

  4. The beta distribution: A statistical model for world cloud cover

    NASA Technical Reports Server (NTRS)

    Falls, L. W.

    1973-01-01

    Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution is proposed to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated, and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
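    For reference, the beta probability density on [0, 1], with the cloud-cover fraction playing the role of x, is
    \[
    f(x;\alpha,\beta) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}, \qquad 0 \le x \le 1,\ \alpha,\beta > 0,
    \]
    whose two shape parameters allow the U-shaped, J-shaped, and unimodal forms exhibited by empirical cloud-cover distributions.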

  5. An empirical approach to sufficient similarity in dose-responsiveness: Utilization of statistical distance as a similarity measure.

    EPA Science Inventory

    Using statistical equivalence testing logic and mixed model theory, an approach has been developed that extends the work of Stork et al. (JABES, 2008) to define sufficient similarity in dose-response for chemical mixtures containing the same chemicals with different ratios ...

  6. Six Sigma Quality Management System and Design of Risk-based Statistical Quality Control.

    PubMed

    Westgard, James O; Westgard, Sten A

    2017-03-01

    Six sigma concepts provide a quality management system (QMS) with many useful tools for managing quality in medical laboratories. This Six Sigma QMS is driven by the quality required for the intended use of a test. The most useful form for this quality requirement is the allowable total error. Calculation of a sigma-metric provides the best predictor of risk for an analytical examination process, as well as a design parameter for selecting the statistical quality control (SQC) procedure necessary to detect medically important errors. Simple point estimates of sigma at medical decision concentrations are sufficient for laboratory applications. Copyright © 2016 Elsevier Inc. All rights reserved.
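    The sigma-metric referred to above is commonly computed at a medical decision concentration as
    \[
    \sigma = \frac{\mathrm{TE}_a - \lvert \mathrm{bias} \rvert}{\mathrm{CV}},
    \]
    with the allowable total error TEa, the observed bias, and the imprecision CV all expressed in percent (or all in concentration units).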

  7. Sampling methods for amphibians in streams in the Pacific Northwest.

    Treesearch

    R. Bruce Bury; Paul Stephen Corn

    1991-01-01

    Methods describing how to sample aquatic and semiaquatic amphibians in small streams and headwater habitats in the Pacific Northwest are presented. We developed a technique that samples 10-meter stretches of selected streams, which was adequate to detect presence or absence of amphibian species and provided sample sizes statistically sufficient to compare abundance of...

  8. Statistically rigorous calculations do not support common input and long-term synchronization of motor-unit firings.

    PubMed

    De Luca, Carlo J; Kline, Joshua C

    2014-12-01

    Over the past four decades, various methods have been implemented to measure synchronization of motor-unit firings. In this work, we provide evidence that prior reports of the existence of universal common inputs to all motoneurons and the presence of long-term synchronization are misleading, because they did not use sufficiently rigorous statistical tests to detect synchronization. We developed a statistically based method (SigMax) for computing synchronization and tested it with data from 17,736 motor-unit pairs containing 1,035,225 firing instances from the first dorsal interosseous and vastus lateralis muscles--a data set one order of magnitude greater than that reported in previous studies. Only firing data, obtained from surface electromyographic signal decomposition with >95% accuracy, were used in the study. The data were not subjectively selected in any manner. Because of the size of our data set and the statistical rigor inherent to SigMax, we have confidence that the synchronization values that we calculated provide an improved estimate of physiologically driven synchronization. Compared with three other commonly used techniques, ours revealed three types of discrepancies that result from failing to use sufficient statistical tests necessary to detect synchronization. 1) On average, the z-score method falsely detected synchronization at 16 separate latencies in each motor-unit pair. 2) The cumulative sum method missed one out of every four synchronization identifications found by SigMax. 3) The common input assumption method identified synchronization from 100% of motor-unit pairs studied. SigMax revealed that only 50% of motor-unit pairs actually manifested synchronization. Copyright © 2014 the American Physiological Society.

  9. A Methodology for Conducting Space Utilization Studies within Department of Defense Medical Facilities

    DTIC Science & Technology

    1992-07-01

    database programs, such as dBase or Microsoft Excel, to yield statistical reports that can profile the health care facility. Ladeen (1989) feels that the...service specific space status report would be beneficial to the specific service(s) under study, it would not provide sufficient data for facility-wide...change in the Master Space Plan. The revised methodology also provides a mechanism and forum for space management education within the facility. The

  10. Targeted On-Demand Team Performance App Development

    DTIC Science & Technology

    2016-10-01

    from three sites; 6) Preliminary analysis indicates a larger-than-estimated effect size and the study is sufficiently powered for generalizable outcomes...statistical analyses, and examine any resulting qualitative data for trends or connections to statistical outcomes. On Schedule 21 Predictive...Preliminary analysis indicates a larger-than-estimated effect size and the study is sufficiently powered for generalizable outcomes. What opportunities for

  11. Georg Rasch and Benjamin Wright's Struggle with the Unidimensional Polytomous Model with Sufficient Statistics

    ERIC Educational Resources Information Center

    Andrich, David

    2016-01-01

    This article reproduces correspondence between Georg Rasch of The University of Copenhagen and Benjamin Wright of The University of Chicago in the period from January 1966 to July 1967. This correspondence reveals their struggle to operationalize a unidimensional measurement model with sufficient statistics for responses in a set of ordered…

  12. On the statistical assessment of classifiers using DNA microarray data

    PubMed Central

    Ancona, N; Maglietta, R; Piepoli, A; D'Addabbo, A; Cotugno, R; Savino, M; Liuni, S; Carella, M; Pesole, G; Perri, F

    2006-01-01

    Background: In this paper we present a method for the statistical assessment of cancer predictors which make use of gene expression profiles. The methodology is applied to a new data set of microarray gene expression data collected in Casa Sollievo della Sofferenza Hospital, Foggia – Italy. The data set is made up of normal (22) and tumor (25) specimens extracted from 25 patients affected by colon cancer. We propose to give answers to some questions which are relevant for the automatic diagnosis of cancer such as: Is the size of the available data set sufficient to build accurate classifiers? What is the statistical significance of the associated error rates? In what ways can accuracy be considered dependent on the adopted classification scheme? How many genes are correlated with the pathology and how many are sufficient for an accurate colon cancer classification? The method we propose answers these questions whilst avoiding the potential pitfalls hidden in the analysis and interpretation of microarray data. Results: We estimate the generalization error, evaluated through the Leave-K-Out Cross Validation error, for three different classification schemes by varying the number of training examples and the number of the genes used. The statistical significance of the error rate is measured by using a permutation test. We provide a statistical analysis in terms of the frequencies of the genes involved in the classification. Using the whole set of genes, we found that the Weighted Voting Algorithm (WVA) classifier learns the distinction between normal and tumor specimens with 25 training examples, providing e = 21% (p = 0.045) as an error rate. This remains constant even when the number of examples increases. Moreover, Regularized Least Squares (RLS) and Support Vector Machines (SVM) classifiers can learn with only 15 training examples, with an error rate of e = 19% (p = 0.035) and e = 18% (p = 0.037) respectively. Moreover, the error rate decreases as the training set size increases, reaching its best performances with 35 training examples. In this case, RLS and SVM have error rates of e = 14% (p = 0.027) and e = 11% (p = 0.019). Concerning the number of genes, we found about 6000 genes (p < 0.05) correlated with the pathology, resulting from the signal-to-noise statistic. Moreover, the performances of RLS and SVM classifiers do not change when 74% of the genes are used; they progressively decline, reaching e = 16% (p < 0.05) when only 2 genes are employed. The biological relevance of a set of genes determined by our statistical analysis and the major roles they play in colorectal tumorigenesis are discussed. Conclusions: The method proposed provides statistically significant answers to precise questions relevant for the diagnosis and prognosis of cancer. We found that, with as few as 15 examples, it is possible to train statistically significant classifiers for colon cancer diagnosis. As for the definition of the number of genes sufficient for a reliable classification of colon cancer, our results suggest that it depends on the accuracy required. PMID:16919171
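    A compact sketch of the assessment loop described above (cross-validated error rate plus a label-permutation test of its significance), using a generic linear classifier and synthetic data in place of the study's WVA/RLS/SVM classifiers and leave-K-out scheme:

    ```python
    # Cross-validated error rate plus a label-permutation test of its significance,
    # using a generic linear classifier and synthetic data in place of the study's
    # WVA/RLS/SVM classifiers and leave-K-out scheme.
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    def cv_error_with_permutation_test(X, y, n_perm=100, seed=0):
        rng = np.random.default_rng(seed)
        clf = LinearSVC(max_iter=10000)
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
        err = 1.0 - cross_val_score(clf, X, y, cv=cv).mean()
        perm_errs = np.array([1.0 - cross_val_score(clf, X, rng.permutation(y), cv=cv).mean()
                              for _ in range(n_perm)])
        p = (np.sum(perm_errs <= err) + 1) / (n_perm + 1)  # chance of doing this well by luck
        return err, p

    rng = np.random.default_rng(7)                 # toy stand-in for expression data:
    y = np.repeat([0, 1], 20)                      # 40 specimens, two classes
    X = rng.normal(size=(40, 500))                 # 500 "genes", 20 of them informative
    X[y == 1, :20] += 1.0
    print(cv_error_with_permutation_test(X, y))
    ```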

  13. Reliability and statistical power analysis of cortical and subcortical FreeSurfer metrics in a large sample of healthy elderly.

    PubMed

    Liem, Franziskus; Mérillat, Susan; Bezzola, Ladina; Hirsiger, Sarah; Philipp, Michel; Madhyastha, Tara; Jäncke, Lutz

    2015-03-01

    FreeSurfer is a tool to quantify cortical and subcortical brain anatomy automatically and noninvasively. Previous studies have reported reliability and statistical power analyses in relatively small samples or only selected one aspect of brain anatomy. Here, we investigated reliability and statistical power of cortical thickness, surface area, volume, and the volume of subcortical structures in a large sample (N=189) of healthy elderly subjects (64+ years). Reliability (intraclass correlation coefficient) of cortical and subcortical parameters is generally high (cortical: ICCs>0.87, subcortical: ICCs>0.95). Surface-based smoothing increases reliability of cortical thickness maps, while it decreases reliability of cortical surface area and volume. Nevertheless, statistical power of all measures benefits from smoothing. When aiming to detect a 10% difference between groups, the number of subjects required to test effects with sufficient power over the entire cortex varies between cortical measures (cortical thickness: N=39, surface area: N=21, volume: N=81; 10mm smoothing, power=0.8, α=0.05). For subcortical regions this number is between 16 and 76 subjects, depending on the region. We also demonstrate the advantage of within-subject designs over between-subject designs. Furthermore, we publicly provide a tool that allows researchers to perform a priori power analysis and sensitivity analysis to help evaluate previously published studies and to design future studies with sufficient statistical power. Copyright © 2014 Elsevier Inc. All rights reserved.
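    An a priori power calculation in the spirit of the tool described (the mean, standard deviation, and 10% difference below are assumed values for illustration; the published tool works from the measured reliability and variance estimates):

    ```python
    # A priori sample-size calculation for detecting a 10% group difference in a
    # morphometric measure; the mean and SD below are assumed illustrative values
    # (the published tool derives them from the measured reliability and variance).
    from statsmodels.stats.power import TTestIndPower, TTestPower

    mean, sd = 2.5, 0.40                 # e.g. cortical thickness in mm (assumed)
    effect = 0.10 * mean / sd            # Cohen's d for a 10% difference

    n_between = TTestIndPower().solve_power(effect_size=effect, power=0.8, alpha=0.05)
    # Paired test shown for comparison; strictly its effect size should use the SD of
    # the within-subject change scores, which is usually smaller still.
    n_within = TTestPower().solve_power(effect_size=effect, power=0.8, alpha=0.05)
    print(f"d = {effect:.2f}: ~{n_between:.0f} subjects per group (between-subject) "
          f"vs ~{n_within:.0f} subjects (within-subject)")
    ```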

  14. Direct measurement of the biphoton Wigner function through two-photon interference

    PubMed Central

    Douce, T.; Eckstein, A.; Walborn, S. P.; Khoury, A. Z.; Ducci, S.; Keller, A.; Coudreau, T.; Milman, P.

    2013-01-01

    The Hong-Ou-Mandel (HOM) experiment was a benchmark in quantum optics, evidencing the non-classical nature of photon pairs, later generalized to quantum systems with either bosonic or fermionic statistics. We show that a simple modification in the well-known and widely used HOM experiment provides the direct measurement of the Wigner function. We apply our results to one of the most reliable quantum systems, consisting of biphotons generated by parametric down conversion. A consequence of our results is that a negative value of the Wigner function is a sufficient condition for non-gaussian entanglement between two photons. In the general case, the Wigner function provides all the required information to infer entanglement using well known necessary and sufficient criteria. The present work offers a new vision of the HOM experiment that further develops its possibilities to realize fundamental tests of quantum mechanics using simple optical set-ups. PMID:24346262

  15. Ignoring the Innocent: Non-combatants in Urban Operations and in Military Models and Simulations

    DTIC Science & Technology

    2006-01-01

    such a model yields is a sufficiency theorem, a single run does not provide any information on the robustness of such theorems. That is, given that...often formally resolvable via inspection, simple differentiation, the implicit function theorem, comparative statics, and so on. The only way to... Pythagoras, and Bactowars. For each, Grieger discusses model parameters, data collection, terrain, and other features. Grieger also discusses

  16. Hypothesis-Testing Demands Trustworthy Data—A Simulation Approach to Inferential Statistics Advocating the Research Program Strategy

    PubMed Central

    Krefeld-Schwalb, Antonia; Witte, Erich H.; Zenker, Frank

    2018-01-01

    In psychology as elsewhere, the main statistical inference strategy to establish empirical effects is null-hypothesis significance testing (NHST). The recent failure to replicate allegedly well-established NHST-results, however, implies that such results lack sufficient statistical power, and thus feature unacceptably high error-rates. Using data-simulation to estimate the error-rates of NHST-results, we advocate the research program strategy (RPS) as a superior methodology. RPS integrates Frequentist with Bayesian inference elements, and leads from a preliminary discovery against a (random) H0-hypothesis to a statistical H1-verification. Not only do RPS-results feature significantly lower error-rates than NHST-results, RPS also addresses key-deficits of a “pure” Frequentist and a standard Bayesian approach. In particular, RPS aggregates underpowered results safely. RPS therefore provides a tool to regain the trust the discipline had lost during the ongoing replicability-crisis. PMID:29740363
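    A toy version of the kind of data-simulation used to estimate such error rates: many small two-group studies are simulated, with the effect real in a fixed share of them, and the proportion of significant results that are false is counted (all numbers illustrative; the RPS discovery/verification criteria themselves are not reproduced here).

    ```python
    # Toy data-simulation of NHST error rates: many small two-group studies, with the
    # effect real in a fixed share of them. All numbers are illustrative; the RPS
    # discovery/verification criteria themselves are not reproduced here.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_studies, n_per_group, d, share_true = 5000, 20, 0.4, 0.3

    true_pos = false_pos = 0
    effect_real = rng.random(n_studies) < share_true
    for real in effect_real:
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(d if real else 0.0, 1.0, n_per_group)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            true_pos += real
            false_pos += not real

    print("estimated power:", round(true_pos / effect_real.sum(), 2))
    print("share of significant results that are false:",
          round(false_pos / (false_pos + true_pos), 2))
    ```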

  17. Hypothesis-Testing Demands Trustworthy Data-A Simulation Approach to Inferential Statistics Advocating the Research Program Strategy.

    PubMed

    Krefeld-Schwalb, Antonia; Witte, Erich H; Zenker, Frank

    2018-01-01

    In psychology as elsewhere, the main statistical inference strategy to establish empirical effects is null-hypothesis significance testing (NHST). The recent failure to replicate allegedly well-established NHST-results, however, implies that such results lack sufficient statistical power, and thus feature unacceptably high error-rates. Using data-simulation to estimate the error-rates of NHST-results, we advocate the research program strategy (RPS) as a superior methodology. RPS integrates Frequentist with Bayesian inference elements, and leads from a preliminary discovery against a (random) H0-hypothesis to a statistical H1-verification. Not only do RPS-results feature significantly lower error-rates than NHST-results, RPS also addresses key-deficits of a "pure" Frequentist and a standard Bayesian approach. In particular, RPS aggregates underpowered results safely. RPS therefore provides a tool to regain the trust the discipline had lost during the ongoing replicability-crisis.

  18. From innervation density to tactile acuity: 1. Spatial representation.

    PubMed

    Brown, Paul B; Koerber, H Richard; Millecchia, Ronald

    2004-06-11

    We tested the hypothesis that the population receptive field representation (a superposition of the excitatory receptive field areas of cells responding to a tactile stimulus) provides spatial information sufficient to mediate one measure of static tactile acuity. In psychophysical tests, two-point discrimination thresholds on the hindlimbs of adult cats varied as a function of stimulus location and orientation, as they do in humans. A statistical model of the excitatory low threshold mechanoreceptive fields of spinocervical, postsynaptic dorsal column and spinothalamic tract neurons was used to simulate the population receptive field representations in this neural population of the one- and two-point stimuli used in the psychophysical experiments. The simulated and observed thresholds were highly correlated. Simulated and observed thresholds' relations to physiological and anatomical variables such as stimulus location and orientation, receptive field size and shape, map scale, and innervation density were strikingly similar. Simulated and observed threshold variations with receptive field size and map scale obeyed simple relationships predicted by the signal detection model, and were statistically indistinguishable from each other. The population receptive field representation therefore contains information sufficient for this discrimination.

  19. Unsupervised Topic Discovery by Anomaly Detection

    DTIC Science & Technology

    2013-09-01

    Kullback, and R. A. Leibler, “On information and sufficiency,” Annals of Mathematical Statistics, vol. 22, no. 1, pp. 79–86, 1951. [14] S. Basu, A...read known publicly. There is a strong interest in the analysis of these opinions and comments as they provide useful information about the sentiments...them as topics. The difficulty in this approach is finding a good set of keywords that accurately represents the documents. The method used to

  20. Design and Feasibility Assessment of a Retrospective Epidemiological Study of Coal-Fired Power Plant Emissions in the Pittsburgh Pennsylvania Region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard A. Bilonick; Daniel Connell; Evelyn Talbott

    2006-12-20

    Eighty-nine (89) percent of the electricity supplied in the 35-county Pittsburgh region (comprising parts of the states of Pennsylvania, Ohio, West Virginia, and Maryland) is generated by coal-fired power plants, making this an ideal region in which to study the effects of the fine airborne particulates designated as PM2.5 emitted by the combustion of coal. This report demonstrates that during the period from 1999-2006 (1) sufficient and extensive exposure data, in particular samples of speciated PM2.5 components from 1999 to 2003, and including gaseous co-pollutants and weather, have been collected, (2) sufficient and extensive mortality, morbidity, and related health outcomes data are readily available, and (3) the relationship between health effects and fine particulates can most likely be satisfactorily characterized using a combination of sophisticated statistical methodologies including latent variable modeling (LVM) and generalized linear autoregressive moving average (GLARMA) time series analysis. This report provides detailed information on the available exposure data and the available health outcomes data for the construction of a comprehensive database suitable for analysis, illustrates the application of various statistical methods to characterize the relationship between health effects and exposure, and provides a road map for conducting the proposed study. In addition, a detailed work plan for conducting the study is provided and includes a list of tasks and an estimated budget. A substantial portion of the total study cost is attributed to the cost of analyzing a large number of archived PM2.5 filters. Analysis of a representative sample of the filters supports the reliability of this invaluable but as-yet untapped resource. These filters hold the key to having sufficient data on the components of PM2.5 but have a limited shelf life. If the archived filters are not analyzed promptly, the important and costly information they contain will be lost.

  1. Statistics of work performed on a forced quantum oscillator.

    PubMed

    Talkner, Peter; Burada, P Sekhar; Hänggi, Peter

    2008-07-01

    Various aspects of the statistics of work performed by an external classical force on a quantum mechanical system are elucidated for a driven harmonic oscillator. In this special case two parameters are introduced that are sufficient to completely characterize the force protocol. Explicit results for the characteristic function of work and the corresponding probability distribution are provided and discussed for three different types of initial states of the oscillator: microcanonical, canonical, and coherent states. Depending on the choice of the initial state the probability distributions of the performed work may greatly differ. This result in particular also holds true for identical force protocols. General fluctuation and work theorems holding for microcanonical and canonical initial states are confirmed.

  2. Statistical foundations of liquid-crystal theory

    PubMed Central

    Seguin, Brian; Fried, Eliot

    2013-01-01

    We develop a mechanical theory for systems of rod-like particles. Central to our approach is the assumption that the external power expenditure for any subsystem of rods is independent of the underlying frame of reference. This assumption is used to derive the basic balance laws for forces and torques. By considering inertial forces on par with other forces, these laws hold relative to any frame of reference, inertial or noninertial. Finally, we introduce a simple set of constitutive relations to govern the interactions between rods and find restrictions necessary and sufficient for these laws to be consistent with thermodynamics. Our framework provides a foundation for a statistical mechanical derivation of the macroscopic balance laws governing liquid crystals. PMID:23772091

  3. Issues in the classification of disease instances with ontologies.

    PubMed

    Burgun, Anita; Bodenreider, Olivier; Jacquelinet, Christian

    2005-01-01

    Ontologies define classes of entities and their interrelations. They are used to organize data according to a theory of the domain. Towards that end, ontologies provide class definitions (i.e., the necessary and sufficient conditions for defining class membership). In medical ontologies, it is often difficult to establish such definitions for diseases. We use three examples (anemia, leukemia and schizophrenia) to illustrate the limitations of ontologies as classification resources. We show that eligibility criteria are often more useful than the Aristotelian definitions traditionally used in ontologies. Examples of eligibility criteria for diseases include complex predicates such as ' x is an instance of the class C when at least n criteria among m are verified' and 'symptoms must last at least one month if not treated, but less than one month, if effectively treated'. References to normality and abnormality are often found in disease definitions, but the operational definition of these references (i.e., the statistical and contextual information necessary to define them) is rarely provided. We conclude that knowledge bases that include probabilistic and statistical knowledge as well as rule-based criteria are more useful than Aristotelian definitions for representing the predicates defined by necessary and sufficient conditions. Rich knowledge bases are needed to clarify the relations between individuals and classes in various studies and applications. However, as ontologies represent relations among classes, they can play a supporting role in disease classification services built primarily on knowledge bases.
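    An illustration of the "at least n criteria among m" style of eligibility predicate contrasted above with necessary-and-sufficient definitions (the criteria below are hypothetical):

    ```python
    # Hypothetical "at least n of m criteria" eligibility predicate, the style of
    # class-membership rule contrasted above with Aristotelian definitions.
    from typing import Callable, Dict, List

    Criterion = Callable[[Dict], bool]

    def at_least_n_of(n: int, criteria: List[Criterion]) -> Callable[[Dict], bool]:
        def predicate(case: Dict) -> bool:
            return sum(1 for c in criteria if c(case)) >= n
        return predicate

    criteria = [                                  # purely illustrative criteria
        lambda c: c.get("symptom_duration_days", 0) >= 30,
        lambda c: c.get("marker_x", 0.0) > 2.5,
        lambda c: bool(c.get("family_history", False)),
        lambda c: bool(c.get("imaging_finding", False)),
    ]
    is_instance_of_C = at_least_n_of(2, criteria)
    print(is_instance_of_C({"symptom_duration_days": 45, "marker_x": 3.1}))  # True
    ```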

  4. A new hearing protector rating: The Noise Reduction Statistic for use with A weighting (NRSA).

    NASA Astrophysics Data System (ADS)

    Berger, Elliott H.; Gauger, Dan

    2004-05-01

    An important question to ask in regard to hearing protection devices (HPDs) is how much hearing protection they can provide. With respect to the law, at least, this question was answered in 1979 when the U. S. Environmental Protection Agency (EPA) promulgated a labeling regulation specifying a Noise Reduction Rating (NRR) measured in decibels (dB). In the intervening 25 years, many concerns have arisen over this regulation. Currently the EPA is considering proposing a revised rule. This report examines the relevant issues in order to provide recommendations for new ratings and a new method of obtaining the test data. The conclusion is that a Noise Reduction Statistic for use with A weighting (NRSA), an A-A' rating computed in a manner that considers both intersubject and interspectrum variation in protection, yields sufficient precision. Two such statistics ought to be specified on the primary package label: the smaller one to indicate the protection that is possible for most users to exceed, and a larger one such that the range between the two numbers conveys to the user the uncertainty in protection provided. Guidance on how to employ these numbers, and a suggestion for an additional, more precise, graphically oriented rating to be provided on a secondary label, are also included.

  5. An audit of the statistics and the comparison with the parameter in the population

    NASA Astrophysics Data System (ADS)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size needed to closely estimate the statistics for particular parameters has long been an issue. Although the sample size may have been calculated with reference to the objective of the study, it is difficult to confirm whether the statistics are close to the parameters for a particular population. All this while, a guideline that uses a p-value less than 0.05 has been widely used as inferential evidence. Therefore, this study audited results that were analyzed from various sub-samples and statistical analyses and compared the results with the parameters in three different populations. Eight types of statistical analysis and eight sub-samples for each statistical analysis were analyzed. Results found that the statistics were consistent and close to the parameters when the sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters that involve categorical variables compared with numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.

  6. FNA, core biopsy, or both for the diagnosis of lung carcinoma: Obtaining sufficient tissue for a specific diagnosis and molecular testing.

    PubMed

    Coley, Shana M; Crapanzano, John P; Saqi, Anjali

    2015-05-01

    Increasingly, minimally invasive procedures are performed to assess lung lesions and stage lung carcinomas. In cases of advanced-stage lung cancer, the biopsy may provide the only diagnostic tissue. The aim of this study was to determine which method--fine-needle aspiration (FNA), core biopsy (CBx), or both (B)--is optimal for providing sufficient tissue for rendering a specific diagnosis and pursuing molecular studies for guiding tumor-specific treatment. A search was performed for computed tomography-guided lung FNA, CBx, or B cases with rapid onsite evaluation. Carcinomas were assessed for the adequacy to render a specific diagnosis; this was defined as enough refinement to subtype a primary carcinoma or to assess a metastatic origin morphologically and/or immunohistochemically. In cases of primary lung adenocarcinoma, the capability of each modality to yield sufficient tissue for molecular studies (epidermal growth factor receptor, KRAS, or anaplastic lymphoma kinase) was also assessed. There were 210 cases, and 134 represented neoplasms, including 115 carcinomas. For carcinomas, a specific diagnosis was reached in 89% of FNA cases (33 of 37), 98% of CBx cases (43 of 44), and 100% of B cases (34 of 34). For primary lung adenocarcinomas, adequate tissue remained to perform molecular studies in 94% of FNA cases (16 of 17), 100% of CBx cases (19 of 19), and 86% of B cases (19 of 22). No statistical difference was found among the modalities for either reaching a specific diagnosis (p = .07, Fisher exact test) or providing sufficient tissue for molecular studies (p = .30, Fisher exact test). The results suggest that FNA, CBx, and B are comparable for arriving at a specific diagnosis and having sufficient tissue for molecular studies: they specifically attained the diagnostic and prognostic goals of minimally invasive procedures for lung carcinoma. © 2015 American Cancer Society.

  7. Statistical robustness of machine-learning estimates for characterizing a groundwater-surface water system, Southland, New Zealand

    NASA Astrophysics Data System (ADS)

    Friedel, M. J.; Daughney, C.

    2016-12-01

    The development of a successful surface-groundwater management strategy depends on the quality of data provided for analysis. This study evaluates the statistical robustness when using a modified self-organizing map (MSOM) technique to estimate missing values for three hypersurface models: synoptic groundwater-surface water hydrochemistry, time-series of groundwater-surface water hydrochemistry, and mixed-survey (combination of groundwater-surface water hydrochemistry and lithologies) hydrostratigraphic unit data. These models of increasing complexity are developed and validated based on observations from the Southland region of New Zealand. In each case, the estimation method is sufficiently robust to cope with groundwater-surface water hydrochemistry vagaries due to sample size and extreme data insufficiency, even when >80% of the data are missing. The estimation of surface water hydrochemistry time series values enabled the evaluation of seasonal variation, and the imputation of lithologies facilitated the evaluation of hydrostratigraphic controls on groundwater-surface water interaction. The robust statistical results for groundwater-surface water models of increasing data complexity provide justification to apply the MSOM technique in other regions of New Zealand and abroad.

  8. Box-Counting Dimension Revisited: Presenting an Efficient Method of Minimizing Quantization Error and an Assessment of the Self-Similarity of Structural Root Systems

    PubMed Central

    Bouda, Martin; Caplan, Joshua S.; Saiers, James E.

    2016-01-01

    Fractal dimension (FD), estimated by box-counting, is a metric used to characterize plant anatomical complexity or space-filling characteristic for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantization error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterize the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitized in 3D and subjected to box-counts. A pattern search algorithm was used to minimize QE by optimizing grid placement and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE, due to both grid position and orientation, was a significant source of error in FD estimates, but pattern search provided an efficient means of minimizing it. Pattern search had higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitizations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did not characterize the scaling of our digitizations well: the scaling exponent was a function of scale. Our findings serve as a caution against applying FD under the assumption of statistical self-similarity without rigorously evaluating it first. PMID:26925073
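    A compact sketch of box counting with grid-offset optimization to reduce quantization error (a crude random search over translations stands in for the paper's pattern-search algorithm, and rotation optimization is omitted):

    ```python
    # 3D box counting with grid-offset optimization to reduce quantization error.
    # A crude random search over grid translations stands in for the paper's
    # pattern-search algorithm; rotation optimization is omitted for brevity.
    import numpy as np

    def boxes_occupied(points, size, offset):
        idx = np.floor((points - offset) / size).astype(int)
        return len({tuple(i) for i in idx})

    def min_count(points, s, n_offsets=25, seed=0):
        rng = np.random.default_rng(seed)
        offsets = rng.uniform(0.0, s, size=(n_offsets, 3))
        return min(boxes_occupied(points, s, o) for o in offsets)

    def box_counting_dimension(points, sizes):
        counts = np.array([min_count(points, s) for s in sizes])
        # slope of log N(s) vs log(1/s); local slopes should also be inspected
        # before treating the object as statistically self-similar
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope, counts

    # toy point cloud along a noisy 3D curve (a stand-in for a digitized root system)
    t = np.linspace(0, 4 * np.pi, 4000)
    pts = np.c_[np.cos(t), np.sin(t), 0.1 * t]
    pts += np.random.default_rng(5).normal(0.0, 0.01, pts.shape)
    sizes = np.geomspace(0.05, 1.0, 8)
    print(box_counting_dimension(pts, sizes))     # slope near 1 for a curve-like object
    ```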

  9. Infants Segment Continuous Events Using Transitional Probabilities

    ERIC Educational Resources Information Center

    Stahl, Aimee E.; Romberg, Alexa R.; Roseberry, Sarah; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathryn

    2014-01-01

    Throughout their 1st year, infants adeptly detect statistical structure in their environment. However, little is known about whether statistical learning is a primary mechanism for event segmentation. This study directly tests whether statistical learning alone is sufficient to segment continuous events. Twenty-eight 7- to 9-month-old infants…

  10. Statistical Learning of Phonetic Categories: Insights from a Computational Approach

    ERIC Educational Resources Information Center

    McMurray, Bob; Aslin, Richard N.; Toscano, Joseph C.

    2009-01-01

    Recent evidence (Maye, Werker & Gerken, 2002) suggests that statistical learning may be an important mechanism for the acquisition of phonetic categories in the infant's native language. We examined the sufficiency of this hypothesis and its implications for development by implementing a statistical learning mechanism in a computational model…

  11. Statistics of optimal information flow in ensembles of regulatory motifs

    NASA Astrophysics Data System (ADS)

    Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan

    2018-02-01

    Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
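
    The field-theoretic calculation of the capacity distribution is not sketched here. As a generic numerical companion, the following computes the capacity of a small discrete channel by the standard Blahut-Arimoto iteration (the channel matrix is an invented example, unrelated to the regulatory motifs studied in the paper).

      import numpy as np

      def blahut_arimoto(P, tol=1e-9, max_iter=10000):
          """Capacity (bits) of a discrete memoryless channel with transition
          matrix P[i, j] = p(output j | input i), by Blahut-Arimoto iteration."""
          n_in = P.shape[0]
          r = np.full(n_in, 1.0 / n_in)              # input distribution
          for _ in range(max_iter):
              q = r[:, None] * P
              q /= q.sum(axis=0, keepdims=True)      # posterior p(input | output)
              log_c = (P * np.log(np.where(q > 0, q, 1.0))).sum(axis=1)
              r_new = np.exp(log_c)
              r_new /= r_new.sum()
              if np.max(np.abs(r_new - r)) < tol:
                  r = r_new
                  break
              r = r_new
          joint = r[:, None] * P                     # mutual information at optimum
          py = joint.sum(axis=0)
          mi = np.nansum(joint * np.log2(joint / (r[:, None] * py[None, :])))
          return mi, r

      # toy channel: binary symmetric, crossover 0.1 -> capacity about 0.531 bits
      P = np.array([[0.9, 0.1],
                    [0.1, 0.9]])
      print(blahut_arimoto(P))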

  12. Distinguishing Internet-facing ICS Devices Using PLC Programming Information

    DTIC Science & Technology

    2014-06-19

    Scanned ports and services included 21 FTP, 22 SSH, 23 Telnet, 25 SMTP, 143 IMAP, 161 SNMP, 443 HTTPS, 445 SMB, 1900 UPnP, 2323 Telnet, 3306 MySQL, 3389 RDP, 6379 Redis, 7777 Oracle, 8000 Qconn, and 8080 ... Sampling at 250 ms over the course of 10,000 samples provided sufficient data to test for statistically significant changes to ladder logic execution times ...

  13. Statistical foundations of liquid-crystal theory: I. Discrete systems of rod-like molecules.

    PubMed

    Seguin, Brian; Fried, Eliot

    2012-12-01

    We develop a mechanical theory for systems of rod-like particles. Central to our approach is the assumption that the external power expenditure for any subsystem of rods is independent of the underlying frame of reference. This assumption is used to derive the basic balance laws for forces and torques. By considering inertial forces on par with other forces, these laws hold relative to any frame of reference, inertial or noninertial. Finally, we introduce a simple set of constitutive relations to govern the interactions between rods and find restrictions necessary and sufficient for these laws to be consistent with thermodynamics. Our framework provides a foundation for a statistical mechanical derivation of the macroscopic balance laws governing liquid crystals.

  14. Brain fingerprinting classification concealed information test detects US Navy military medical information with P300

    PubMed Central

    Farwell, Lawrence A.; Richardson, Drew C.; Richardson, Graham M.; Furedy, John J.

    2014-01-01

    A classification concealed information test (CIT) used the “brain fingerprinting” method of applying P300 event-related potential (ERP) in detecting information that is (1) acquired in real life and (2) unique to US Navy experts in military medicine. Military medicine experts and non-experts were asked to push buttons in response to three types of text stimuli. Targets contain known information relevant to military medicine, are identified to subjects as relevant, and require pushing one button. Subjects are told to push another button to all other stimuli. Probes contain concealed information relevant to military medicine, and are not identified to subjects. Irrelevants contain equally plausible, but incorrect/irrelevant information. Error rate was 0%. Median and mean statistical confidences for individual determinations were 99.9% with no indeterminates (results lacking sufficiently high statistical confidence to be classified). We compared error rate and statistical confidence for determinations of both information present and information absent produced by classification CIT (Is a probe ERP more similar to a target or to an irrelevant ERP?) vs. comparison CIT (Does a probe produce a larger ERP than an irrelevant?) using P300 plus the late negative component (LNP; together, P300-MERMER). Comparison CIT produced a significantly higher error rate (20%) and lower statistical confidences: mean 67%; information-absent mean was 28.9%, less than chance (50%). We compared analysis using P300 alone with the P300 + LNP. P300 alone produced the same 0% error rate but significantly lower statistical confidences. These findings add to the evidence that the brain fingerprinting methods as described here provide sufficient conditions to produce less than 1% error rate and greater than 95% median statistical confidence in a CIT on information obtained in the course of real life that is characteristic of individuals with specific training, expertise, or organizational affiliation. PMID:25565941

  15. Use of the Rasch model for initial testing of fit statistics and rating scale diagnosis for a general anesthesia satisfaction questionnaire.

    PubMed

    Hawkins, Robert J; Kremer, Michael J; Swanson, Barbara; Fogg, Lou; Pierce, Penny; Pearson, Julie

    2014-01-01

    The level of patient satisfaction is the result of a complex set of interactions between the patient and the health care provider. It is important to quantify satisfaction with care because doing so involves the patient in the care experience and decreases the potential gap between expected and actual care delivered. We tested a preliminary 23-item instrument to measure patient satisfaction with general anesthesia care, using the rating scale Rasch model as the framework. Ten items showed sufficient evidence of stable fit statistics: 2 questions related to information provided, 2 related to the concern and kindness of the provider, and 1 question each for the provider's interpersonal skills, attention from the provider, feeling safe, well-being, privacy, and overall anesthesia satisfaction. Actions such as providing enough time to understand the anesthesia plan, answering questions about the anesthetic, showing kindness and concern, displaying good interpersonal skills, giving the patient adequate attention, and providing a safe environment that maintains privacy and a sense of well-being are well within the control of individual anesthesia providers and may improve care as perceived by the patient.

  16. Statistics and Title VII Proof: Prima Facie Case and Rebuttal.

    ERIC Educational Resources Information Center

    Whitten, David

    1978-01-01

    The method and means by which statistics can raise a prima facie case of Title VII violation are analyzed. A standard is identified that can be applied to determine whether a statistical disparity is sufficient to shift the burden to the employer to rebut a prima facie case of discrimination. (LBH)

  17. Sustaining food self-sufficiency of a nation: The case of Sri Lankan rice production and related water and fertilizer demands.

    PubMed

    Davis, Kyle Frankel; Gephart, Jessica A; Gunda, Thushara

    2016-04-01

    Rising human demand and climatic variability have created greater uncertainty regarding global food trade and its effects on the food security of nations. To reduce reliance on imported food, many countries have focused on increasing their domestic food production in recent years. With clear goals for the complete self-sufficiency of rice production, Sri Lanka provides an ideal case study for examining the projected growth in domestic rice supply, how this compares to future national demand, and what the associated impacts from water and fertilizer demands may be. Using national rice statistics and estimates of intensification, this study finds that improvements in rice production can feed 25.3 million Sri Lankans (compared to a projected population of 23.8 million people) by 2050. However, to achieve this growth, consumptive water use and nitrogen fertilizer application may need to increase by as much as 69% and 23%, respectively. This assessment demonstrates that targets for maintaining self-sufficiency should better incorporate avenues for improving resource use efficiency.

  18. Retrocausal Effects As A Consequence of Orthodox Quantum Mechanics Refined To Accommodate The Principle Of Sufficient Reason

    NASA Astrophysics Data System (ADS)

    Stapp, Henry P.

    2011-11-01

    The principle of sufficient reason asserts that anything that happens does so for a reason: no definite state of affairs can come into being unless there is a sufficient reason why that particular thing should happen. This principle is usually attributed to Leibniz, although the first recorded Western philosopher to use it was Anaximander of Miletus. The demand that nature be rational, in the sense that it be compatible with the principle of sufficient reason, conflicts with a basic feature of contemporary orthodox physical theory, namely the notion that nature's response to the probing action of an observer is determined by pure chance, and hence on the basis of absolutely no reason at all. This appeal to pure chance can be deemed to have no rational fundamental place in reason-based Western science. It is argued here, on the basis of the other basic principles of quantum physics, that in a world that conforms to the principle of sufficient reason, the usual quantum statistical rules will naturally emerge at the pragmatic level, in cases where the reason behind nature's choice of response is unknown, but that the usual statistics can become biased in an empirically manifest way when the reason for the choice is empirically identifiable. It is shown here that if the statistical laws of quantum mechanics were to be biased in this way then the basically forward-in-time unfolding of empirical reality described by orthodox quantum mechanics would generate the appearances of backward-time-effects of the kind that have been reported in the scientific literature.

  19. Finite Element Analysis of Reverberation Chambers

    NASA Technical Reports Server (NTRS)

    Bunting, Charles F.; Nguyen, Duc T.

    2000-01-01

    The primary motivating factor behind the initiation of this work was to provide a deterministic means of establishing the validity of the statistical methods that are recommended for the determination of fields that interact in an avionics system. The application of finite element analysis to reverberation chambers is the initial step required to establish a reasonable course of inquiry in this particularly data-intensive study. The use of computational electromagnetics provides a high degree of control of the "experimental" parameters that can be utilized in a simulation of reverberating structures. As the work evolved, four primary focus areas emerged: 1. The eigenvalue problem for the source-free case. 2. The development of an efficient complex eigensolver. 3. The application of a source for the TE and TM fields for statistical characterization. 4. The examination of shielding effectiveness in a reverberating environment. One early purpose of this work was to establish the utility of finite element techniques in the development of an extended low frequency statistical model for reverberation phenomena. By employing finite element techniques, structures of arbitrary complexity can be analyzed due to the use of triangular shape functions in the spatial discretization. The effects of both frequency stirring and mechanical stirring are presented. It is suggested that for low frequency operation the typical tuner size is inadequate to provide a sufficiently random field and that frequency stirring should be used instead. The results of the finite element analysis of the reverberation chamber illustrate the potential utility of a 2D representation for enhancing the basic statistical characteristics of the chamber when operating in a low frequency regime. The basic field statistics are verified for frequency stirring over a wide range of frequencies. Mechanical stirring is shown to provide an effective frequency deviation.

  20. Issues in the Classification of Disease Instances with Ontologies

    PubMed Central

    Burgun, Anita; Bodenreider, Olivier; Jacquelinet, Christian

    2006-01-01

    Ontologies define classes of entities and their interrelations. They are used to organize data according to a theory of the domain. Towards that end, ontologies provide class definitions (i.e., the necessary and sufficient conditions for defining class membership). In medical ontologies, it is often difficult to establish such definitions for diseases. We use three examples (anemia, leukemia and schizophrenia) to illustrate the limitations of ontologies as classification resources. We show that eligibility criteria are often more useful than the Aristotelian definitions traditionally used in ontologies. Examples of eligibility criteria for diseases include complex predicates such as ‘ x is an instance of the class C when at least n criteria among m are verified’ and ‘symptoms must last at least one month if not treated, but less than one month, if effectively treated’. References to normality and abnormality are often found in disease definitions, but the operational definition of these references (i.e., the statistical and contextual information necessary to define them) is rarely provided. We conclude that knowledge bases that include probabilistic and statistical knowledge as well as rule-based criteria are more useful than Aristotelian definitions for representing the predicates defined by necessary and sufficient conditions. Rich knowledge bases are needed to clarify the relations between individuals and classes in various studies and applications. However, as ontologies represent relations among classes, they can play a supporting role in disease classification services built primarily on knowledge bases. PMID:16160339
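
    As a tiny illustration of the kind of "at least n criteria among m" predicate quoted above (the criteria themselves are invented), a sketch:

      from typing import Callable, Iterable

      def at_least_n_of(n: int, criteria: Iterable[Callable[[dict], bool]]):
          """Build a predicate that is true when at least n of the given
          criteria hold for a record (a plain dict here)."""
          crits = list(criteria)
          def predicate(record: dict) -> bool:
              return sum(1 for c in crits if c(record)) >= n
          return predicate

      # hypothetical criteria for membership in some disease class C
      criteria = [
          lambda r: r.get("hemoglobin_g_dl", 99.0) < 12.0,
          lambda r: bool(r.get("fatigue", False)),
          lambda r: bool(r.get("pallor", False)),
      ]
      is_instance_of_C = at_least_n_of(2, criteria)
      print(is_instance_of_C({"hemoglobin_g_dl": 10.5, "fatigue": True}))  # True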

  1. Vibration Transmission through Rolling Element Bearings in Geared Rotor Systems

    DTIC Science & Technology

    1990-11-01

    ... and dynamic finite element techniques are used to develop the discrete vibration models, while the statistical energy analysis method is used for the broad... bearing system studies, geared rotor system studies, and statistical energy analysis. Each chapter is self-sufficient since it is written in a ...

  2. EvolQG - An R package for evolutionary quantitative genetics

    PubMed Central

    Melo, Diogo; Garcia, Guilherme; Hubbe, Alex; Assis, Ana Paula; Marroig, Gabriel

    2016-01-01

    We present an open source package for performing evolutionary quantitative genetics analyses in the R environment for statistical computing. Evolutionary theory shows that evolution depends critically on the available variation in a given population. When dealing with many quantitative traits this variation is expressed in the form of a covariance matrix, particularly the additive genetic covariance matrix or sometimes the phenotypic matrix, when the genetic matrix is unavailable and there is evidence the phenotypic matrix is sufficiently similar to the genetic matrix. Given this mathematical representation of available variation, the EvolQG package provides functions for calculation of relevant evolutionary statistics; estimation of sampling error; corrections for this error; matrix comparison via correlations, distances and matrix decomposition; analysis of modularity patterns; and functions for testing evolutionary hypotheses on taxa diversification. PMID:27785352

  3. The role of ensemble-based statistics in variational assimilation of cloud-affected observations from infrared imagers

    NASA Astrophysics Data System (ADS)

    Hacker, Joshua; Vandenberghe, Francois; Jung, Byoung-Jo; Snyder, Chris

    2017-04-01

    Effective assimilation of cloud-affected radiance observations from space-borne imagers, with the aim of improving cloud analysis and forecasting, has proven to be difficult. Large observation biases, nonlinear observation operators, and non-Gaussian innovation statistics present many challenges. Ensemble-variational data assimilation (EnVar) systems offer the benefits of flow-dependent background error statistics from an ensemble, and the ability of variational minimization to handle nonlinearity. The specific benefits of ensemble statistics, relative to static background errors more commonly used in variational systems, have not been quantified for the problem of assimilating cloudy radiances. A simple experiment framework is constructed with a regional NWP model and operational variational data assimilation system, to provide the basis for understanding the importance of ensemble statistics in cloudy radiance assimilation. Restricting the observations to those corresponding to clouds in the background forecast leads to innovations that are more Gaussian. The number of large innovations is reduced compared to the more general case of all observations, but not eliminated. The Huber norm is investigated to handle the fat tails of the distributions, and to allow more observations to be assimilated without the need for strict background checks that eliminate them. Comparing assimilation using only ensemble background error statistics with assimilation using only static background error statistics elucidates the importance of the ensemble statistics. Although the cost functions in both experiments converge to similar values after sufficient outer-loop iterations, the resulting cloud water, ice, and snow content are greater in the ensemble-based analysis. The subsequent forecasts from the ensemble-based analysis also retain more condensed water species, indicating that the local environment is more supportive of clouds. In this presentation we provide details that explain the apparent benefit from using ensembles for cloudy radiance assimilation in an EnVar context.
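
    As a small illustration of the Huber treatment of fat-tailed innovations mentioned above (the transition threshold is an assumption), the penalty is quadratic for small normalized innovations and linear beyond the cutoff:

      import numpy as np

      def huber(r, delta=1.5):
          """Huber penalty: quadratic for |r| <= delta, linear beyond, so that
          large normalized innovations are down-weighted relative to a purely
          quadratic (Gaussian) observation term."""
          a = np.abs(r)
          return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

      innovations = np.array([-4.0, -1.0, 0.2, 2.5, 6.0])
      print(huber(innovations))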

  4. A robust signalling system for land mobile satellite services

    NASA Technical Reports Server (NTRS)

    Irish, Dale; Shmith, Gary; Hart, Nick; Wines, Marie

    1989-01-01

    Presented here is a signalling system optimized to ensure expedient call set-up for satellite telephony services in a land mobile environment. In a land mobile environment, the satellite-to-mobile link is subject to impairments from multipath and shadowing phenomena, which result in signal amplitude and phase variations. Multipath, caused by signal scattering and reflections, produces variations small enough that sufficient link margin can be provided to compensate for them. Direct signal attenuation caused by shadowing due to buildings and vegetation may reach values in excess of 10 dB and commonly up to 20 dB; it is not practical to provide a link with sufficient margin to enable communication when the signal is blocked. When a moving vehicle passes these obstacles, the link experiences rapid changes in signal strength due to shadowing. Using statistical models of attenuation as a function of distance travelled, a communication strategy has been defined for the land mobile environment.

  5. DESCARTES' RULE OF SIGNS AND THE IDENTIFIABILITY OF POPULATION DEMOGRAPHIC MODELS FROM GENOMIC VARIATION DATA.

    PubMed

    Bhaskar, Anand; Song, Yun S

    2014-01-01

    The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the "folded" SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes' rule of signs for polynomials to the Laplace transform of piecewise continuous functions.

  6. DESCARTES’ RULE OF SIGNS AND THE IDENTIFIABILITY OF POPULATION DEMOGRAPHIC MODELS FROM GENOMIC VARIATION DATA1

    PubMed Central

    Bhaskar, Anand; Song, Yun S.

    2016-01-01

    The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the “folded” SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes’ rule of signs for polynomials to the Laplace transform of piecewise continuous functions. PMID:28018011

  7. The Content of Statistical Requirements for Authors in Biomedical Research Journals

    PubMed Central

    Liu, Tian-Yi; Cai, Si-Yu; Nie, Xiao-Lu; Lyu, Ya-Qi; Peng, Xiao-Xia; Feng, Guo-Shuang

    2016-01-01

    Background: Robust statistical design, sound statistical analysis, and standardized presentation are important for enhancing the quality and transparency of biomedical research. This systematic review was conducted to summarize the statistical reporting requirements introduced by biomedical research journals with an impact factor of 10 or above, so that researchers give serious consideration to statistical issues not only at the stage of data analysis but also at the stage of methodological design. Methods: Detailed statistical instructions for authors were downloaded from the homepage of each included journal or obtained directly from the editors via email. We then described the types and numbers of statistical guidelines introduced by different press groups. Items of statistical reporting guidelines, as well as particular requirements, were summarized by frequency and grouped into design, method of analysis, and presentation. Finally, updated statistical guidelines and particular requirements for improvement were summed up. Results: In total, 21 of 23 press groups introduced at least one statistical guideline. More than half of the press groups update their statistical instructions for authors as new statistical reporting guidelines are issued. In addition, 16 press groups, covering 44 journals, address particular statistical requirements. Most of the particular requirements focus on the performance of statistical analysis and transparency in statistical reporting, including "address issues relevant to research design, including participant flow diagram, eligibility criteria, and sample size estimation" and "statistical methods and the reasons." Conclusions: Statistical requirements for authors are becoming increasingly refined. They remind researchers to give sufficient consideration not only to statistical methods during research design but also to standardized statistical reporting, which would be beneficial in providing stronger evidence and making critical appraisal of the evidence more accessible. PMID:27748343

  8. The Content of Statistical Requirements for Authors in Biomedical Research Journals.

    PubMed

    Liu, Tian-Yi; Cai, Si-Yu; Nie, Xiao-Lu; Lyu, Ya-Qi; Peng, Xiao-Xia; Feng, Guo-Shuang

    2016-10-20

    Robust statistical design, sound statistical analysis, and standardized presentation are important for enhancing the quality and transparency of biomedical research. This systematic review was conducted to summarize the statistical reporting requirements introduced by biomedical research journals with an impact factor of 10 or above, so that researchers give serious consideration to statistical issues not only at the stage of data analysis but also at the stage of methodological design. Detailed statistical instructions for authors were downloaded from the homepage of each included journal or obtained directly from the editors via email. We then described the types and numbers of statistical guidelines introduced by different press groups. Items of statistical reporting guidelines, as well as particular requirements, were summarized by frequency and grouped into design, method of analysis, and presentation. Finally, updated statistical guidelines and particular requirements for improvement were summed up. In total, 21 of 23 press groups introduced at least one statistical guideline. More than half of the press groups update their statistical instructions for authors as new statistical reporting guidelines are issued. In addition, 16 press groups, covering 44 journals, address particular statistical requirements. Most of the particular requirements focus on the performance of statistical analysis and transparency in statistical reporting, including "address issues relevant to research design, including participant flow diagram, eligibility criteria, and sample size estimation" and "statistical methods and the reasons." Statistical requirements for authors are becoming increasingly refined. They remind researchers to give sufficient consideration not only to statistical methods during research design but also to standardized statistical reporting, which would be beneficial in providing stronger evidence and making critical appraisal of the evidence more accessible.

  9. Phenomenological constraints on the bulk viscosity of QCD

    NASA Astrophysics Data System (ADS)

    Paquet, Jean-François; Shen, Chun; Denicol, Gabriel; Jeon, Sangyong; Gale, Charles

    2017-11-01

    While small at very high temperature, the bulk viscosity of Quantum Chromodynamics is expected to grow in the confinement region. Although its precise magnitude and temperature-dependence in the cross-over region is not fully understood, recent theoretical and phenomenological studies provided evidence that the bulk viscosity can be sufficiently large to have measurable consequences on the evolution of the quark-gluon plasma. In this work, a Bayesian statistical analysis is used to establish probabilistic constraints on the temperature-dependence of bulk viscosity using hadronic measurements from RHIC and LHC.

  10. The Exponentially Embedded Family of Distributions for Effective Data Representation, Information Extraction, and Decision Making

    DTIC Science & Technology

    2013-03-01

    ... information extraction and learning from data. First of all, it admits sufficient statistics and therefore provides the means for selecting good models ... readily found, since the Kullback-Leibler divergence can be used to ascertain distances between PDFs for various hypothesis testing scenarios. ... The information content of T2(x) is D(p_{η1,η2}(t1, t2) || p_{η1,η2=0}(t1, t2)), i.e., the reduction in distance to the true PDF, where D(p1 || p2) denotes the Kullback-Leibler divergence.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merkley, Eric D.; Sego, Landon H.; Lin, Andy

    Adaptive processes in bacterial species can occur rapidly in laboratory culture, leading to genetic divergence between naturally occurring and laboratory-adapted strains. Differentiating wild and closely-related laboratory strains is clearly important for biodefense and bioforensics; however, DNA sequence data alone has thus far not provided a clear signature, perhaps due to lack of understanding of how diverse genome changes lead to adapted phenotypes. Protein abundance profiles from mass spectrometry-based proteomics analyses are a molecular measure of phenotype. Proteomics data contains sufficient information that powerful statistical methods can uncover signatures that distinguish wild strains of Yersinia pestis from laboratory-adapted strains.

  12. SDGs and Geospatial Frameworks: Data Integration in the United States

    NASA Astrophysics Data System (ADS)

    Trainor, T.

    2016-12-01

    Responding to the need to monitor a nation's progress towards meeting the Sustainable Development Goals (SDG) outlined in the 2030 U.N. Agenda requires the integration of earth observations with statistical information. The urban agenda proposed in SDG 11 challenges the global community to find a geospatial approach to monitor and measure inclusive, safe, resilient, and sustainable cities and communities. Target 11.7 identifies public safety, accessibility to green and public spaces, and the most vulnerable populations (i.e., women and children, older persons, and persons with disabilities) as the most important priorities of this goal. A challenge for both national statistical organizations and earth observation agencies in addressing SDG 11 is the requirement for detailed statistics at a sufficient spatial resolution to provide the basis for meaningful analysis of the urban population and city environments. Using an example for the city of Pittsburgh, this presentation proposes data and methods to illustrate how earth science and statistical data can be integrated to respond to Target 11.7. Finally, a preliminary series of data initiatives are proposed for extending this method to other global cities.

  13. Retrocausal Effects as a Consequence of Quantum Mechanics Refined to Accommodate the Principle of Sufficient Reason

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stapp, Henry P.

    2011-05-10

    The principle of sufficient reason asserts that anything that happens does so for a reason: no definite state of affairs can come into being unless there is a sufficient reason why that particular thing should happen. This principle is usually attributed to Leibniz, although the first recorded Western philosopher to use it was Anaximander of Miletus. The demand that nature be rational, in the sense that it be compatible with the principle of sufficient reason, conflicts with a basic feature of contemporary orthodox physical theory, namely the notion that nature's response to the probing action of an observer is determined by pure chance, and hence on the basis of absolutely no reason at all. This appeal to pure chance can be deemed to have no rational fundamental place in reason-based Western science. It is argued here, on the basis of the other basic principles of quantum physics, that in a world that conforms to the principle of sufficient reason, the usual quantum statistical rules will naturally emerge at the pragmatic level, in cases where the reason behind nature's choice of response is unknown, but that the usual statistics can become biased in an empirically manifest way when the reason for the choice is empirically identifiable. It is shown here that if the statistical laws of quantum mechanics were to be biased in this way then the basically forward-in-time unfolding of empirical reality described by orthodox quantum mechanics would generate the appearances of backward-time-effects of the kind that have been reported in the scientific literature.

  14. Spatial scan statistics for detection of multiple clusters with arbitrary shapes.

    PubMed

    Lin, Pei-Sheng; Kung, Yi-Hung; Clayton, Murray

    2016-12-01

    In applying scan statistics for public health research, it would be valuable to develop a detection method for multiple clusters that accommodates spatial correlation and covariate effects in an integrated model. In this article, we connect the concepts of the likelihood ratio (LR) scan statistic and the quasi-likelihood (QL) scan statistic to provide a series of detection procedures sufficiently flexible to apply to clusters of arbitrary shape. First, we use an independent scan model for detection of clusters and then a variogram tool to examine the existence of spatial correlation and regional variation based on residuals of the independent scan model. When the estimate of regional variation is significantly different from zero, a mixed QL estimating equation is developed to estimate coefficients of geographic clusters and covariates. We use the Benjamini-Hochberg procedure (1995) to find a threshold for p-values to address the multiple testing problem. A quasi-deviance criterion is used to regroup the estimated clusters to find geographic clusters with arbitrary shapes. We conduct simulations to compare the performance of the proposed method with other scan statistics. For illustration, the method is applied to enterovirus data from Taiwan. © 2016, The International Biometric Society.
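
    The quasi-likelihood and arbitrary-shape machinery of the paper is not sketched here. As a minimal illustration of the likelihood-ratio ingredient, the following computes a Kulldorff-style Poisson log likelihood ratio over circular candidate windows (all names and the toy data are invented).

      import numpy as np

      def poisson_scan_llr(cases, expected, in_window):
          """Kulldorff-style Poisson log likelihood ratio for one candidate
          window; cases/expected are per-region counts, in_window a mask."""
          C, E = cases.sum(), expected.sum()
          c, e = cases[in_window].sum(), expected[in_window].sum()
          if c <= e or c == C:                   # scan only elevated-risk windows
              return 0.0
          return c * np.log(c / e) + (C - c) * np.log((C - c) / (E - e))

      def circular_scan(coords, cases, expected, radii):
          """Best (center index, radius, LLR) over circular candidate windows
          centred on each region centroid."""
          best = (None, None, 0.0)
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
          for i in range(len(coords)):
              for r in radii:
                  llr = poisson_scan_llr(cases, expected, d[i] <= r)
                  if llr > best[2]:
                      best = (i, r, llr)
          return best

      # toy usage: five regions on a line with excess cases in the middle one
      coords = np.array([[0.0, 0.0], [1, 0], [2, 0], [3, 0], [4, 0]])
      cases = np.array([2, 3, 12, 3, 2])
      expected = np.full(5, cases.sum() / 5.0)
      print(circular_scan(coords, cases, expected, radii=[0.5, 1.0, 1.5]))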

  15. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains.

    PubMed

    Tataru, Paula; Hobolth, Asger

    2011-12-05

    Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences at the nucleotide, amino acid, or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore, we find that UNI is usually faster than EVD.
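
    The reference implementation is the authors' R code at the URL above. As a rough Python illustration of the EXPM idea, the expected time spent in state c on [0, T], conditioned on X(0) = a and X(T) = b, can be written as the integral over t of [exp(Qt)]_{ac} [exp(Q(T - t))]_{cb}, divided by [exp(QT)]_{ab}, and evaluated by simple quadrature (function name and example rates are assumptions):

      import numpy as np
      from scipy.linalg import expm

      def expected_dwell_time(Q, T, a, b, c, n_grid=200):
          """E[time spent in state c on [0, T] | X(0)=a, X(T)=b] for a CTMC with
          rate matrix Q, by numerical integration of matrix exponentials."""
          ts = np.linspace(0.0, T, n_grid)
          vals = np.array([expm(Q * t)[a, c] * expm(Q * (T - t))[c, b] for t in ts])
          return np.trapz(vals, ts) / expm(Q * T)[a, b]

      # toy example: two-state chain with rates 1.0 (0 -> 1) and 2.0 (1 -> 0)
      Q = np.array([[-1.0, 1.0],
                    [ 2.0, -2.0]])
      print(expected_dwell_time(Q, T=1.0, a=0, b=0, c=0))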

  16. Proper Image Subtraction—Optimal Transient Detection, Photometry, and Hypothesis Testing

    NASA Astrophysics Data System (ADS)

    Zackay, Barak; Ofek, Eran O.; Gal-Yam, Avishay

    2016-10-01

    Transient detection and flux measurement via image subtraction stand at the base of time domain astronomy. Due to the varying seeing conditions, the image subtraction process is non-trivial, and existing solutions suffer from a variety of problems. Starting from basic statistical principles, we develop the optimal statistic for transient detection, flux measurement, and any image-difference hypothesis testing. We derive a closed-form statistic that: (1) is mathematically proven to be the optimal transient detection statistic in the limit of background-dominated noise, (2) is numerically stable, (3) for accurately registered, adequately sampled images, does not leave subtraction or deconvolution artifacts, (4) allows automatic transient detection to the theoretical sensitivity limit by providing credible detection significance, (5) has uncorrelated white noise, (6) is a sufficient statistic for any further statistical test on the difference image, and, in particular, allows us to distinguish particle hits and other image artifacts from real transients, (7) is symmetric to the exchange of the new and reference images, (8) is at least an order of magnitude faster to compute than some popular methods, and (9) is straightforward to implement. Furthermore, we present extensions of this method that make it resilient to registration errors, color-refraction errors, and any noise source that can be modeled. In addition, we show that the optimal way to prepare a reference image is the proper image coaddition presented in Zackay & Ofek. We demonstrate this method on simulated data and real observations from the PTF data release 2. We provide an implementation of this algorithm in MATLAB and Python.

  17. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
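
    The simulations of the paper are not reproduced here. A minimal sketch (the cue names, parameter values, and synthetic data are invented) shows the basic ingredient: fitting a two-component Gaussian mixture to bivariate auditory-visual cue tokens and reading off posterior category probabilities for a new token.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)

      # synthetic tokens of two categories, each described by an auditory cue
      # (VOT-like) and a visual cue (lip-aperture-like); values are invented.
      cat_a = rng.normal(loc=[10.0, 0.2], scale=[5.0, 0.05], size=(500, 2))
      cat_b = rng.normal(loc=[45.0, 0.6], scale=[8.0, 0.08], size=(500, 2))
      tokens = np.vstack([cat_a, cat_b])

      # unsupervised statistical learning: fit a two-component Gaussian mixture
      gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
      gmm.fit(tokens)

      print("component means:\n", gmm.means_)
      # posterior category probabilities for an ambiguous audiovisual token
      print("p(category | token):", gmm.predict_proba([[27.0, 0.4]])[0])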

  18. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    PubMed

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.

  19. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed Central

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564

  20. Statistical Analysis Experiment for Freshman Chemistry Lab.

    ERIC Educational Resources Information Center

    Salzsieder, John C.

    1995-01-01

    Describes a laboratory experiment dissolving zinc from galvanized nails in which data can be gathered very quickly for statistical analysis. The data have sufficient significant figures and the experiment yields a nice distribution of random errors. Freshman students can gain an appreciation of the relationships between random error, number of…

  1. Statistical correlations in an ideal gas of particles obeying fractional exclusion statistics.

    PubMed

    Pellegrino, F M D; Angilella, G G N; March, N H; Pucci, R

    2007-12-01

    After a brief discussion of the concepts of fractional exchange and fractional exclusion statistics, we report partly analytical and partly numerical results on thermodynamic properties of assemblies of particles obeying fractional exclusion statistics. The effect of dimensionality is one focal point, the ratio μ/k_B T of chemical potential to thermal energy being obtained numerically as a function of a scaled particle density. Pair correlation functions are also presented as a function of the statistical parameter, with Friedel oscillations developing close to the fermion limit for sufficiently large density.
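
    For readers who want a numerical handle on fractional exclusion statistics, the occupation number is commonly written in the Haldane-Wu form n = 1/(w + g) with w**g * (1 + w)**(1 - g) = exp((ε - μ)/k_B T); the sketch below (names and example values are assumptions) solves this equation with a root finder and recovers the Bose (g = 0) and Fermi (g = 1) limits.

      import numpy as np
      from scipy.optimize import brentq

      def occupation(eps, mu, T, g, kB=1.0):
          """Mean occupation for Haldane-Wu fractional exclusion statistics:
          solve w**g * (1 + w)**(1 - g) = exp((eps - mu)/(kB*T)), n = 1/(w + g).
          g = 0 recovers Bose-Einstein, g = 1 recovers Fermi-Dirac."""
          x = (eps - mu) / (kB * T)
          if g == 0:                                  # Bose limit (needs eps > mu)
              return 1.0 / (np.exp(x) - 1.0)
          f = lambda w: g * np.log(w) + (1.0 - g) * np.log1p(w) - x
          w = brentq(f, 1e-12, 1e12)
          return 1.0 / (w + g)

      # occupation at the chemical potential for semions (g = 1/2)
      print(occupation(eps=1.0, mu=1.0, T=1.0, g=0.5))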

  2. Efficient Scores, Variance Decompositions and Monte Carlo Swindles.

    DTIC Science & Technology

    1984-08-28

    ... Then a version of Pythagoras' theorem gives the variance decomposition (6.1): var_{P0}(T) = var_{P0}(S) + var_{P0}(T - S). One way to see this is to note ... complete sufficient statistics for (β, σ), and that the standardized residuals σ̂⁻¹(y - Xβ̂) are ancillary. Basu's sufficiency-ancillarity theorem ...

  3. Properties of different selection signature statistics and a new strategy for combining them.

    PubMed

    Ma, Y; Ding, X; Qanbari, S; Weigend, S; Zhang, Q; Simianer, H

    2015-11-01

    Identifying signatures of recent or ongoing selection is of high relevance in livestock population genomics. From a statistical perspective, determining a proper testing procedure and combining various test statistics is challenging. On the basis of extensive simulations in this study, we discuss the statistical properties of eight different established selection signature statistics. In the considered scenario, we show that a reasonable power to detect selection signatures is achieved with high marker density (>1 SNP/kb), as obtained from sequencing, while rather small sample sizes (~15 diploid individuals) appear to be sufficient. Most selection signature statistics, such as the composite likelihood ratio and cross-population extended haplotype homozygosity, have the highest power when fixation of the selected allele is reached, while the integrated haplotype score has the highest power when selection is ongoing. We suggest a novel strategy, called de-correlated composite of multiple signals (DCMS), to combine different statistics for detecting selection signatures while accounting for the correlation between the different selection signature statistics. When examined with simulated data, DCMS consistently has higher power than most of the single statistics and shows a reliable positional resolution. We illustrate the new statistic on the established selective sweep around the lactase gene in human HapMap data, providing further evidence of the reliability of this new statistic. We then apply it to scan for selection signatures in two chicken samples with diverse skin color. Our analysis suggests that a set of well-known genes such as BCO2, MC1R, ASIP and TYR were involved in the divergent selection for this trait.
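
    The published DCMS formula is not reproduced here. As a generic illustration of combining correlated statistics (all names and the toy data are invented, and the construction below is not the authors' method), per-SNP statistics are rank-standardized, decorrelated with the inverse square root of their correlation matrix, and summed.

      import numpy as np
      from scipy.stats import norm

      def decorrelated_composite(S):
          """Combine several per-SNP selection statistics into one score:
          rank-standardize each column to z-scores, decorrelate with the inverse
          square root of the correlation matrix, and sum. A generic construction,
          not the published DCMS formula."""
          n, k = S.shape
          ranks = S.argsort(axis=0).argsort(axis=0) + 1
          Z = norm.ppf(ranks / (n + 1.0))
          R = np.corrcoef(Z, rowvar=False)
          vals, vecs = np.linalg.eigh(R)
          R_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
          return (Z @ R_inv_sqrt).sum(axis=1)

      # toy usage: three correlated statistics for 1000 SNPs
      rng = np.random.default_rng(0)
      base = rng.normal(size=(1000, 1))
      S = base + 0.5 * rng.normal(size=(1000, 3))
      print(decorrelated_composite(S)[:5])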

  4. Corpus-based Statistical Screening for Phrase Identification

    PubMed Central

    Kim, Won; Wilbur, W. John

    2000-01-01

    Purpose: The authors study the extraction of useful phrases from a natural language database by statistical methods. The aim is to leverage human effort by providing preprocessed phrase lists with a high percentage of useful material. Method: The approach is to develop six different scoring methods that are based on different aspects of phrase occurrence. The emphasis here is not on lexical information or syntactic structure but rather on the statistical properties of word pairs and triples that can be obtained from a large database. Measurements: The Unified Medical Language System (UMLS) incorporates a large list of humanly acceptable phrases in the medical field as a part of its structure. The authors use this list of phrases as a gold standard for validating their methods. A good method is one that ranks the UMLS phrases high among all phrases studied. Measurements are 11-point average precision values and precision-recall curves based on the rankings. Result: The authors find that each of the six scoring methods proves effective in identifying UMLS-quality phrases in a large subset of MEDLINE. These methods are applicable both to word pairs and to word triples. All six methods are optimally combined to produce composite scoring methods that are more effective than any single method. The quality of the composite methods appears sufficient to support the automatic placement of hyperlinks in text at the site of highly ranked phrases. Conclusion: Statistical scoring methods provide a promising approach to the extraction of useful phrases from a natural language database for the purpose of indexing or providing hyperlinks in text. PMID:10984469
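
    The six scoring methods of the paper are not reproduced here. As an illustration of the type of corpus statistic involved (function names and the toy sentences are invented), the following scores adjacent word pairs by pointwise mutual information.

      import math
      from collections import Counter

      def pmi_scores(sentences, min_count=2):
          """Pointwise mutual information, PMI(a, b) = log[p(a, b)/(p(a)p(b))],
          for adjacent word pairs; higher scores suggest phrase-like behaviour."""
          unigrams, bigrams = Counter(), Counter()
          for sent in sentences:
              words = sent.lower().split()
              unigrams.update(words)
              bigrams.update(zip(words, words[1:]))
          n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())
          scores = {}
          for (a, b), c in bigrams.items():
              if c < min_count:
                  continue
              p_ab = c / n_bi
              p_a, p_b = unigrams[a] / n_uni, unigrams[b] / n_uni
              scores[(a, b)] = math.log(p_ab / (p_a * p_b))
          return scores

      docs = ["myocardial infarction was confirmed",
              "acute myocardial infarction in the patient",
              "the patient was confirmed stable"]
      print(sorted(pmi_scores(docs).items(), key=lambda kv: -kv[1]))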

  5. The transfer of analytical procedures.

    PubMed

    Ermer, J; Limberger, M; Lis, K; Wätzig, H

    2013-11-01

    Analytical method transfers are certainly among the most discussed topics in the GMP-regulated sector. However, they are surprisingly little regulated in detail. General information is provided by USP, WHO, and ISPE in particular. Most recently, the EU emphasized the importance of analytical transfer by including it in their draft of the revised GMP Guideline. In this article, an overview and comparison of these guidelines is provided. The key to success for method transfers is excellent communication between the sending and receiving units. To facilitate this communication, procedures, flow charts, and checklists for responsibilities, success factors, transfer categories, the transfer plan and report, strategies in case of failed transfers, and tables with acceptance limits are provided here, together with a comprehensive glossary. Potential pitfalls are described so that they can be avoided. In order to assure an efficient and sustainable transfer of analytical procedures, a practically relevant and scientifically sound evaluation with corresponding acceptance criteria is crucial. Various strategies and statistical tools, such as significance tests, absolute acceptance criteria, and equivalence tests, are thoroughly described and compared in detail, with examples. Significance tests should be avoided: the success criterion is not statistical significance, but rather analytical relevance. Depending on a risk assessment of the analytical procedure in question, statistical equivalence tests are recommended, because they include both a practically relevant acceptance limit and a direct control of the statistical risks. However, for lower-risk procedures, a simple comparison of the transfer performance parameters to absolute limits is also regarded as sufficient. Copyright © 2013 Elsevier B.V. All rights reserved.
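
    As a minimal sketch of the recommended equivalence-testing approach (the acceptance limit, data, and Welch-type degrees of freedom are assumptions, not a prescribed procedure), a two-one-sided-tests (TOST) comparison of a sending and a receiving laboratory might look like this:

      import numpy as np
      from scipy import stats

      def tost_equivalence(x_sending, x_receiving, limit):
          """Two one-sided t-tests (TOST): the labs are declared equivalent when
          the mean difference is significantly above -limit and below +limit."""
          x, y = np.asarray(x_sending, float), np.asarray(x_receiving, float)
          diff = x.mean() - y.mean()
          vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
          se = np.sqrt(vx + vy)
          df = (vx + vy) ** 2 / (vx ** 2 / (len(x) - 1) + vy ** 2 / (len(y) - 1))
          p_lower = 1.0 - stats.t.cdf((diff + limit) / se, df)  # H0: diff <= -limit
          p_upper = stats.t.cdf((diff - limit) / se, df)        # H0: diff >= +limit
          return diff, max(p_lower, p_upper)

      # hypothetical assay results (% label claim) and a 2% acceptance limit
      sending = [99.1, 100.2, 99.8, 100.5, 99.6, 100.0]
      receiving = [99.4, 99.9, 100.3, 99.7, 100.1, 99.5]
      print(tost_equivalence(sending, receiving, limit=2.0))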

  6. Decay pattern of the Pygmy Dipole Resonance in 130Te

    NASA Astrophysics Data System (ADS)

    Isaak, J.; Beller, J.; Fiori, E.; Krtička, M.; Löher, B.; Pietralla, N.; Romig, C.; Rusev, G.; Savran, D.; Scheck, M.; Silva, J.; Sonnabend, K.; Tonchev, A.; Tornow, W.; Weller, H.; Zweidinger, M.

    2014-03-01

    The electric dipole strength distribution in 130Te has been investigated using the method of Nuclear Resonance Fluorescence. The experiments were performed at the Darmstadt High Intensity Photon Setup, using bremsstrahlung as the photon source, and at the High Intensity γ-Ray Source, where quasi-monochromatic and polarized photon beams are provided. Average decay properties of 130Te below the neutron separation energy are determined. Comparing the experimental data to the predictions of the statistical model indicates that nuclear structure effects play an important role even at sufficiently high excitation energies. Preliminary results will be presented.

  7. Role of sufficient phosphorus in biodiesel production from diatom Phaeodactylum tricornutum.

    PubMed

    Yu, Shi-Jin; Shen, Xiao-Fei; Ge, Huo-Qing; Zheng, Hang; Chu, Fei-Fei; Hu, Hao; Zeng, Raymond J

    2016-08-01

    In order to study the role of sufficient phosphorus (P) in biodiesel production by microalgae, Phaeodactylum tricornutum was cultivated in six different media treatments combining nitrogen (N) sufficiency/deprivation and phosphorus sufficiency/limitation/deprivation. Profiles of N and P, biomass, and fatty acid (FA) content and composition were measured during a 7-day cultivation period. The results showed that the FA content of the microalgal biomass was promoted by P deprivation. However, statistical analysis showed that FA productivity had no significant difference (p = 0.63, >0.05) between the treatments of N deprivation with P sufficiency (N-P) and N deprivation with P deprivation (N-P-), indicating that P sufficiency in N-deprivation medium has little effect on increasing biodiesel productivity from P. tricornutum. It was also found that P absorption in the N-P medium was 1.41 times higher than that in the N sufficiency and P sufficiency (NP) medium. N deprivation with P limitation (N-P-l) was the optimal treatment for producing biodiesel from P. tricornutum because of both the highest FA productivity and good biodiesel quality.

  8. Direct U-Pb dating of Cretaceous and Paleocene dinosaur bones, San Juan Basin, New Mexico: COMMENT

    USGS Publications Warehouse

    Koenig, Alan E.; Lucas, Spencer G.; Neymark, Leonid A.; Heckert, Andrew B.; Sullivan, Robert M.; Jasinski, Steven E.; Fowler, Denver W.

    2012-01-01

    Based on U-Pb dating of two dinosaur bones from the San Juan Basin of New Mexico (United States), Fassett et al. (2011) claim to provide the first successful direct dating of fossil bones and to establish the presence of Paleocene dinosaurs. Fassett et al. ignore previously published work that directly questions their stratigraphic interpretations (Lucas et al., 2009), and fail to provide sufficient descriptions of instrumental, geochronological, and statistical treatments of the data to allow evaluation of the potentially complex diagenetic and recrystallization history of bone. These shortcomings lead us to question the validity of the U-Pb dates published by Fassett et al. and their conclusions regarding the existence of Paleocene dinosaurs.

  9. Surveillance of sexually transmitted infections in England and Wales.

    PubMed

    Hughes, G; Paine, T; Thomas, D

    2001-05-01

    Surveillance of sexually transmitted infections (STIs) in England and Wales has, in the past, relied principally on aggregated statistical data submitted by all genitourinary medicine clinics to the Communicable Disease Surveillance Centre, supplemented by various laboratory reporting systems. Although these systems provide comparatively robust surveillance data, they do not provide sufficient information on risk factors to target STI control and prevention programmes appropriately. Over recent years, substantial rises in STIs, the emergence of numerous outbreaks of STIs, and changes in gonococcal resistance patterns have necessitated the introduction of more sophisticated surveillance mechanisms. This article describes current STI surveillance systems in England and Wales, including new systems that have recently been introduced or are currently being developed to meet the need for enhanced STI surveillance data.

  10. Recruitment of Older Adults: Success May Be in the Details

    PubMed Central

    McHenry, Judith C.; Insel, Kathleen C.; Einstein, Gilles O.; Vidrine, Amy N.; Koerner, Kari M.; Morrow, Daniel G.

    2015-01-01

    Purpose: Describe recruitment strategies used in a randomized clinical trial of a behavioral prospective memory intervention to improve medication adherence for older adults taking antihypertensive medication. Results: Recruitment strategies represent 4 themes: accessing an appropriate population, communication and trust-building, providing comfort and security, and expressing gratitude. Recruitment activities resulted in 276 participants with a mean age of 76.32 years, and study enrollment included 207 women, 69 men, and 54 persons representing ethnic minorities. Recruitment success was linked to cultivating relationships with community-based organizations, face-to-face contact with potential study participants, and providing service (e.g., blood pressure checks) as an access point to eligible participants. Seventy-two percent of potential participants who completed a follow-up call and met eligibility criteria were enrolled in the study. The attrition rate was 14.34%. Implications: The projected increase in the number of older adults intensifies the need to study interventions that improve health outcomes. The challenge is to recruit sufficient numbers of participants who are also representative of older adults to test these interventions. Failing to recruit a sufficient and representative sample can compromise statistical power and the generalizability of study findings. PMID:22899424

  11. Stationary conditions for stochastic differential equations

    NASA Technical Reports Server (NTRS)

    Adomian, G.; Walker, W. W.

    1972-01-01

This is a preliminary study of possible necessary and sufficient conditions to ensure stationarity in the solution process for a stochastic differential equation. It indirectly sheds some light on ergodicity properties and shows that the spectral density is generally inadequate as a statistical measure of the solution. Further work is proceeding on a more general theory which gives necessary and sufficient conditions in a form useful for applications.

  12. Feature-Based Statistical Analysis of Combustion Simulation Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, J; Krishnamoorthy, V; Liu, S

    2011-11-18

We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion science; however, it is applicable to many other science domains.
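    The workflow above is described only at a high level. As a rough illustration of the kind of per-feature meta-data it pre-computes, the sketch below substitutes fixed-threshold connected components (via scipy.ndimage) for the paper's augmented merge trees and derives per-feature statistics plus an empirical CDF of feature sizes; the threshold, synthetic field, and chosen statistics are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def per_feature_stats(field, threshold):
    """Label connected regions above a threshold and summarize each one.

    A simplified stand-in for the paper's augmented merge trees: here the
    features are fixed-threshold connected components, and the 'meta-data'
    is one row of summary statistics per feature.
    """
    mask = field > threshold
    labels, n_features = ndimage.label(mask)
    stats = []
    for idx in range(1, n_features + 1):
        values = field[labels == idx]
        stats.append({
            "size": values.size,            # number of cells in the feature
            "mean": float(values.mean()),   # first moment of the scalar field
            "max": float(values.max()),
        })
    return stats

# Tiny synthetic example standing in for a DNS scalar field (e.g. temperature).
rng = np.random.default_rng(0)
field = ndimage.gaussian_filter(rng.normal(size=(64, 64, 64)), sigma=3)
stats = per_feature_stats(field, threshold=0.05)
sizes = np.sort([s["size"] for s in stats])
cdf = np.arange(1, sizes.size + 1) / sizes.size   # empirical CDF of feature sizes
print(len(stats), "features; median size =", np.median(sizes))
```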

  13. Rock Statistics at the Mars Pathfinder Landing Site, Roughness and Roving on Mars

    NASA Technical Reports Server (NTRS)

    Haldemann, A. F. C.; Bridges, N. T.; Anderson, R. C.; Golombek, M. P.

    1999-01-01

    Several rock counts have been carried out at the Mars Pathfinder landing site producing consistent statistics of rock coverage and size-frequency distributions. These rock statistics provide a primary element of "ground truth" for anchoring remote sensing information used to pick the Pathfinder, and future, landing sites. The observed rock population statistics should also be consistent with the emplacement and alteration processes postulated to govern the landing site landscape. The rock population databases can however be used in ways that go beyond the calculation of cumulative number and cumulative area distributions versus rock diameter and height. Since the spatial parameters measured to characterize each rock are determined with stereo image pairs, the rock database serves as a subset of the full landing site digital terrain model (DTM). Insofar as a rock count can be carried out in a speedier, albeit coarser, manner than the full DTM analysis, rock counting offers several operational and scientific products in the near term. Quantitative rock mapping adds further information to the geomorphic study of the landing site, and can also be used for rover traverse planning. Statistical analysis of the surface roughness using the rock count proxy DTM is sufficiently accurate when compared to the full DTM to compare with radar remote sensing roughness measures, and with rover traverse profiles.

  14. Failure statistics for commercial lithium ion batteries: A study of 24 pouch cells

    NASA Astrophysics Data System (ADS)

    Harris, Stephen J.; Harris, David J.; Li, Chen

    2017-02-01

There are relatively few publications that assess capacity decline in enough commercial cells to quantify cell-to-cell variation, but those that do show a surprisingly wide variability. Capacity curves cross each other often, a challenge for efforts to measure the state of health and predict the remaining useful life (RUL) of individual cells. We analyze capacity fade statistics for 24 commercial pouch cells, providing an estimate for the time to 5% failure. Our data indicate that RUL predictions based on remaining capacity or internal resistance are accurate only once the cells have already sorted themselves into "better" and "worse" ones. Analysis of our failure data, using maximum likelihood techniques, provides uniformly good fits for a variety of definitions of failure with normal and with 2- and 3-parameter Weibull probability density functions, but we argue against using a 3-parameter Weibull function for our data. The pdf fitting parameters appear to converge after about 15 failures, although business objectives should ultimately determine whether data from a given number of batteries provides sufficient confidence to end lifecycle testing. Increased efforts to make batteries with more consistent lifetimes should lead to improvements in battery cost and safety.
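    As a rough illustration of the kind of maximum-likelihood fitting and time-to-5%-failure estimate described above (not the authors' data or code), the sketch below fits 2-parameter Weibull and normal distributions to hypothetical cycle-to-failure values with scipy and compares their log-likelihoods.

```python
import numpy as np
from scipy import stats

# Hypothetical cycle-to-failure data for 24 cells (the paper's data are not
# reproduced here); failure defined, e.g., as capacity dropping below 80%.
rng = np.random.default_rng(1)
cycles_to_failure = rng.weibull(4.0, size=24) * 900

# Maximum-likelihood fits of 2-parameter Weibull and normal distributions.
shape, loc, scale = stats.weibull_min.fit(cycles_to_failure, floc=0)
mu, sigma = stats.norm.fit(cycles_to_failure)

# Compare the fits via log-likelihood, and estimate the time to 5% failures
# (the 5th percentile of the fitted Weibull).
ll_weibull = np.sum(stats.weibull_min.logpdf(cycles_to_failure, shape, loc, scale))
ll_normal = np.sum(stats.norm.logpdf(cycles_to_failure, mu, sigma))
t5 = stats.weibull_min.ppf(0.05, shape, loc, scale)
print(f"Weibull shape={shape:.2f}, scale={scale:.0f}, logL={ll_weibull:.1f}")
print(f"Normal  mu={mu:.0f}, sigma={sigma:.0f}, logL={ll_normal:.1f}")
print(f"Estimated time to 5% failures: {t5:.0f} cycles")
```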

  15. Digital Holographic Microscopy, a Method for Detection of Microorganisms in Plume Samples from Enceladus and Other Icy Worlds

    PubMed Central

    Bedrossian, Manuel; Lindensmith, Chris

    2017-01-01

Detection of extant microbial life on Earth and elsewhere in the Solar System requires the ability to identify and enumerate micrometer-scale, essentially featureless cells. On Earth, bacteria are usually enumerated by culture plating or epifluorescence microscopy. Culture plates require long incubation times and can only count culturable strains, and epifluorescence microscopy requires extensive staining and concentration of the sample and instrumentation that is not readily miniaturized for space. Digital holographic microscopy (DHM) represents an alternative technique with no moving parts and higher throughput than traditional microscopy, making it potentially useful in space for detection of extant microorganisms provided that sufficient numbers of cells can be collected. Because sample collection is expected to be the limiting factor for space missions, especially to outer planets, it is important to quantify the limits of detection of any proposed technique for extant life detection. Here we use both laboratory and field samples to measure the limits of detection of an off-axis digital holographic microscope (DHM). A statistical model is used to estimate any instrument's probability of detection at various bacterial concentrations based on the optical performance characteristics of the instrument, as well as estimate the confidence interval of detection. This statistical model agrees well with the limit of detection of 10³ cells/mL that was found experimentally with laboratory samples. In environmental samples, active cells were immediately evident at concentrations of 10⁴ cells/mL. Published estimates of cell densities for Enceladus plumes yield up to 10⁴ cells/mL, which are well within the off-axis DHM's limits of detection to confidence intervals greater than or equal to 95%, assuming sufficient sample volumes can be collected. The quantitative phase imaging provided by DHM allowed minerals to be distinguished from cells. Off-axis DHM's ability for rapid low-level bacterial detection and counting shows its viability as a technique for detection of extant microbial life provided that the cells can be captured intact and delivered to the sample chamber in a sufficient volume of liquid for imaging. Key Words: In situ life detection—Extant microorganisms—Holographic microscopy—Ocean Worlds—Enceladus—Imaging. Astrobiology 17, 913–925. PMID:28708412
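    The abstract does not spell out the statistical model. A common minimal assumption for this kind of limit-of-detection argument is Poisson counting, in which the probability of seeing at least one cell is 1 − exp(−cV) for concentration c and imaged volume V. The sketch below illustrates that assumed model only; the 0.3 µL per hologram and 10 fields of view are hypothetical numbers, not instrument specifications from the paper.

```python
import math

def detection_probability(concentration_per_ml, imaged_volume_ul, n_fields=1):
    """P(at least one cell appears) under an assumed Poisson counting model.

    concentration_per_ml : cells per mL
    imaged_volume_ul     : volume imaged per field of view, in microliters
    n_fields             : number of independent fields of view recorded
    """
    cells_per_ul = concentration_per_ml / 1000.0
    expected_cells = cells_per_ul * imaged_volume_ul * n_fields
    return 1.0 - math.exp(-expected_cells)

# With an assumed 0.3 uL imaged per hologram and 10 holograms per sample:
for c in (1e2, 1e3, 1e4):
    print(f"{c:>8.0f} cells/mL -> P(detect) = {detection_probability(c, 0.3, 10):.3f}")
```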

  16. Statistical variances of diffusional properties from ab initio molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    He, Xingfeng; Zhu, Yizhou; Epstein, Alexander; Mo, Yifei

    2018-12-01

Ab initio molecular dynamics (AIMD) simulation is widely employed in studying diffusion mechanisms and in quantifying diffusional properties of materials. However, AIMD simulations are often limited to a few hundred atoms and a short, sub-nanosecond physical timescale, which leads to models that include only a limited number of diffusion events. As a result, the diffusional properties obtained from AIMD simulations are often plagued by poor statistics. In this paper, we re-examine the process to estimate diffusivity and ionic conductivity from the AIMD simulations and establish the procedure to minimize the fitting errors. In addition, we propose methods for quantifying the statistical variance of the diffusivity and ionic conductivity from the number of diffusion events observed during the AIMD simulation. Since an adequate number of diffusion events must be sampled, AIMD simulations should be sufficiently long and can only be performed on materials with reasonably fast diffusion. We chart the ranges of materials and physical conditions that are accessible to AIMD simulations in studying diffusional properties. Our work provides the foundation for quantifying the statistical confidence levels of diffusion results from AIMD simulations and for correctly employing this powerful technique.
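    For context, diffusivity is typically extracted from an AIMD trajectory by fitting the mean squared displacement, MSD ≈ 2dDt, over a window that avoids the ballistic start and the poorly averaged tail. The sketch below shows that basic fit only (with an illustrative unit conversion and a synthetic MSD curve); it does not reproduce the paper's variance-quantification procedure.

```python
import numpy as np

def diffusivity_from_msd(times_ps, msd_ang2, dim=3, fit_range=(0.2, 0.7)):
    """Estimate the tracer diffusivity from a mean-squared-displacement curve.

    Fits MSD = 2*d*D*t + c over a central window of the trajectory (dropping
    the ballistic start and the poorly averaged tail), so D = slope / (2*d).
    Units: ps and Angstrom^2 in, cm^2/s out.
    """
    n = len(times_ps)
    lo, hi = int(fit_range[0] * n), int(fit_range[1] * n)
    slope, _ = np.polyfit(times_ps[lo:hi], msd_ang2[lo:hi], 1)   # Ang^2 / ps
    d_ang2_per_ps = slope / (2 * dim)
    return d_ang2_per_ps * 1e-16 / 1e-12                          # -> cm^2/s

# Synthetic MSD for illustration: a diffusive slope (D = 1e-5 cm^2/s) plus noise.
t = np.linspace(0, 100, 200)                                      # ps
msd = 0.6 * t + np.random.default_rng(2).normal(0, 5, t.size)     # Ang^2
print(f"D ~ {diffusivity_from_msd(t, msd):.2e} cm^2/s")
```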

  17. Quantum theory of multiscale coarse-graining.

    PubMed

    Han, Yining; Jin, Jaehyeok; Wagner, Jacob W; Voth, Gregory A

    2018-03-14

    Coarse-grained (CG) models serve as a powerful tool to simulate molecular systems at much longer temporal and spatial scales. Previously, CG models and methods have been built upon classical statistical mechanics. The present paper develops a theory and numerical methodology for coarse-graining in quantum statistical mechanics, by generalizing the multiscale coarse-graining (MS-CG) method to quantum Boltzmann statistics. A rigorous derivation of the sufficient thermodynamic consistency condition is first presented via imaginary time Feynman path integrals. It identifies the optimal choice of CG action functional and effective quantum CG (qCG) force field to generate a quantum MS-CG (qMS-CG) description of the equilibrium system that is consistent with the quantum fine-grained model projected onto the CG variables. A variational principle then provides a class of algorithms for optimally approximating the qMS-CG force fields. Specifically, a variational method based on force matching, which was also adopted in the classical MS-CG theory, is generalized to quantum Boltzmann statistics. The qMS-CG numerical algorithms and practical issues in implementing this variational minimization procedure are also discussed. Then, two numerical examples are presented to demonstrate the method. Finally, as an alternative strategy, a quasi-classical approximation for the thermal density matrix expressed in the CG variables is derived. This approach provides an interesting physical picture for coarse-graining in quantum Boltzmann statistical mechanics in which the consistency with the quantum particle delocalization is obviously manifest, and it opens up an avenue for using path integral centroid-based effective classical force fields in a coarse-graining methodology.

  18. Risk-based Methodology for Validation of Pharmaceutical Batch Processes.

    PubMed

    Wiles, Frederick

    2013-01-01

    In January 2011, the U.S. Food and Drug Administration published new process validation guidance for pharmaceutical processes. The new guidance debunks the long-held industry notion that three consecutive validation batches or runs are all that are required to demonstrate that a process is operating in a validated state. Instead, the new guidance now emphasizes that the level of monitoring and testing performed during process performance qualification (PPQ) studies must be sufficient to demonstrate statistical confidence both within and between batches. In some cases, three qualification runs may not be enough. Nearly two years after the guidance was first published, little has been written defining a statistical methodology for determining the number of samples and qualification runs required to satisfy Stage 2 requirements of the new guidance. This article proposes using a combination of risk assessment, control charting, and capability statistics to define the monitoring and testing scheme required to show that a pharmaceutical batch process is operating in a validated state. In this methodology, an assessment of process risk is performed through application of a process failure mode, effects, and criticality analysis (PFMECA). The output of PFMECA is used to select appropriate levels of statistical confidence and coverage which, in turn, are used in capability calculations to determine when significant Stage 2 (PPQ) milestones have been met. The achievement of Stage 2 milestones signals the release of batches for commercial distribution and the reduction of monitoring and testing to commercial production levels. Individuals, moving range, and range/sigma charts are used in conjunction with capability statistics to demonstrate that the commercial process is operating in a state of statistical control. The new process validation guidance published by the U.S. Food and Drug Administration in January of 2011 indicates that the number of process validation batches or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to augment this shortcoming. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology for determining when a statistically valid number of validation runs has been acquired based on risk assessment and calculation of process capability.
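    As a hedged illustration of the kind of capability and coverage calculations such a methodology relies on (not the article's specific procedure), the sketch below computes a process performance index Ppk and a two-sided normal tolerance interval using Howe's approximation for the tolerance factor; the assay data and specification limits are invented.

```python
import numpy as np
from scipy import stats

def ppk(data, lsl, usl):
    """Process performance index from a Stage-2 (PPQ) sample."""
    m, s = np.mean(data), np.std(data, ddof=1)
    return min(usl - m, m - lsl) / (3 * s)

def howe_k2(n, coverage=0.99, confidence=0.95):
    """Howe's approximation to the two-sided normal tolerance factor."""
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, n - 1)
    return z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)

# Hypothetical assay results (% of label claim) with assumed specs of 95-105.
rng = np.random.default_rng(3)
assay = rng.normal(100.2, 1.1, size=30)
k = howe_k2(len(assay), coverage=0.99, confidence=0.95)
lo = np.mean(assay) - k * np.std(assay, ddof=1)
hi = np.mean(assay) + k * np.std(assay, ddof=1)
print(f"Ppk = {ppk(assay, 95, 105):.2f}")
print(f"99%/95% tolerance interval: ({lo:.1f}, {hi:.1f}) vs specs (95, 105)")
```

    In this kind of scheme, the coverage and confidence levels (and hence the required k and sample size) would be selected from the PFMECA risk assessment rather than fixed at the values assumed here.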

  19. Using the bootstrap to establish statistical significance for relative validity comparisons among patient-reported outcome measures

    PubMed Central

    2013-01-01

    Background Relative validity (RV), a ratio of ANOVA F-statistics, is often used to compare the validity of patient-reported outcome (PRO) measures. We used the bootstrap to establish the statistical significance of the RV and to identify key factors affecting its significance. Methods Based on responses from 453 chronic kidney disease (CKD) patients to 16 CKD-specific and generic PRO measures, RVs were computed to determine how well each measure discriminated across clinically-defined groups of patients compared to the most discriminating (reference) measure. Statistical significance of RV was quantified by the 95% bootstrap confidence interval. Simulations examined the effects of sample size, denominator F-statistic, correlation between comparator and reference measures, and number of bootstrap replicates. Results The statistical significance of the RV increased as the magnitude of denominator F-statistic increased or as the correlation between comparator and reference measures increased. A denominator F-statistic of 57 conveyed sufficient power (80%) to detect an RV of 0.6 for two measures correlated at r = 0.7. Larger denominator F-statistics or higher correlations provided greater power. Larger sample size with a fixed denominator F-statistic or more bootstrap replicates (beyond 500) had minimal impact. Conclusions The bootstrap is valuable for establishing the statistical significance of RV estimates. A reasonably large denominator F-statistic (F > 57) is required for adequate power when using the RV to compare the validity of measures with small or moderate correlations (r < 0.7). Substantially greater power can be achieved when comparing measures of a very high correlation (r > 0.9). PMID:23721463
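    A minimal sketch of the core computation, under assumptions: one-way ANOVA F-statistics (scipy) for a comparator and a reference measure across clinically defined groups, their ratio as the RV, and a percentile bootstrap confidence interval from resampling patients with replacement. The synthetic data and group structure are illustrative only.

```python
import numpy as np
from scipy.stats import f_oneway

def relative_validity(measure, reference, groups):
    """RV = F(comparator) / F(reference) across clinically defined groups."""
    f_comp = f_oneway(*[measure[groups == g] for g in np.unique(groups)]).statistic
    f_ref = f_oneway(*[reference[groups == g] for g in np.unique(groups)]).statistic
    return f_comp / f_ref

def bootstrap_rv_ci(measure, reference, groups, n_boot=500, seed=0):
    """Percentile bootstrap CI for the RV, resampling patients with replacement."""
    rng = np.random.default_rng(seed)
    n = len(groups)
    rvs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        rvs.append(relative_validity(measure[idx], reference[idx], groups[idx]))
    return np.percentile(rvs, [2.5, 97.5])

# Synthetic illustration: 3 severity groups, correlated comparator/reference scores.
rng = np.random.default_rng(4)
groups = np.repeat([0, 1, 2], 150)
latent = groups * 0.8 + rng.normal(size=groups.size)
reference = latent + rng.normal(0, 0.5, groups.size)
measure = 0.7 * latent + rng.normal(0, 0.9, groups.size)
print("RV =", round(relative_validity(measure, reference, groups), 2),
      "95% CI:", np.round(bootstrap_rv_ci(measure, reference, groups), 2))
```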

  20. Factor structure and dimensionality of the two depression scales in STAR*D using level 1 datasets.

    PubMed

    Bech, P; Fava, M; Trivedi, M H; Wisniewski, S R; Rush, A J

    2011-08-01

The factor structure and dimensionality of the HAM-D(17) and the IDS-C(30) are as yet uncertain, because psychometric analyses of these scales have been performed without a clear separation between factor structure profile and dimensionality (total scores being a sufficient statistic). The first treatment step (Level 1) in the STAR*D study provided a dataset of 4041 outpatients with DSM-IV nonpsychotic major depression. The HAM-D(17) and IDS-C(30) were evaluated by principal component analysis (PCA) without rotation. Mokken analysis tested the unidimensionality of the IDS-C(6), which corresponds to the unidimensional HAM-D(6). For both the HAM-D(17) and IDS-C(30), PCA identified a bi-directional factor contrasting the depressive symptoms versus the neurovegetative symptoms. The HAM-D(6) and the corresponding IDS-C(6) symptoms all emerged in the depression factor. Both the HAM-D(6) and IDS-C(6) were found to be unidimensional scales, i.e., their total scores are each a sufficient statistic for the measurement of depressive states. STAR*D used only one medication in Level 1. The unidimensional HAM-D(6) and IDS-C(6) should be used when evaluating the pure clinical effect of antidepressive treatment, whereas the multidimensional HAM-D(17) and IDS-C(30) should be considered when selecting antidepressant treatment. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Oxidative stress tolerance in intertidal red seaweed Hypnea musciformis (Wulfen) in relation to environmental components.

    PubMed

    Maharana, Dusmant; Das, Priya Brata; Verlecar, Xivanand N; Pise, Navnath M; Gauns, Manguesh

    2015-12-01

Oxidative stress parameters in relation to temperature and other factors have been analysed in Hypnea musciformis, the red seaweed from Anjuna beach, Goa, with an aim to understand its susceptibility to the changing seasons. The results indicate that elevated temperature, sunshine and desiccation during peak summer in May enhanced lipid peroxide and hydrogen peroxide levels and the activity of antioxidants such as catalase, glutathione and ascorbic acid. Statistical tests using multivariate analysis of variance and correlation analysis showed that oxidative stress and antioxidants maintain significant relations with temperature, salinity, sunshine and pH at an individual or interactive level. Dissolved nitrates, phosphates and biological oxygen demand in ambient waters, and trace metals in the seaweed, remained at sufficiently low values to give no indication of contaminant-induced oxidative stress responses. The present field studies suggest that the elevated antioxidant content of H. musciformis offers sufficient protection against harsh environmental stresses to sustain its colonization of the rocky intertidal zone.

  2. Implications of clinical trial design on sample size requirements.

    PubMed

    Leon, Andrew C

    2008-07-01

    The primary goal in designing a randomized controlled clinical trial (RCT) is to minimize bias in the estimate of treatment effect. Randomized group assignment, double-blinded assessments, and control or comparison groups reduce the risk of bias. The design must also provide sufficient statistical power to detect a clinically meaningful treatment effect and maintain a nominal level of type I error. An attempt to integrate neurocognitive science into an RCT poses additional challenges. Two particularly relevant aspects of such a design often receive insufficient attention in an RCT. Multiple outcomes inflate type I error, and an unreliable assessment process introduces bias and reduces statistical power. Here we describe how both unreliability and multiple outcomes can increase the study costs and duration and reduce the feasibility of the study. The objective of this article is to consider strategies that overcome the problems of unreliability and multiplicity.
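    To make the two design issues concrete, the sketch below (a rough illustration, not the article's method) shows how outcome unreliability attenuates the detectable standardized effect size by roughly sqrt(reliability) and how a Bonferroni correction for multiple primary outcomes shrinks the per-test alpha, both of which reduce power at a fixed sample size; it assumes statsmodels is available.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()
d_true = 0.5        # clinically meaningful standardized treatment effect
n_per_arm = 64      # roughly 80% power for d = 0.5 at two-sided alpha = 0.05

# Unreliable outcome assessment attenuates the observable effect size roughly
# by sqrt(reliability); multiple primary outcomes force a smaller per-test alpha.
for reliability in (1.0, 0.8, 0.6):
    for n_outcomes in (1, 3):
        alpha = 0.05 / n_outcomes                 # Bonferroni adjustment
        d_obs = d_true * np.sqrt(reliability)     # attenuated effect size
        power = power_calc.power(effect_size=d_obs, nobs1=n_per_arm,
                                 alpha=alpha, ratio=1.0)
        print(f"reliability={reliability:.1f}, outcomes={n_outcomes}: "
              f"power={power:.2f}")
```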

  3. Identifying natural flow regimes using fish communities

    NASA Astrophysics Data System (ADS)

    Chang, Fi-John; Tsai, Wen-Ping; Wu, Tzu-Ching; Chen, Hung-kwai; Herricks, Edwin E.

    2011-10-01

Modern water resources management has adopted natural flow regimes as reasonable targets for river restoration and conservation. The characterization of a natural flow regime begins with the development of hydrologic statistics from flow records. However, little guidance exists for defining the period of record needed for regime determination. In Taiwan, the Taiwan Eco-hydrological Indicator System (TEIS), a group of hydrologic statistics selected for fisheries relevance, is being used to evaluate ecological flows. The TEIS consists of a group of hydrologic statistics selected to characterize the relationships between flow and the life history of indigenous species. Using the TEIS and biosurvey data for Taiwan, this paper identifies the length of hydrologic record sufficient for natural flow regime characterization. To define the ecological hydrology of fish communities, this study connected hydrologic statistics to fish communities by using methods to define antecedent conditions that influence existing community composition. A moving average method was applied to TEIS statistics to reflect the effects of antecedent flow condition and a point-biserial correlation method was used to relate fisheries collections with TEIS statistics. The resulting fish species-TEIS (FISH-TEIS) hydrologic statistics matrix takes full advantage of historical flows and fisheries data. The analysis indicates that, in the watersheds analyzed, averaging TEIS statistics for the present year and 3 years prior to the sampling date, termed MA(4), is sufficient to develop a natural flow regime. This result suggests that flow regimes based on hydrologic statistics for the period of record can be replaced by regimes developed for sampled fish communities.
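    As a simple illustration of the two ingredients described above, the sketch below computes an MA(4)-style antecedent average of one hypothetical hydrologic statistic and its point-biserial correlation with presence/absence records; the data, window alignment, and variable names are assumptions, not the TEIS definitions.

```python
import numpy as np
import pandas as pd
from scipy.stats import pointbiserialr

# Hypothetical yearly values of one TEIS-like hydrologic statistic (e.g. a
# high-flow indicator) and presence/absence of a fish species in yearly surveys.
rng = np.random.default_rng(5)
years = np.arange(1980, 2010)
flow_stat = rng.normal(100, 20, years.size)
presence = (flow_stat + rng.normal(0, 15, years.size) > 95).astype(int)

# MA(4): average of the survey year and the three years prior to it.
ma4 = pd.Series(flow_stat, index=years).rolling(window=4, min_periods=4).mean()

# Point-biserial correlation between antecedent flow condition and occurrence,
# restricted to years where the 4-year average is defined.
valid = ~ma4.isna()
r, p = pointbiserialr(presence[valid.values], ma4[valid].values)
print(f"point-biserial r = {r:.2f} (p = {p:.3f})")
```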

  4. 49 CFR 40.263 - What happens when an employee is unable to provide a sufficient amount of saliva for an alcohol...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... sufficient amount of saliva for an alcohol screening test? (a) As the STT, you must take the following steps if an employee is unable to provide sufficient saliva to complete a test on a saliva screening device (e.g., the employee does not provide sufficient saliva to activate the device). (1) You must conduct...

  5. High Agreement and High Prevalence: The Paradox of Cohen's Kappa.

    PubMed

    Zec, Slavica; Soriani, Nicola; Comoretto, Rosanna; Baldi, Ileana

    2017-01-01

Cohen's Kappa is the most widely used agreement statistic in the literature. However, under certain conditions, it is affected by a paradox which returns biased estimates of the statistic itself. The aim of the study is to provide sufficient information to allow the reader to make an informed choice of the correct agreement measure, by underlining some optimal properties of Gwet's AC1 in comparison to Cohen's Kappa, using a real data example. During the process of literature review, we asked a panel of three evaluators to judge the quality of 57 randomized controlled trials, assigning a score to each trial using the Jadad scale. The quality was evaluated according to the following dimensions: adopted design, randomization unit, type of primary endpoint. With respect to each of these features, the agreement among the three evaluators was calculated using Cohen's Kappa statistic and Gwet's AC1 statistic and, finally, the values were compared with the observed agreement. The values of the Cohen's Kappa statistic would lead one to believe that the agreement levels for the variables Unit, Design and Primary Endpoints are totally unsatisfactory. The AC1 statistic, on the contrary, shows plausible values which are in line with the respective values of the observed concordance. We conclude that it would always be appropriate to adopt the AC1 statistic, thus bypassing any risk of incurring the paradox and drawing wrong conclusions about the results of agreement analysis.
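    A minimal sketch of the two agreement statistics for a two-rater contingency table, using the standard formulas (Cohen's chance agreement from the product of the marginals; Gwet's AC1 chance agreement from the average marginals). The skewed example table is invented to reproduce the paradox, not taken from the study.

```python
import numpy as np

def cohen_kappa(table):
    """Cohen's Kappa for a two-rater contingency table (rows: rater 1)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    po = np.trace(table) / n
    pe = np.sum(table.sum(axis=1) * table.sum(axis=0)) / n**2
    return (po - pe) / (1 - pe)

def gwet_ac1(table):
    """Gwet's AC1 for two raters (chance agreement from average marginals)."""
    table = np.asarray(table, dtype=float)
    n, q = table.sum(), table.shape[0]
    po = np.trace(table) / n
    pi = (table.sum(axis=1) + table.sum(axis=0)) / (2 * n)   # average marginals
    pe = np.sum(pi * (1 - pi)) / (q - 1)
    return (po - pe) / (1 - pe)

# High observed agreement with highly skewed marginals: the paradoxical setting.
table = np.array([[80, 5],
                  [ 5, 2]])
print("observed agreement:", np.trace(table) / table.sum())
print("Cohen's Kappa:", round(cohen_kappa(table), 3))
print("Gwet's AC1:   ", round(gwet_ac1(table), 3))
```

    With this table the observed agreement is about 0.89, yet Kappa drops to roughly 0.23 while AC1 stays near 0.87, which is the mismatch the abstract describes.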

  6. Which Variables Associated with Data-Driven Instruction Are Believed to Best Predict Urban Student Achievement?

    ERIC Educational Resources Information Center

    Greer, Wil

    2013-01-01

This study identified the variables associated with data-driven instruction (DDI) that are perceived to best predict student achievement. Of the DDI variables discussed in the literature, 51 had a sufficient research base to warrant statistical analysis. Of them, 26 were statistically significant. Multiple regression and an…

  7. Design of experiments enhanced statistical process control for wind tunnel check standard testing

    NASA Astrophysics Data System (ADS)

    Phillips, Ben D.

The current wind tunnel check standard testing program at NASA Langley Research Center is focused on increasing data quality, uncertainty quantification and overall control and improvement of wind tunnel measurement processes. The statistical process control (SPC) methodology employed in the check standard testing program allows for the tracking of variations in measurements over time as well as an overall assessment of facility health. While the SPC approach can and does provide researchers with valuable information, it has certain limitations in the areas of process improvement and uncertainty quantification. It is thought that, by utilizing design of experiments methodology in conjunction with current SPC practices, one can characterize uncertainties more efficiently and robustly and develop enhanced process improvement procedures. In this research, methodologies were developed to generate regression models for wind tunnel calibration coefficients, balance force coefficients and wind tunnel flow angularities. The coefficients of these regression models were then tracked in statistical process control charts, giving a higher level of understanding of the processes. The methodology outlined is sufficiently generic that this research can be applicable to any wind tunnel check standard testing program.

  8. Applying the multivariate time-rescaling theorem to neural population models

    PubMed Central

    Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon

    2011-01-01

    Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based upon the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models which neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem, and provide a practical step-by-step procedure for applying it towards testing the sufficiency of neural population models. Using several simple analytically tractable models and also more complex simulated and real data sets, we demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. PMID:21395436
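    As a univariate illustration of the underlying idea (the paper's contribution is the multivariate extension, in which each neuron's conditional intensity also includes coupling terms), the sketch below rescales spike times through the integrated intensity and applies a KS test to the transformed intervals; the simulated inhomogeneous Poisson train and rate function are assumptions.

```python
import numpy as np
from scipy.stats import kstest

def rescaled_interspike_times(spike_times, intensity_fn, t_grid):
    """Time-rescaling: map spike times through the integrated intensity
    Lambda(t); under a correct model the rescaled inter-spike intervals are
    unit-rate exponential, so 1 - exp(-tau) should be Uniform(0, 1)."""
    lam = intensity_fn(t_grid)
    Lambda = np.concatenate([[0.0], np.cumsum(lam[:-1] * np.diff(t_grid))])
    L_at_spikes = np.interp(spike_times, t_grid, Lambda)
    taus = np.diff(L_at_spikes)
    return 1.0 - np.exp(-taus)

# Simulate an inhomogeneous Poisson spike train and test the *true* intensity.
rng = np.random.default_rng(6)
t_grid = np.linspace(0, 100, 10001)
rate = lambda t: 5 + 3 * np.sin(0.2 * t)                    # Hz, peak rate 8
# Thinning: propose at the peak rate, keep with probability rate/peak.
proposals = np.cumsum(rng.exponential(1 / 8.0, size=2000))
proposals = proposals[proposals < 100]
spikes = proposals[rng.uniform(size=proposals.size) < rate(proposals) / 8.0]

u = rescaled_interspike_times(spikes, rate, t_grid)
print(kstest(u, "uniform"))    # large p-value: the (correct) model is not rejected
```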

  9. Alignment-free sequence comparison (II): theoretical power of comparison statistics.

    PubMed

    Wan, Lin; Reinert, Gesine; Sun, Fengzhu; Waterman, Michael S

    2010-11-01

Rapid methods for alignment-free sequence comparison make large-scale comparisons between sequences increasingly feasible. Here we study the power of the statistic D2, which counts the number of matching k-tuples between two sequences, as well as D2*, which uses centralized counts, and D2S, which is a self-standardized version, both from a theoretical viewpoint and numerically, providing an easy-to-use program. The power is assessed under two alternative hidden Markov models; the first one assumes that the two sequences share a common motif, whereas the second model is a pattern transfer model; the null model is that the two sequences are composed of independent and identically distributed letters and they are independent. Under the first alternative model, the means of the tuple counts in the individual sequences change, whereas under the second alternative model, the marginal means are the same as under the null model. Using the limit distributions of the count statistics under the null and the alternative models, we find that generally, asymptotically D2S has the largest power, followed by D2*, whereas the power of D2 can even be zero in some cases. In contrast, even for sequences of length 140,000 bp, in simulations D2* generally has the largest power. Under the first alternative model of a shared motif, the power of D2* approaches 100% when sufficiently many motifs are shared, and we recommend the use of D2* for such practical applications. Under the second alternative model of pattern transfer, the power for all three count statistics does not increase with sequence length when the sequence is sufficiently long, and hence none of the three statistics under consideration can be recommended in such a situation. We illustrate the approach on 323 transcription factor binding motifs with length at most 10 from JASPAR CORE (October 12, 2009 version), verifying that D2* is generally more powerful than D2. The program to calculate the power of D2, D2* and D2S can be downloaded from http://meta.cmb.usc.edu/d2. Supplementary Material is available at www.liebertonline.com/cmb.
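    For reference, the basic D2 statistic is just the inner product of the two sequences' k-mer count vectors; D2* and D2S then center and standardize these counts. A minimal sketch of D2 itself:

```python
from collections import Counter

def kmer_counts(seq, k):
    """Count all overlapping k-tuples (k-mers) in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2(seq_a, seq_b, k):
    """D2: number of matching k-tuples between two sequences, i.e. the inner
    product of their k-mer count vectors."""
    ca, cb = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    return sum(ca[w] * cb[w] for w in ca.keys() & cb.keys())

a = "ACGTACGTGGCATACGT"
b = "TTACGTACCGTGGCATA"
print("D2 (k=3):", d2(a, b, 3))
```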

  10. Wind speed statistics for Goldstone, California, anemometer sites

    NASA Technical Reports Server (NTRS)

    Berg, M.; Levy, R.; Mcginness, H.; Strain, D.

    1981-01-01

    An exploratory wind survey at an antenna complex was summarized statistically for application to future windmill designs. Data were collected at six locations from a total of 10 anemometers. Statistics include means, standard deviations, cubes, pattern factors, correlation coefficients, and exponents for power law profile of wind speed. Curves presented include: mean monthly wind speeds, moving averages, and diurnal variation patterns. It is concluded that three of the locations have sufficiently strong winds to justify consideration for windmill sites.

  11. Voids and constraints on nonlinear clustering of galaxies

    NASA Technical Reports Server (NTRS)

    Vogeley, Michael S.; Geller, Margaret J.; Park, Changbom; Huchra, John P.

    1994-01-01

Void statistics of the galaxy distribution in the Center for Astrophysics Redshift Survey provide strong constraints on galaxy clustering in the nonlinear regime, i.e., on scales R equal to or less than 10/h Mpc. Computation of high-order moments of the galaxy distribution requires a sample that (1) densely traces the large-scale structure and (2) covers sufficient volume to obtain good statistics. The CfA redshift survey densely samples structure on scales equal to or less than 10/h Mpc and has sufficient depth and angular coverage to approach a fair sample on these scales. In the nonlinear regime, the void probability function (VPF) for CfA samples exhibits apparent agreement with hierarchical scaling (such scaling implies that the N-point correlation functions for N greater than 2 depend only on pairwise products of the two-point function xi(r)). However, simulations of cosmological models show that this scaling in redshift space does not necessarily imply such scaling in real space, even in the nonlinear regime; peculiar velocities cause distortions which can yield erroneous agreement with hierarchical scaling. The underdensity probability measures the frequency of 'voids' with density rho less than 0.2 times the mean density rho-bar. This statistic reveals a paucity of very bright galaxies (L greater than L*) in the 'voids.' Underdensities are equal to or greater than 2 sigma more frequent in bright galaxy samples than in samples that include fainter galaxies. Comparison of void statistics of CfA samples with simulations of a range of cosmological models favors models with Gaussian primordial fluctuations and Cold Dark Matter (CDM)-like initial power spectra. Biased models tend to produce voids that are too empty. We also compare these data with three specific models of the Cold Dark Matter cosmogony: an unbiased, open universe CDM model (omega = 0.4, h = 0.5) provides a good match to the VPF of the CfA samples. Biasing of the galaxy distribution in the 'standard' CDM model (omega = 1, b = 1.5; see below for definitions) and nonzero cosmological constant CDM model (omega = 0.4, h = 0.6, lambda_0 = 0.6, b = 1.3) produce voids that are too empty. All three simulations match the observed VPF and underdensity probability for samples of very bright (M less than M* = -19.2) galaxies, but produce voids that are too empty when compared with samples that include fainter galaxies.
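    For readers unfamiliar with the statistic, the void probability function P0(R) can be estimated by placing random spheres of radius R and recording the fraction that contain no galaxies. The sketch below does this for an unclustered (Poisson) mock catalogue in a periodic box, where P0 should approach exp(−n̄V); the box size and catalogue are invented.

```python
import numpy as np
from scipy.spatial import cKDTree

def void_probability(points, box_size, radius, n_spheres=20000, seed=0):
    """Monte Carlo estimate of the void probability function P0(R): the chance
    that a randomly placed sphere of radius R contains no galaxies.
    Assumes a periodic cubic box purely for illustration."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(points, boxsize=box_size)
    centers = rng.uniform(0, box_size, size=(n_spheres, 3))
    counts = np.array([len(nbrs) for nbrs in tree.query_ball_point(centers, r=radius)])
    return np.mean(counts == 0)

# Poisson (unclustered) mock catalogue: P0 should be close to exp(-n*V).
rng = np.random.default_rng(7)
box, n_gal = 100.0, 5000
galaxies = rng.uniform(0, box, size=(n_gal, 3))
for R in (2.0, 5.0, 8.0):
    nbar_V = n_gal / box**3 * (4 / 3) * np.pi * R**3
    print(f"R={R}: VPF={void_probability(galaxies, box, R):.3f}, "
          f"Poisson exp(-nV)={np.exp(-nbar_V):.3f}")
```

    Clustered catalogues produce larger P0 than the Poisson expectation at fixed density, which is the excess the abstract uses to constrain models.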

  12. Integrated Cognitive-neuroscience Architectures for Understanding Sensemaking (ICArUS): A Computational Basis for ICArUS Challenge Problem Design

    DTIC Science & Technology

    2014-11-01

Kullback, S., & Leibler, R. (1951). On information and sufficiency. Annals of Mathematical Statistics, 22, 79… …cognitive challenges of sensemaking only informally, using conceptual notions like "framing" and "re-framing", which are not sufficient to support T&E in… …appropriate frame(s) from memory. Assess the Frame: evaluate the quality of fit between data and frame. Generate Hypotheses: use the current…

  13. Much ado about two: reconsidering retransformation and the two-part model in health econometrics.

    PubMed

    Mullahy, J

    1998-06-01

In health economics applications involving outcomes (y) and covariates (x), it is often the case that the central inferential problems of interest involve E[y|x] and its associated partial effects or elasticities. Many such outcomes have two fundamental statistical properties: y ≥ 0; and the outcome y = 0 is observed with sufficient frequency that the zeros cannot be ignored econometrically. This paper (1) describes circumstances where the standard two-part model with homoskedastic retransformation will fail to provide consistent inferences about important policy parameters; and (2) demonstrates some alternative approaches that are likely to prove helpful in applications.
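    A minimal sketch of the standard two-part model with a homoskedastic (Duan smearing) retransformation, the very construction whose failure modes the paper analyzes; the synthetic data, covariate, and coefficients are assumptions.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic expenditure-type outcome: many exact zeros, skewed positives.
rng = np.random.default_rng(8)
n = 2000
x = rng.normal(size=n)
X = sm.add_constant(x)
any_use = rng.uniform(size=n) < 1 / (1 + np.exp(-(-0.3 + 0.8 * x)))
y = np.where(any_use, np.exp(1.0 + 0.5 * x + rng.normal(0, 0.8, n)), 0.0)

# Part 1: probability of any positive outcome.
part1 = sm.Logit(any_use.astype(float), X).fit(disp=False)

# Part 2: log-scale OLS on the positive outcomes, retransformed with Duan's
# smearing factor (a homoskedastic retransformation; the paper's point is
# that this can mislead when the error variance depends on x).
pos = y > 0
part2 = sm.OLS(np.log(y[pos]), X[pos]).fit()
smear = np.mean(np.exp(part2.resid))

# E[y|x] = P(y>0|x) * E[y|y>0, x]
E_y = part1.predict(X) * np.exp(part2.predict(X)) * smear
print("mean predicted E[y|x]:", E_y.mean().round(2), " sample mean of y:", y.mean().round(2))
```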

  14. The Clinical Ethnographic Interview: A user-friendly guide to the cultural formulation of distress and help seeking

    PubMed Central

    Arnault, Denise Saint; Shimabukuro, Shizuka

    2013-01-01

    Transcultural nursing, psychiatry, and medical anthropology have theorized that practitioners and researchers need more flexible instruments to gather culturally relevant illness experience, meaning, and help seeking. The state of the science is sufficiently developed to allow standardized yet ethnographically sound protocols for assessment. However, vigorous calls for culturally adapted assessment models have yielded little real change in routine practice. This paper describes the conversion of the Diagnostic and Statistical Manual IV, Appendix I Outline for Cultural Formulation into a user-friendly Clinical Ethnographic Interview (CEI), and provides clinical examples of its use in a sample of highly distressed Japanese women. PMID:22194348

  15. Base-flow characteristics of streams in the Valley and Ridge, the Blue Ridge, and the Piedmont physiographic provinces of Virginia

    USGS Publications Warehouse

    Nelms, David L.; Harlow, George E.; Hayes, Donald C.

    1997-01-01

Growth within the Valley and Ridge, Blue Ridge, and Piedmont physiographic provinces of Virginia has focused concern about allocation of surface-water flow and increased demands on the ground-water resources. Potential surface-water yield was determined from statistical analysis of base-flow characteristics of streams. Base-flow characteristics also may provide a relative indication of the potential ground-water yield for areas that lack sufficient specific-capacity or well-yield data; however, other factors need to be considered, such as geologic structure, lithology, precipitation, relief, and the degree of hydraulic interconnection between the regolith and bedrock.

  16. Evaluating sufficient similarity for drinking-water disinfection by-product (DBP) mixtures with bootstrap hypothesis test procedures.

    PubMed

    Feder, Paul I; Ma, Zhenxu J; Bull, Richard J; Teuschler, Linda K; Rice, Glenn

    2009-01-01

    In chemical mixtures risk assessment, the use of dose-response data developed for one mixture to estimate risk posed by a second mixture depends on whether the two mixtures are sufficiently similar. While evaluations of similarity may be made using qualitative judgments, this article uses nonparametric statistical methods based on the "bootstrap" resampling technique to address the question of similarity among mixtures of chemical disinfectant by-products (DBP) in drinking water. The bootstrap resampling technique is a general-purpose, computer-intensive approach to statistical inference that substitutes empirical sampling for theoretically based parametric mathematical modeling. Nonparametric, bootstrap-based inference involves fewer assumptions than parametric normal theory based inference. The bootstrap procedure is appropriate, at least in an asymptotic sense, whether or not the parametric, distributional assumptions hold, even approximately. The statistical analysis procedures in this article are initially illustrated with data from 5 water treatment plants (Schenck et al., 2009), and then extended using data developed from a study of 35 drinking-water utilities (U.S. EPA/AMWA, 1989), which permits inclusion of a greater number of water constituents and increased structure in the statistical models.

  17. Application of microarray analysis on computer cluster and cloud platforms.

    PubMed

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.
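    A toy example of why such analyses parallelize so naturally: the permutation batches below are computationally independent, so the same code can be spread across cluster nodes or cloud instances as easily as across local cores. The expression matrix, batch sizes, and p-value construction are illustrative assumptions.

```python
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(9)
expr = rng.normal(size=(500, 40))            # 500 genes x 40 arrays
labels = np.array([0] * 20 + [1] * 20)       # two experimental groups
observed = expr[:, labels == 1].mean(axis=1) - expr[:, labels == 0].mean(axis=1)

def permutation_batch(args):
    """One batch of label permutations; batches are independent, so they can
    run on separate cluster nodes or cloud instances just as well as on cores."""
    seed, n_perm = args
    local_rng = np.random.default_rng(seed)
    exceed = np.zeros(expr.shape[0])
    for _ in range(n_perm):
        perm = local_rng.permutation(labels)
        diff = expr[:, perm == 1].mean(axis=1) - expr[:, perm == 0].mean(axis=1)
        exceed += np.abs(diff) >= np.abs(observed)
    return exceed

if __name__ == "__main__":
    batches = [(seed, 250) for seed in range(8)]        # 8 x 250 = 2000 permutations
    with Pool(processes=4) as pool:
        total_exceed = sum(pool.map(permutation_batch, batches))
    p_values = (total_exceed + 1) / (2000 + 1)
    print("smallest permutation p-value:", p_values.min())
```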

  18. Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions.

    PubMed

    Liu, Hongcheng; Yao, Tao; Li, Runze; Ye, Yinyu

    2017-11-01

This paper concerns the folded concave penalized sparse linear regression (FCPSLR), a class of popular sparse recovery methods. Although FCPSLR yields desirable recovery performance when solved globally, computing a global solution is NP-complete. Despite some existing statistical performance analyses on local minimizers or on specific FCPSLR-based learning algorithms, it remains an open question whether local solutions that are known to admit fully polynomial-time approximation schemes (FPTAS) may already be sufficient to ensure the statistical performance, and whether that statistical performance can be non-contingent on the specific designs of computing procedures. To address these questions, this paper presents the following threefold results: (i) Any local solution (stationary point) is a sparse estimator, under some conditions on the parameters of the folded concave penalties. (ii) Perhaps more importantly, any local solution satisfying a significant subspace second-order necessary condition (S3ONC), which is weaker than the second-order KKT condition, yields a bounded error in approximating the true parameter with high probability. In addition, if the minimal signal strength is sufficient, the S3ONC solution likely recovers the oracle solution. This result also explicates that the goal of improving the statistical performance is consistent with the optimization criteria of minimizing the suboptimality gap in solving the non-convex programming formulation of FCPSLR. (iii) We apply (ii) to the special case of FCPSLR with minimax concave penalty (MCP) and show that under the restricted eigenvalue condition, any S3ONC solution with a better objective value than the Lasso solution entails the strong oracle property. In addition, such a solution generates a model error (ME) comparable to the optimal but exponential-time sparse estimator given a sufficient sample size, while the worst-case ME is comparable to the Lasso in general. Furthermore, solutions guaranteed to satisfy the S3ONC admit an FPTAS.
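    For reference, the minimax concave penalty mentioned in (iii) is usually written as λ|t| − t²/(2γ) for |t| ≤ γλ and as γλ²/2 beyond that threshold; below is a direct transcription of that standard definition (parameter names are assumed):

```python
import numpy as np

def mcp_penalty(t, lam, gamma):
    """Minimax concave penalty (MCP), evaluated elementwise."""
    t = np.abs(np.asarray(t, dtype=float))
    inner = lam * t - t**2 / (2 * gamma)     # region |t| <= gamma * lam
    outer = 0.5 * gamma * lam**2             # constant region |t| > gamma * lam
    return np.where(t <= gamma * lam, inner, outer)

print(mcp_penalty([-3.0, -0.5, 0.0, 0.5, 3.0], lam=1.0, gamma=2.0))
```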

  19. Exact goodness-of-fit tests for Markov chains.

    PubMed

    Besag, J; Mondal, D

    2013-06-01

    Goodness-of-fit tests are useful in assessing whether a statistical model is consistent with available data. However, the usual χ² asymptotics often fail, either because of the paucity of the data or because a nonstandard test statistic is of interest. In this article, we describe exact goodness-of-fit tests for first- and higher order Markov chains, with particular attention given to time-reversible ones. The tests are obtained by conditioning on the sufficient statistics for the transition probabilities and are implemented by simple Monte Carlo sampling or by Markov chain Monte Carlo. They apply both to single and to multiple sequences and allow a free choice of test statistic. Three examples are given. The first concerns multiple sequences of dry and wet January days for the years 1948-1983 at Snoqualmie Falls, Washington State, and suggests that standard analysis may be misleading. The second one is for a four-state DNA sequence and lends support to the original conclusion that a second-order Markov chain provides an adequate fit to the data. The last one is six-state atomistic data arising in molecular conformational dynamics simulation of solvated alanine dipeptide and points to strong evidence against a first-order reversible Markov chain at 6 picosecond time steps. © 2013, The International Biometric Society.

  20. Statistics and Informatics in Space Astrophysics

    NASA Astrophysics Data System (ADS)

    Feigelson, E.

    2017-12-01

    The interest in statistical and computational methodology has seen rapid growth in space-based astrophysics, parallel to the growth seen in Earth remote sensing. There is widespread agreement that scientific interpretation of the cosmic microwave background, discovery of exoplanets, and classifying multiwavelength surveys is too complex to be accomplished with traditional techniques. NASA operates several well-functioning Science Archive Research Centers providing 0.5 PBy datasets to the research community. These databases are integrated with full-text journal articles in the NASA Astrophysics Data System (200K pageviews/day). Data products use interoperable formats and protocols established by the International Virtual Observatory Alliance. NASA supercomputers also support complex astrophysical models of systems such as accretion disks and planet formation. Academic researcher interest in methodology has significantly grown in areas such as Bayesian inference and machine learning, and statistical research is underway to treat problems such as irregularly spaced time series and astrophysical model uncertainties. Several scholarly societies have created interest groups in astrostatistics and astroinformatics. Improvements are needed on several fronts. Community education in advanced methodology is not sufficiently rapid to meet the research needs. Statistical procedures within NASA science analysis software are sometimes not optimal, and pipeline development may not use modern software engineering techniques. NASA offers few grant opportunities supporting research in astroinformatics and astrostatistics.

  1. Recruitment of Older Adults: Success May Be in the Details.

    PubMed

    McHenry, Judith C; Insel, Kathleen C; Einstein, Gilles O; Vidrine, Amy N; Koerner, Kari M; Morrow, Daniel G

    2015-10-01

    Describe recruitment strategies used in a randomized clinical trial of a behavioral prospective memory intervention to improve medication adherence for older adults taking antihypertensive medication. Recruitment strategies represent 4 themes: accessing an appropriate population, communication and trust-building, providing comfort and security, and expressing gratitude. Recruitment activities resulted in 276 participants with a mean age of 76.32 years, and study enrollment included 207 women, 69 men, and 54 persons representing ethnic minorities. Recruitment success was linked to cultivating relationships with community-based organizations, face-to-face contact with potential study participants, and providing service (e.g., blood pressure checks) as an access point to eligible participants. Seventy-two percent of potential participants who completed a follow-up call and met eligibility criteria were enrolled in the study. The attrition rate was 14.34%. The projected increase in the number of older adults intensifies the need to study interventions that improve health outcomes. The challenge is to recruit sufficient numbers of participants who are also representative of older adults to test these interventions. Failing to recruit a sufficient and representative sample can compromise statistical power and the generalizability of study findings. © The Author 2012. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. “Plateau”-related summary statistics are uninformative for comparing working memory models

    PubMed Central

    van den Berg, Ronald; Ma, Wei Ji

    2014-01-01

    Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon. Zhang and Luck (2008) and Anderson, Vogel, and Awh (2011) noticed that as more items need to be remembered, “memory noise” seems to first increase and then reach a “stable plateau.” They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided, at most, 0.15% of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials were required to achieve 99% correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. At realistic numbers of trials, plateau-related summary statistics are completely unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (2011), we found that the evidence in the summary statistics was, at most, 0.12% of the evidence in the raw data and far too weak to warrant any conclusions. These findings call into question claims about working memory that are based on summary statistics. PMID:24719235

  3. Towards Accurate Modelling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-04-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter halos. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the "accurate" regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard ΛCDM + halo model against the clustering of SDSS DR7 galaxies. Specifically, we use the projected correlation function, group multiplicity function and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir halos) matches the clustering of low luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the "standard" halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  4. A Monte Carlo Simulation Comparing the Statistical Precision of Two High-Stakes Teacher Evaluation Methods: A Value-Added Model and a Composite Measure

    ERIC Educational Resources Information Center

    Spencer, Bryden

    2016-01-01

    Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…

  5. The relative effects of habitat loss and fragmentation on population genetic variation in the red-cockaded woodpecker (Picoides borealis).

    PubMed

    Bruggeman, Douglas J; Wiegand, Thorsten; Fernández, Néstor

    2010-09-01

The relative influence of habitat loss, fragmentation and matrix heterogeneity on the viability of populations is a critical area of conservation research that remains unresolved. Using simulation modelling, we provide an analysis of the influence both patch size and patch isolation have on abundance, effective population size (N(e)) and F(ST). An individual-based, spatially explicit population model based on 15 years of field work on the red-cockaded woodpecker (Picoides borealis) was applied to different landscape configurations. The variation in landscape patterns was summarized using spatial statistics based on O-ring statistics. By regressing demographic and genetic attributes that emerged across the landscape treatments against the proportion of total habitat and O-ring statistics, we show that O-ring statistics provide an explicit link between population processes, habitat area, and critical thresholds of fragmentation that affect those processes. Spatial distances among land cover classes that affect biological processes translated into critical scales at which the measures of landscape structure correlated best with genetic indices. Therefore, our study infers pattern from process, which contrasts with past studies of landscape genetics. We found that population genetic structure was more strongly affected by fragmentation than population size, which suggests that examining only population size may limit recognition of fragmentation effects that erode genetic variation. If effective population size is used to set recovery goals for endangered species, then habitat fragmentation effects may be sufficiently strong to prevent evaluation of recovery based on the ratio of census:effective population size alone.

  6. An Example of an Improvable Rao-Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator.

    PubMed

    Galili, Tal; Meilijson, Isaac

    2016-01-02

    The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.].
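    As a hedged illustration of the Rao-Blackwell step itself (using the familiar complete-statistic U(0, θ) case rather than the paper's non-complete counterexample), the simulation below conditions the crude unbiased estimator 2·X̄ on the sufficient statistic max(X) and compares mean squared errors.

```python
import numpy as np

# Rao-Blackwellization in the U(0, theta) setting: starting from the crude
# unbiased estimator 2*Xbar and conditioning on the sufficient statistic
# X_(n) = max(X) gives the improved estimator (n+1)/n * max(X).
rng = np.random.default_rng(10)
theta, n, reps = 3.0, 10, 100_000
samples = rng.uniform(0, theta, size=(reps, n))

crude = 2 * samples.mean(axis=1)                      # unbiased, high variance
rao_blackwell = (n + 1) / n * samples.max(axis=1)     # E[2*Xbar | max]

for name, est in [("2*mean", crude), ("(n+1)/n*max", rao_blackwell)]:
    print(f"{name:>12}: mean={est.mean():.3f}, MSE={np.mean((est - theta)**2):.4f}")
```

    Both estimators are unbiased for θ, but the conditioned one has a much smaller mean squared error, which is the improvement the theorem guarantees when the conditioning statistic is sufficient.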

  7. Characterization of palmprints by wavelet signatures via directional context modeling.

    PubMed

    Zhang, Lei; Zhang, David

    2004-06-01

    The palmprint is one of the most reliable physiological characteristics that can be used to distinguish between individuals. Current palmprint-based systems are more user friendly, more cost effective, and require fewer data signatures than traditional fingerprint-based identification systems. The principal lines and wrinkles captured in a low-resolution palmprint image provide more than enough information to uniquely identify an individual. This paper presents a palmprint identification scheme that characterizes a palmprint using a set of statistical signatures. The palmprint is first transformed into the wavelet domain, and the directional context of each wavelet subband is defined and computed in order to collect the predominant coefficients of its principal lines and wrinkles. A set of statistical signatures, which includes gravity center, density, spatial dispersivity and energy, is then defined to characterize the palmprint with the selected directional context values. A classification and identification scheme based on these signatures is subsequently developed. This scheme exploits the features of principal lines and prominent wrinkles sufficiently and achieves satisfactory results. Compared with the line-segments-matching or interesting-points-matching based palmprint verification schemes, the proposed scheme uses a much smaller amount of data signatures. It also provides a convenient classification strategy and more accurate identification.

  8. Dose coverage calculation using a statistical shape model—applied to cervical cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Tilly, David; van de Schoot, Agustinus J. A. J.; Grusell, Erik; Bel, Arjan; Ahnesjö, Anders

    2017-05-01

    A comprehensive methodology for treatment simulation and evaluation of dose coverage probabilities is presented, in which a population-based statistical shape model (SSM) provides samples of fraction-specific patient geometry deformations. The learning data consist of vector fields from deformable image registration of repeated imaging, giving intra-patient deformations that are mapped to an average patient serving as a common frame of reference. The SSM is created by extracting the most dominating eigenmodes through principal component analysis of the deformations from all patients. Sampling a deformation is thus reduced to sampling weights for enough of the most dominating eigenmodes to describe the deformations. For the cervical cancer patient datasets in this work, seven eigenmodes were sufficient to capture 90% of the variance in the deformations, and only three eigenmodes were needed for stability in the simulated dose coverage probabilities. The normality assumption of the eigenmode weights was tested and found reasonable for the 20 most dominating eigenmodes except the first. Individualization of the SSM is demonstrated to be improved using two deformation samples from a new patient. The probabilistic evaluation provided additional information about the trade-offs compared with conventional single-dataset treatment planning.
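
    A minimal sketch of the eigenmode machinery described above, assuming the deformation vector fields have already been mapped to the common reference anatomy and flattened to one row per observation. The 90% variance cut-off and the Gaussian sampling of eigenmode weights follow the abstract; the array sizes and data are placeholders.

```python
import numpy as np

# Hypothetical input: one flattened deformation vector field per imaging session,
# already mapped to the common reference (rows = observations, cols = 3*N voxels).
def build_ssm(deformations, var_frac=0.90):
    mean = deformations.mean(axis=0)
    centered = deformations - mean
    # PCA via SVD; rows of Vt are the eigenmodes of the deformation covariance.
    _, s, Vt = np.linalg.svd(centered, full_matrices=False)
    var = s**2 / (len(deformations) - 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_frac)) + 1
    return mean, Vt[:k], np.sqrt(var[:k])

def sample_deformation(mean, modes, sigmas, rng):
    w = rng.normal(0.0, sigmas)      # Gaussian eigenmode weights (normality assumed)
    return mean + w @ modes

rng = np.random.default_rng(1)
demo = rng.normal(size=(25, 300))    # toy stand-in for registered deformation fields
mean, modes, sigmas = build_ssm(demo)
print(len(sigmas), sample_deformation(mean, modes, sigmas, rng).shape)
```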

  9. Are X-rays the key to integrated computational materials engineering?

    DOE PAGES

    Ice, Gene E.

    2015-11-01

    The ultimate dream of materials science is to predict materials behavior from composition and processing history. Owing to the growing power of computers, this long-time dream has recently found expression through worldwide excitement in a number of computation-based thrusts: integrated computational materials engineering, materials by design, computational materials design, three-dimensional materials physics and mesoscale physics. However, real materials have important crystallographic structures at multiple length scales, which evolve during processing and in service. Moreover, real materials properties can depend on the extreme tails in their structural and chemical distributions. This makes it critical to map structural distributions with sufficient resolution to resolve small structures and with sufficient statistics to capture the tails of distributions. For two-dimensional materials, there are high-resolution nondestructive probes of surface and near-surface structures with atomic or near-atomic resolution that can provide detailed structural, chemical and functional distributions over important length scales. However, there are no nondestructive three-dimensional probes with atomic resolution over the multiple length scales needed to understand most materials.

  10. Power analysis as a tool to identify statistically informative indicators for monitoring coral reef disturbances.

    PubMed

    Van Wynsberge, Simon; Gilbert, Antoine; Guillemot, Nicolas; Heintz, Tom; Tremblay-Boyer, Laura

    2017-07-01

    Extensive biological field surveys are costly and time consuming. To optimize sampling and ensure regular monitoring on the long term, identifying informative indicators of anthropogenic disturbances is a priority. In this study, we used 1800 candidate indicators by combining metrics measured from coral, fish, and macro-invertebrate assemblages surveyed from 2006 to 2012 in the vicinity of an ongoing mining project in the Voh-Koné-Pouembout lagoon, New Caledonia. We performed a power analysis to identify a subset of indicators which would best discriminate temporal changes due to a simulated chronic anthropogenic impact. Only 4% of tested indicators were likely to detect a 10% annual decrease of values with sufficient power (>0.80). Corals generally exerted higher statistical power than macro-invertebrates and fishes because of lower natural variability and higher occurrence. For the same reasons, higher taxonomic ranks provided higher power than lower taxonomic ranks. Nevertheless, a number of families of common sedentary or sessile macro-invertebrates and fishes also performed well in detecting changes: Echinometridae, Isognomidae, Muricidae, Tridacninae, Arcidae, and Turbinidae for macro-invertebrates and Pomacentridae, Labridae, and Chaetodontidae for fishes. Interestingly, these families did not provide high power in all geomorphological strata, suggesting that the ability of indicators in detecting anthropogenic impacts was closely linked to reef geomorphology. This study provides a first operational step toward identifying statistically relevant indicators of anthropogenic disturbances in New Caledonia's coral reefs, which can be useful in similar tropical reef ecosystems where little information is available regarding the responses of ecological indicators to anthropogenic disturbances.
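
    The power calculation described above can be approximated with a simple simulation: impose a 10% annual decline on an indicator, add natural variability, and count how often a trend test flags a significant decrease. The baseline values, coefficients of variation, and the linear-trend test below are illustrative assumptions rather than the study's exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def power_annual_decline(baseline, cv, years=7, decline=0.10,
                         n_sims=2000, alpha=0.05):
    """Fraction of simulated monitoring series in which a linear trend test
    detects a proportional annual `decline`, given natural variability `cv`."""
    t = np.arange(years)
    detected = 0
    for _ in range(n_sims):
        expected = baseline * (1.0 - decline) ** t
        obs = rng.normal(expected, cv * expected)     # simple noise model (assumption)
        slope, _, _, p, _ = stats.linregress(t, obs)
        detected += (p < alpha) and (slope < 0)
    return detected / n_sims

# Hypothetical indicators: low versus high natural variability
print(power_annual_decline(baseline=30.0, cv=0.15))   # e.g. a sessile, common taxon
print(power_annual_decline(baseline=30.0, cv=0.60))   # e.g. a mobile, patchy taxon
```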

  11. Scaling properties of multiscale equilibration

    NASA Astrophysics Data System (ADS)

    Detmold, W.; Endres, M. G.

    2018-04-01

    We investigate the lattice spacing dependence of the equilibration time for a recently proposed multiscale thermalization algorithm for Markov chain Monte Carlo simulations. The algorithm uses a renormalization-group matched coarse lattice action and prolongation operation to rapidly thermalize decorrelated initial configurations for evolution using a corresponding target lattice action defined at a finer scale. Focusing on nontopological long-distance observables in pure SU(3) gauge theory, we provide quantitative evidence that the slow modes of the Markov process, which provide the dominant contribution to the rethermalization time, have a suppressed contribution toward the continuum limit, despite their associated timescales increasing. Based on these numerical investigations, we conjecture that the prolongation operation used herein will produce ensembles that are indistinguishable from the target fine-action distribution for a sufficiently fine coupling at a given level of statistical precision, thereby eliminating the cost of rethermalization.

  12. A psychometric evaluation of the digital logic concept inventory

    NASA Astrophysics Data System (ADS)

    Herman, Geoffrey L.; Zilles, Craig; Loui, Michael C.

    2014-10-01

    Concept inventories hold tremendous promise for promoting the rigorous evaluation of teaching methods that might remedy common student misconceptions and promote deep learning. The measurements from concept inventories can be trusted only if the concept inventories are evaluated both by expert feedback and statistical scrutiny (psychometric evaluation). Classical Test Theory and Item Response Theory provide two psychometric frameworks for evaluating the quality of assessment tools. We discuss how these theories can be applied to assessment tools generally and then apply them to the Digital Logic Concept Inventory (DLCI). We demonstrate that the DLCI is sufficiently reliable for research purposes when used in its entirety and as a post-course assessment of students' conceptual understanding of digital logic. The DLCI can also discriminate between students across a wide range of ability levels, providing the most information about weaker students' ability levels.
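
    As a small illustration of the Classical Test Theory side of such an evaluation, the sketch below computes Cronbach's alpha, a standard internal-consistency reliability estimate, on simulated dichotomous responses generated from a Rasch-like model. The inventory length, sample size, and parameters are hypothetical and are not DLCI data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Classical Test Theory internal-consistency estimate.
    `scores`: 2-D array, rows = students, columns = dichotomously scored items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

rng = np.random.default_rng(3)
ability = rng.normal(size=200)          # hypothetical student abilities
difficulty = rng.normal(size=20)        # hypothetical item difficulties
# Toy responses from a 1-parameter (Rasch-like) IRT model, just to exercise the function.
p_correct = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = (rng.random((200, 20)) < p_correct).astype(int)
print(round(cronbach_alpha(responses), 3))
```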

  13. Rasch analysis on OSCE data : An illustrative example.

    PubMed

    Tor, E; Steketee, C

    2011-01-01

    The Objective Structured Clinical Examination (OSCE) is a widely used tool for the assessment of clinical competence in health professional education. The goal of the OSCE is to make reproducible decisions on pass/fail status as well as students' levels of clinical competence according to their demonstrated abilities based on the scores. This paper explores the use of the polytomous Rasch model in evaluating the psychometric properties of OSCE scores through a case study. The authors analysed an OSCE data set (comprising 11 stations) for 80 fourth-year medical students based on the polytomous Rasch model in an effort to answer two research questions: (1) Do the clinical tasks assessed in the 11 OSCE stations map on to a common underlying construct, namely clinical competence? (2) What other insights can Rasch analysis offer in terms of scaling, item analysis and instrument validation over and above the conventional analysis based on classical test theory? The OSCE data set has demonstrated a sufficient degree of fit to the Rasch model (χ2 = 17.060, df = 22, p = 0.76), indicating that the 11 OSCE station scores have sufficient psychometric properties to form a measure for a common underlying construct, i.e. clinical competence. Individual OSCE station scores with good fit to the Rasch model (p > 0.1 for all χ2 statistics) further corroborated the characteristic of unidimensionality of the OSCE scale for clinical competence. A Person Separation Index (PSI) of 0.704 indicates a sufficient level of reliability for the OSCE scores. Other useful findings from the Rasch analysis that provide insights, over and above the analysis based on classical test theory, are also exemplified using the data set. The polytomous Rasch model provides a useful and supplementary approach to the calibration and analysis of OSCE examination data.

  14. The Effect of Clothing on the Rate of Decomposition and Diptera Colonization on Sus scrofa Carcasses.

    PubMed

    Card, Allison; Cross, Peter; Moffatt, Colin; Simmons, Tal

    2015-07-01

    Twenty Sus scrofa carcasses were used to study the effect the presence of clothing had on decomposition rate and colonization locations of Diptera species; 10 unclothed control carcasses were compared to 10 clothed experimental carcasses over 58 days. Data collection occurred at regular accumulated degree day intervals; the level of decomposition as Total Body Score (TBSsurf), pattern of decomposition, and Diptera present were documented. Results indicated a statistically significant difference in the rate of decomposition (t(427) = 2.59, p = 0.010), with unclothed carcasses decomposing faster than clothed carcasses. However, the overall decomposition rates of the two carcass groups are too similar to separate when applying a 95% CI, which means that, although statistically significant, they are not, from a practical forensic point of view, sufficiently dissimilar to warrant the application of different formulae to estimate the postmortem interval. Further results demonstrated that clothing provided blow flies with additional colonization locations. © 2015 American Academy of Forensic Sciences.
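
    A hedged sketch of the kind of comparison reported above: an independent-samples t-test on decomposition rates can reach statistical significance even when the group means are close; the abstract's practical point is that the closeness of the rates (for example, overlapping 95% confidence intervals) matters more than the p-value when deciding whether separate PMI formulae are warranted. The rate values and sample sizes below are invented for illustration and do not reproduce the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical decomposition rates (TBS per accumulated degree day); the group
# means, spread, and sample sizes are invented, chosen only so that df is large.
unclothed = rng.normal(0.0550, 0.010, size=215)
clothed = rng.normal(0.0528, 0.010, size=214)

t, p = stats.ttest_ind(unclothed, clothed)
print(f"t = {t:.2f}, p = {p:.4f}")

# The paper's caveat: a significant t-test does not by itself mean the group
# rates are far enough apart to justify separate formulae; the 95% confidence
# intervals give a sense of how close the mean rates actually are.
for name, x in [("unclothed", unclothed), ("clothed", clothed)]:
    ci = stats.t.interval(0.95, len(x) - 1, loc=x.mean(), scale=stats.sem(x))
    print(name, np.round(ci, 4))
```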

  15. Wang-Landau Reaction Ensemble Method: Simulation of Weak Polyelectrolytes and General Acid-Base Reactions.

    PubMed

    Landsgesell, Jonas; Holm, Christian; Smiatek, Jens

    2017-02-14

    We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while the accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides sufficient statistical accuracy, such that meaningful estimates for the density of states and the partition sum can be obtained. From these estimates, several thermodynamic observables, such as the heat capacity or reaction free energies, can be calculated. We demonstrate that the computation times for the calculation of titration curves with a high statistical accuracy can be significantly decreased when compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
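
    For readers unfamiliar with the Wang-Landau ingredient, the sketch below applies the flat-histogram idea to a toy 4×4 Ising model rather than to the reaction ensemble used in the paper: the running log density of states is incremented at every visited energy, and the modification factor is halved whenever the energy histogram is roughly flat. Lattice size, flatness threshold, and stopping criterion are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
L = 4                                     # toy 4x4 Ising lattice, periodic boundaries
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    # Each bond counted once via right/down neighbours.
    return int(-np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))

log_g = {}                                # running estimate of the log density of states
hist = {}
f = 1.0                                   # modification factor (log scale)
E = energy(spins)

while f > 1e-3:                           # stopping criterion for this toy run
    for _ in range(20_000):
        i, j = rng.integers(0, L, size=2)
        dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        E_new = E + dE
        # Accept with probability min(1, g(E)/g(E_new)), evaluated in log space.
        if np.log(rng.random()) < log_g.get(E, 0.0) - log_g.get(E_new, 0.0):
            spins[i, j] *= -1
            E = E_new
        log_g[E] = log_g.get(E, 0.0) + f
        hist[E] = hist.get(E, 0) + 1
    if min(hist.values()) > 0.8 * (sum(hist.values()) / len(hist)):   # rough flatness check
        f /= 2.0                          # refine the density-of-states estimate
        hist = {}

# Relative log density of states for the visited energy levels.
print({e: round(v - min(log_g.values()), 1) for e, v in sorted(log_g.items())})
```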

  16. Phase Space Dissimilarity Measures for Structural Health Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bubacz, Jacob A; Chmielewski, Hana T; Pape, Alexander E

    A novel method for structural health monitoring (SHM), known as the Phase Space Dissimilarity Measures (PSDM) approach, is proposed and developed. The patented PSDM approach has already been developed and demonstrated for a variety of equipment and biomedical applications. Here, we investigate SHM of bridges via analysis of time serial accelerometer measurements. This work has four aspects. The first is algorithm scalability, which was found to scale linearly from one processing core to four cores. Second, the same data are analyzed to determine how the use of the PSDM approach affects sensor placement. We found that a relatively low-density placement sufficiently captures the dynamics of the structure. Third, the same data are analyzed by unique combinations of accelerometer axes (vertical, longitudinal, and lateral with respect to the bridge) to determine how the choice of axes affects the analysis. The vertical axis is found to provide satisfactory SHM data. Fourth, statistical methods were investigated to validate the PSDM approach for this application, yielding statistically significant results.

  17. On information, negentropy and H-theorem

    NASA Astrophysics Data System (ADS)

    Chakrabarti, C. G.; Sarker, N. G.

    1983-09-01

    The paper deals with the importance of the Kullback discrimination information in the statistical characterization of the negentropy of a non-equilibrium state and the irreversibility of a classical dynamical system. The theory, based on the Kullback discrimination information as the H-function, gives new insight into the interrelation between the concepts of coarse-graining and the principle of sufficiency, leading to an important statistical characterization of the thermal equilibrium of a closed system.

  18. 49 CFR 40.195 - What happens when an individual is unable to provide a sufficient amount of urine for a pre...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... provide a sufficient amount of urine for a pre-employment follow-up or return-to-duty test because of a... providing a sufficient specimen for a pre-employment follow-up or return-to-duty test and the condition... Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Drug Tests...

  19. 49 CFR 40.195 - What happens when an individual is unable to provide a sufficient amount of urine for a pre...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... provide a sufficient amount of urine for a pre-employment follow-up or return-to-duty test because of a... providing a sufficient specimen for a pre-employment follow-up or return-to-duty test and the condition... Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Drug Tests...

  20. 49 CFR 40.195 - What happens when an individual is unable to provide a sufficient amount of urine for a pre...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... provide a sufficient amount of urine for a pre-employment follow-up or return-to-duty test because of a... providing a sufficient specimen for a pre-employment follow-up or return-to-duty test and the condition... Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Drug Tests...

  1. 49 CFR 40.195 - What happens when an individual is unable to provide a sufficient amount of urine for a pre...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... provide a sufficient amount of urine for a pre-employment follow-up or return-to-duty test because of a... providing a sufficient specimen for a pre-employment follow-up or return-to-duty test and the condition... Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Drug Tests...

  2. 49 CFR 40.195 - What happens when an individual is unable to provide a sufficient amount of urine for a pre...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... provide a sufficient amount of urine for a pre-employment follow-up or return-to-duty test because of a... providing a sufficient specimen for a pre-employment follow-up or return-to-duty test and the condition... Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Drug Tests...

  3. An absolute chronology for early Egypt using radiocarbon dating and Bayesian statistical modelling

    PubMed Central

    Dee, Michael; Wengrow, David; Shortland, Andrew; Stevenson, Alice; Brock, Fiona; Girdland Flink, Linus; Bronk Ramsey, Christopher

    2013-01-01

    The Egyptian state was formed prior to the existence of verifiable historical records. Conventional dates for its formation are based on the relative ordering of artefacts. This approach is no longer considered sufficient for cogent historical analysis. Here, we produce an absolute chronology for Early Egypt by combining radiocarbon and archaeological evidence within a Bayesian paradigm. Our data cover the full trajectory of Egyptian state formation and indicate that the process occurred more rapidly than previously thought. We provide a timeline for the First Dynasty of Egypt of generational-scale resolution that concurs with prevailing archaeological analysis and produce a chronometric date for the foundation of Egypt that distinguishes between historical estimates. PMID:24204188

  4. Vocational students' learning preferences: the interpretability of ipsative data.

    PubMed

    Smith, P J

    2000-02-01

    A number of researchers have argued that ipsative data are not suitable for statistical procedures designed for normative data. Others have argued that the interpretability of such analyses of ipsative data is little affected when the number of variables and the sample size are sufficiently large. The research reported here represents a factor analysis of the scores on the Canfield Learning Styles Inventory for 1,252 students in vocational education. The results of the factor analysis of these ipsative data were examined in the context of existing theory and research on vocational students and lend support to the argument that the factor analysis of ipsative data can provide sensibly interpretable results.

  5. An Adaptive Technique for a Redundant-Sensor Navigation System. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chien, T. T.

    1972-01-01

    An on-line adaptive technique is developed to provide a self-contained redundant-sensor navigation system with the capability to realize its full potential in reliability and performance. The gyro navigation system is modeled as a Gauss-Markov process, with degradation modes defined as changes in characteristics specified by parameters associated with the model. The adaptive system is formulated as a multistage stochastic process: (1) a detection system, (2) an identification system and (3) a compensation system. It is shown that the sufficient statistic for the partially observable process in the detection and identification system is the posterior measure of the state of degradation, conditioned on the measurement history.
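
    A minimal sketch of the idea that the posterior over degradation states, conditioned on the measurement history, is the quantity carried forward by the detection and identification stages. The two-hypothesis setup below, with nominal and inflated measurement-noise levels standing in for gyro degradation modes, is an assumed simplification of the thesis's Gauss-Markov formulation.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical two-hypothesis setup: nominal gyro noise versus degraded (inflated) noise.
sigma = {"nominal": 1.0, "degraded": 3.0}
log_post = {k: np.log(0.5) for k in sigma}          # equal priors

def update(log_post, z):
    """One recursive Bayes step; the posterior over degradation states is the
    sufficient statistic carried forward for detection and identification."""
    for k, s in sigma.items():
        log_post[k] += -0.5 * (z / s) ** 2 - np.log(s)   # Gaussian log-likelihood (up to a constant)
    norm = np.logaddexp(*log_post.values())
    return {k: v - norm for k, v in log_post.items()}

truth = "degraded"
for _ in range(50):
    z = rng.normal(0.0, sigma[truth])               # measurement residual
    log_post = update(log_post, z)

print({k: round(np.exp(v), 3) for k, v in log_post.items()})
```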

  6. Protein abundances can distinguish between naturally-occurring and laboratory strains of Yersinia pestis, the causative agent of plague

    DOE PAGES

    Merkley, Eric D.; Sego, Landon H.; Lin, Andy; ...

    2017-08-30

    Adaptive processes in bacterial species can occur rapidly in laboratory culture, leading to genetic divergence between naturally occurring and laboratory-adapted strains. Differentiating wild and closely-related laboratory strains is clearly important for biodefense and bioforensics; however, DNA sequence data alone has thus far not provided a clear signature, perhaps due to lack of understanding of how diverse genome changes lead to adapted phenotypes. Protein abundance profiles from mass spectrometry-based proteomics analyses are a molecular measure of phenotype. Proteomics data contains sufficient information that powerful statistical methods can uncover signatures that distinguish wild strains of Yersinia pestis from laboratory-adapted strains.

  7. Note: A pure-sampling quantum Monte Carlo algorithm with independent Metropolis.

    PubMed

    Vrbik, Jan; Ospadov, Egor; Rothstein, Stuart M

    2016-07-14

    Recently, Ospadov and Rothstein published a pure-sampling quantum Monte Carlo algorithm (PSQMC) that features an auxiliary Path Z that connects the midpoints of the current and proposed Paths X and Y, respectively. When sufficiently long, Path Z provides statistical independence of Paths X and Y. Under those conditions, the Metropolis decision used in PSQMC is done without any approximation, i.e., not requiring microscopic reversibility and without having to introduce any G(x → x'; τ) factors into its decision function. This is a unique feature that contrasts with all competing reptation algorithms in the literature. An example illustrates that dependence of Paths X and Y has adverse consequences for pure sampling.

  8. Note: A pure-sampling quantum Monte Carlo algorithm with independent Metropolis

    NASA Astrophysics Data System (ADS)

    Vrbik, Jan; Ospadov, Egor; Rothstein, Stuart M.

    2016-07-01

    Recently, Ospadov and Rothstein published a pure-sampling quantum Monte Carlo algorithm (PSQMC) that features an auxiliary Path Z that connects the midpoints of the current and proposed Paths X and Y, respectively. When sufficiently long, Path Z provides statistical independence of Paths X and Y. Under those conditions, the Metropolis decision used in PSQMC is done without any approximation, i.e., not requiring microscopic reversibility and without having to introduce any G(x → x'; τ) factors into its decision function. This is a unique feature that contrasts with all competing reptation algorithms in the literature. An example illustrates that dependence of Paths X and Y has adverse consequences for pure sampling.

  9. Use of the FDA nozzle model to illustrate validation techniques in computational fluid dynamics (CFD) simulations

    PubMed Central

    Hariharan, Prasanna; D’Souza, Gavin A.; Horner, Marc; Morrison, Tina M.; Malinauskas, Richard A.; Myers, Matthew R.

    2017-01-01

    A “credible” computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing “model credibility” is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a “threshold-based” validation approach that provides a well-defined acceptance criterion, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing the model validity. The validation criterion developed following the threshold approach is not only a function of Comparison Error, E (which is the difference between experiments and simulations) but also takes into account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate if the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results (“S”) of velocity and viscous shear stress were compared with inter-laboratory experimental measurements (“D”). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student’s t-test. However, following the threshold-based approach, a Student’s t-test comparing |S-D| and |Threshold-S| showed that relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. However, for Re = 6500, at certain locations where the shear stress is close to the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in experiments, simulations, and the threshold or by increasing the sample size for the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and for evaluating medical devices. PMID:28594889

  10. Use of the FDA nozzle model to illustrate validation techniques in computational fluid dynamics (CFD) simulations.

    PubMed

    Hariharan, Prasanna; D'Souza, Gavin A; Horner, Marc; Morrison, Tina M; Malinauskas, Richard A; Myers, Matthew R

    2017-01-01

    A "credible" computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing "model credibility" is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a "threshold-based" validation approach that provides a well-defined acceptance criteria, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing the model validity. The validation criteria developed following the threshold approach is not only a function of Comparison Error, E (which is the difference between experiments and simulations) but also takes in to account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate if the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results ("S") of velocity and viscous shear stress were compared with inter-laboratory experimental measurements ("D"). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student's t-test. However, following the threshold-based approach, a Student's t-test comparing |S-D| and |Threshold-S| showed that relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. However, for Re = 6500, at certain locations where the shear stress is close the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in experiments, simulations, and the threshold or by increasing the sample size for the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and for evaluating medical devices.

  11. Sufficient condition for a finite-time singularity in a high-symmetry Euler flow: Analysis and statistics

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Bhattacharjee, A.

    1996-08-01

    A sufficient condition is obtained for the development of a finite-time singularity in a highly symmetric Euler flow, first proposed by Kida [J. Phys. Soc. Jpn. 54, 2132 (1985)] and recently simulated by Boratav and Pelz [Phys. Fluids 6, 2757 (1994)]. It is shown that if the second-order spatial derivative of the pressure (pxx) is positive following a Lagrangian element (on the x axis), then a finite-time singularity must occur. Under some assumptions, this Lagrangian sufficient condition can be reduced to an Eulerian sufficient condition which requires that the fourth-order spatial derivative of the pressure (pxxxx) at the origin be positive for all times leading up to the singularity. Analytical as well as direct numerical evaluation over a large ensemble of initial conditions demonstrate that for fixed total energy, pxxxx is predominantly positive with the average value growing with the numbers of modes.

  12. 49 CFR 40.193 - What happens when an employee does not provide a sufficient amount of urine for a drug test?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... sufficient amount of urine for a drug test? 40.193 Section 40.193 Transportation Office of the Secretary of... § 40.193 What happens when an employee does not provide a sufficient amount of urine for a drug test... sufficient amount of urine to permit a drug test (i.e., 45 mL of urine). (b) As the collector, you must do...

  13. 49 CFR 40.193 - What happens when an employee does not provide a sufficient amount of urine for a drug test?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... sufficient amount of urine for a drug test? 40.193 Section 40.193 Transportation Office of the Secretary of... § 40.193 What happens when an employee does not provide a sufficient amount of urine for a drug test... sufficient amount of urine to permit a drug test (i.e., 45 mL of urine). (b) As the collector, you must do...

  14. 49 CFR 40.193 - What happens when an employee does not provide a sufficient amount of urine for a drug test?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... sufficient amount of urine for a drug test? 40.193 Section 40.193 Transportation Office of the Secretary of... § 40.193 What happens when an employee does not provide a sufficient amount of urine for a drug test... sufficient amount of urine to permit a drug test (i.e., 45 mL of urine). (b) As the collector, you must do...

  15. 49 CFR 40.193 - What happens when an employee does not provide a sufficient amount of urine for a drug test?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... sufficient amount of urine for a drug test? 40.193 Section 40.193 Transportation Office of the Secretary of... § 40.193 What happens when an employee does not provide a sufficient amount of urine for a drug test... sufficient amount of urine to permit a drug test (i.e., 45 mL of urine). (b) As the collector, you must do...

  16. Narrative Review of Statistical Reporting Checklists, Mandatory Statistical Editing, and Rectifying Common Problems in the Reporting of Scientific Articles.

    PubMed

    Dexter, Franklin; Shafer, Steven L

    2017-03-01

    Considerable attention has been drawn to poor reproducibility in the biomedical literature. One explanation is inadequate reporting of statistical methods by authors and inadequate assessment of statistical reporting and methods during peer review. In this narrative review, we examine scientific studies of several well-publicized efforts to improve statistical reporting. We also review several retrospective assessments of the impact of these efforts. These studies show that instructions to authors and statistical checklists are not sufficient; no findings suggested that either improves the quality of statistical methods and reporting. Second, even basic statistics, such as power analyses, are frequently missing or incorrectly performed. Third, statistical review is needed for all papers that involve data analysis. A consistent finding in the studies was that nonstatistical reviewers (eg, "scientific reviewers") and journal editors generally poorly assess statistical quality. We finish by discussing our experience with statistical review at Anesthesia & Analgesia from 2006 to 2016.

  17. A novel approach for choosing summary statistics in approximate Bayesian computation.

    PubMed

    Aeschbacher, Simon; Beaumont, Mark A; Futschik, Andreas

    2012-11-01

    The choice of summary statistics is a crucial step in approximate Bayesian computation (ABC). Since statistics are often not sufficient, this choice involves a trade-off between loss of information and reduction of dimensionality. The latter may increase the efficiency of ABC. Here, we propose an approach for choosing summary statistics based on boosting, a technique from the machine-learning literature. We consider different types of boosting and compare them to partial least-squares regression as an alternative. To mitigate the lack of sufficiency, we also propose an approach for choosing summary statistics locally, in the putative neighborhood of the true parameter value. We study a demographic model motivated by the reintroduction of Alpine ibex (Capra ibex) into the Swiss Alps. The parameters of interest are the mean and standard deviation across microsatellites of the scaled ancestral mutation rate (θ(anc) = 4N(e)u) and the proportion of males obtaining access to matings per breeding season (ω). By simulation, we assess the properties of the posterior distribution obtained with the various methods. According to our criteria, ABC with summary statistics chosen locally via boosting with the L(2)-loss performs best. Applying that method to the ibex data, we estimate θ(anc)≈ 1.288 and find that most of the variation across loci of the ancestral mutation rate u is between 7.7 × 10(-4) and 3.5 × 10(-3) per locus per generation. The proportion of males with access to matings is estimated as ω≈ 0.21, which is in good agreement with recent independent estimates.

  18. A Novel Approach for Choosing Summary Statistics in Approximate Bayesian Computation

    PubMed Central

    Aeschbacher, Simon; Beaumont, Mark A.; Futschik, Andreas

    2012-01-01

    The choice of summary statistics is a crucial step in approximate Bayesian computation (ABC). Since statistics are often not sufficient, this choice involves a trade-off between loss of information and reduction of dimensionality. The latter may increase the efficiency of ABC. Here, we propose an approach for choosing summary statistics based on boosting, a technique from the machine-learning literature. We consider different types of boosting and compare them to partial least-squares regression as an alternative. To mitigate the lack of sufficiency, we also propose an approach for choosing summary statistics locally, in the putative neighborhood of the true parameter value. We study a demographic model motivated by the reintroduction of Alpine ibex (Capra ibex) into the Swiss Alps. The parameters of interest are the mean and standard deviation across microsatellites of the scaled ancestral mutation rate (θanc = 4Neu) and the proportion of males obtaining access to matings per breeding season (ω). By simulation, we assess the properties of the posterior distribution obtained with the various methods. According to our criteria, ABC with summary statistics chosen locally via boosting with the L2-loss performs best. Applying that method to the ibex data, we estimate θ^anc≈1.288 and find that most of the variation across loci of the ancestral mutation rate u is between 7.7 × 10−4 and 3.5 × 10−3 per locus per generation. The proportion of males with access to matings is estimated as ω^≈0.21, which is in good agreement with recent independent estimates. PMID:22960215
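
    A minimal rejection-ABC sketch that makes the role of summary statistics concrete, using a toy Gaussian model in place of the population-genetic simulator: parameter draws are kept when their simulated summaries fall closest to the observed summaries. The choice and weighting of the summaries is exactly where the boosting or partial least-squares step of the work above would enter; all numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate(theta, n=100):
    """Toy data-generating model standing in for the population-genetic simulator."""
    return rng.normal(theta, 1.0, size=n)

def summaries(x):
    # Candidate (generally non-sufficient) summary statistics.
    return np.array([x.mean(), x.std(), np.median(x)])

obs = simulate(1.3)                  # pretend this is the observed data
s_obs = summaries(obs)

# Rejection ABC: keep the prior draws whose simulated summaries land closest to
# the observed summaries (unweighted Euclidean distance here).
n_draws, keep = 20_000, 500
theta_prior = rng.uniform(-5.0, 5.0, size=n_draws)
dist = np.array([np.linalg.norm(summaries(simulate(t)) - s_obs) for t in theta_prior])
posterior = theta_prior[np.argsort(dist)[:keep]]
print(round(posterior.mean(), 3), round(posterior.std(), 3))
```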

  19. Intelligence Failure: How a Commander Can Prevent It

    DTIC Science & Technology

    2009-10-23

    The job of intelligence is to provide the decision maker with sufficient understanding of the enemy to make...

  20. Determinants of 25(OH)D sufficiency in obese minority children: selecting outcome measures and analytic approaches.

    PubMed

    Zhou, Ping; Schechter, Clyde; Cai, Ziyong; Markowitz, Morri

    2011-06-01

    To highlight complexities in defining vitamin D sufficiency in children. Serum 25-(OH) vitamin D [25(OH)D] levels from 140 healthy obese children age 6 to 21 years living in the inner city were compared with multiple health outcome measures, including bone biomarkers and cardiovascular risk factors. Several statistical analytic approaches were used, including Pearson correlation, analysis of covariance (ANCOVA), and "hockey stick" regression modeling. Potential threshold levels for vitamin D sufficiency varied by outcome variable and analytic approach. Only systolic blood pressure (SBP) was significantly correlated with 25(OH)D (r = -0.261; P = .038). ANCOVA revealed that SBP and triglyceride levels were statistically significant in the test groups [25(OH)D <10, <15 and <20 ng/mL] compared with the reference group [25(OH)D >25 ng/mL]. ANCOVA also showed that only children with severe vitamin D deficiency [25(OH)D <10 ng/mL] had significantly higher parathyroid hormone levels (Δ = 15; P = .0334). Hockey stick model regression analyses found evidence of a threshold level in SBP, with a 25(OH)D breakpoint of 27 ng/mL, along with a 25(OH)D breakpoint of 18 ng/mL for triglycerides, but no relationship between 25(OH)D and parathyroid hormone. Defining vitamin D sufficiency should take into account different vitamin D-related health outcome measures and analytic methodologies. Copyright © 2011 Mosby, Inc. All rights reserved.
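
    The "hockey stick" model referred to above is a single-breakpoint piecewise-linear regression; one simple way to fit it is a grid search over candidate breakpoints with ordinary least squares at each candidate, as sketched below. The simulated 25(OH)D and systolic blood pressure values, and the kink placed near 27 ng/mL, are assumptions used only to exercise the code.

```python
import numpy as np

def hockey_stick_fit(x, y, n_grid=200):
    """Grid search for a single breakpoint c: linear in x below c, with an extra
    slope term above c; returns the breakpoint minimising the residual sum of squares."""
    best = (np.inf, None, None)
    for c in np.linspace(np.quantile(x, 0.05), np.quantile(x, 0.95), n_grid):
        X = np.column_stack([np.ones_like(x), x, np.clip(x - c, 0.0, None)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        best = min(best, (float(resid @ resid), c, beta), key=lambda t: t[0])
    return best

rng = np.random.default_rng(9)
vitd = rng.uniform(5.0, 45.0, size=140)
# Hypothetical outcome: SBP falls with 25(OH)D up to about 27 ng/mL, then levels off.
sbp = 120.0 - 0.6 * np.minimum(vitd, 27.0) + rng.normal(0.0, 6.0, size=140)
sse, bp, beta = hockey_stick_fit(vitd, sbp)
print(round(bp, 1), np.round(beta, 2))
```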

  1. Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study.

    PubMed

    Song, Fujian; Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G

    2011-08-16

    Objective: To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Design: Meta-epidemiological study based on a sample of meta-analyses of randomised controlled trials. Data sources: Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria: Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Main outcome measure: Inconsistency measured by the difference in the log odds ratio between the direct and indirect methods. Results: The study included 112 independent trial networks (including 1552 trials with 478,775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Conclusions: Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence.
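
    The inconsistency measure used above can be illustrated with a Bucher-style adjusted indirect comparison: the indirect log odds ratio for A versus B is formed from the A-versus-C and B-versus-C comparisons, and its difference from the direct estimate is tested against the combined standard error. The 2×2 tables below are hypothetical and are not data from the review.

```python
import numpy as np
from scipy import stats

def log_or(events_1, n_1, events_0, n_0):
    """Log odds ratio and its standard error from a 2x2 table (no zero cells)."""
    a, b = events_1, n_1 - events_1
    c, d = events_0, n_0 - events_0
    return np.log(a * d / (b * c)), np.sqrt(1/a + 1/b + 1/c + 1/d)

# Hypothetical pooled tables: A vs B (direct), plus A vs C and B vs C for the indirect route.
d_ab, se_ab = log_or(30, 100, 45, 100)
d_ac, se_ac = log_or(30, 100, 50, 100)
d_bc, se_bc = log_or(45, 100, 50, 100)

indirect = d_ac - d_bc                  # Bucher-style adjusted indirect comparison
se_ind = np.hypot(se_ac, se_bc)
inconsistency = d_ab - indirect         # difference in log odds ratio, as in the paper
se_inc = np.hypot(se_ab, se_ind)
z = inconsistency / se_inc
print(round(inconsistency, 3), round(2 * stats.norm.sf(abs(z)), 3))
```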

  2. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). Given the Daubert standard and the need for improvements in forensic science, new statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model that describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provides regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data--to assure their quality and to show the importance of checking them carefully prior to conducting the statistical tests--and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models.

  3. Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study

    PubMed Central

    Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G

    2011-01-01

    Objective To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Design Meta-epidemiological study based on sample of meta-analyses of randomised controlled trials. Data sources Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Main outcome measure Inconsistency measured by the difference in the log odds ratio between the direct and indirect methods. Results The study included 112 independent trial networks (including 1552 trials with 478 775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Conclusions Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence. PMID:21846695

  4. Towards accurate modelling of galaxy clustering on small scales: testing the standard ΛCDM + halo model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-07-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter haloes. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the `accurate' regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard Λ cold dark matter (ΛCDM) + halo model against the clustering of Sloan Digital Sky Survey (SDSS) seventh data release (DR7) galaxies. Specifically, we use the projected correlation function, group multiplicity function, and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir haloes) matches the clustering of low-luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the `standard' halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  5. "Plateau"-related summary statistics are uninformative for comparing working memory models.

    PubMed

    van den Berg, Ronald; Ma, Wei Ji

    2014-10-01

    Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon (Ma, Husain, & Bays, Nature Neuroscience 17, 347-356, 2014). Zhang and Luck (Nature 453(7192), 233-235, 2008) and Anderson, Vogel, and Awh (Attention, Perception, & Psychophysics 74(5), 891-910, 2011) noticed that as more items need to be remembered, "memory noise" seems to first increase and then reach a "stable plateau." They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided at most 0.15 % of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials were required to achieve 99 % correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. Therefore, at realistic numbers of trials, plateau-related summary statistics are highly unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (Attention, Perception, & Psychophysics 74(5), 891-910, 2011), we found that the evidence in the summary statistics was at most 0.12 % of the evidence in the raw data and far too weak to warrant any conclusions. The evidence in the raw data, in fact, strongly favored the slotless model. These findings call into question claims about working memory that are based on summary statistics.

  6. Evaluation of oral hygiene products: science is true; don't be misled by the facts.

    PubMed

    Addy, M; Moran, J M

    1997-10-01

    Most people in industrialized countries use oral hygiene products. When an oral health benefit is expected, it is important that sufficient scientific evidence exist to support such claims. Ideally, data should be cumulative, derived from studies in vitro and in vivo. The data should be available to the profession for evaluation by publication in refereed scientific journals. Terms and phrases require clarification, and claims made by implication or derived by inference must be avoided. Similarity in products is not necessarily proof per se of efficacy. Studies in vitro and in vivo should follow the basic principles of scientific research. Studies must be ethical, avoid bias and be suitably controlled. A choice of controls will vary depending on whether an agent or a whole product is evaluated and the development stage of a formulation. Where appropriate, new products should be compared with products already available and used by the general public. Conformity with the guidelines for good clinical practice appears to be a useful way of validating studies and a valuable guide to the profession. Studies should be designed with sufficient power to detect statistically significant differences if these exist. However, consideration must be given to the clinical significance of statistically significant differences between formulations since these are not necessarily the same. Studies in vitro provide supportive data but extrapolation to clinical effect is difficult and even misleading, and such data should not stand alone as proof of efficacy of a product. Short-term studies in vivo provide useful information, particularly at the development stage. Ideally, however, products should be proved effective when used in the circumstances for which they are developed. Nevertheless, a variety of variables influence the outcome of home-use studies, and their influence cannot usually be calculated. Although rarely considered, the cost-benefit ratio of some oral hygiene products needs to be considered.

  7. An evaluation of the costs and consequences of Children Community Nursing teams.

    PubMed

    Hinde, Sebastian; Allgar, Victoria; Richardson, Gerry; Spiers, Gemma; Parker, Gillian; Birks, Yvonne

    2017-08-01

    Recent years have seen an increasing shift towards providing care in the community, epitomised by the role of Children's Community Nursing (CCN) teams. However, there have been few attempts to use robust evaluative methods to interrogate the impact of such services. This study sought to evaluate whether the reduction in secondary care costs resulting from the introduction of 2 CCN teams was sufficient to offset the additional cost of commissioning. Among the potential benefits of the CCN teams is a reduction in the burden placed on secondary care through the delivery of care at home; it is this potential reduction which is evaluated in this study via a 2-part analytical method. Firstly, an interrupted time series analysis used Hospital Episode Statistics data to interrogate any change in total paediatric bed days as a result of the introduction of the 2 teams. Secondly, a costing analysis compared the cost savings from any reduction in total bed days with the cost of commissioning the teams. This study used a retrospective longitudinal study design as part of the transforming children's community services trial, which was conducted between June 2012 and June 2015. A reduction in hospital activity after introduction of the 2 nursing teams was found (9634 and 8969 fewer bed days), but this did not reach statistical significance. The resultant cost saving to the National Health Service was less than the cost of employing the teams. The study represents an important first step in understanding the role of such teams as a means of providing a high quality of paediatric care in an era of limited resource. While the cost saving from released paediatric bed days was not sufficient to demonstrate cost-effectiveness, the analysis does not incorporate wider measures of health care utilisation and nonmonetary benefits resulting from the CCN teams. © 2017 John Wiley & Sons, Ltd.
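
    A minimal segmented-regression sketch of the interrupted time series analysis mentioned above, assuming monthly bed-day counts with a level change and a slope change at the month the team is introduced. The simulated counts and the plain OLS model (no seasonality or autocorrelation adjustment) are illustrative simplifications of the study's approach.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)

# Hypothetical monthly paediatric bed days: 24 months before and 24 after the CCN team starts.
months = np.arange(48)
post = (months >= 24).astype(float)           # level-change indicator
time_since = np.clip(months - 24, 0, None)    # slope-change term
bed_days = (900 - 2.0 * months - 40.0 * post - 1.5 * time_since
            + rng.normal(0.0, 60.0, size=48))

# Segmented regression: baseline trend, step change, and trend change at the interruption.
X = sm.add_constant(np.column_stack([months, post, time_since]))
fit = sm.OLS(bed_days, X).fit()
print(fit.params.round(1))      # [intercept, pre-trend, level change, trend change]
print(fit.pvalues.round(3))
```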

  8. Contribution of Apollo lunar photography to the establishment of selenodetic control

    NASA Technical Reports Server (NTRS)

    Dermanis, A.

    1975-01-01

    Among the various types of available data relevant to the establishment of geometric control on the moon, the only one covering significant portions of the lunar surface (20%) with sufficient information content is lunar photography taken in the proximity of the moon from lunar orbiters. The idea of free geodetic networks is introduced as a tool for the statistical comparison of the geometric aspects of the various data used. Methods were developed for updating the statistics of the observations and the a priori parameter estimates to obtain statistically consistent solutions by means of the optimum relative weighting concept.

  9. Bayesian Hierarchical Random Effects Models in Forensic Science.

    PubMed

    Aitken, Colin G G

    2018-01-01

    Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well developed and have become so widespread that it is timely to try to provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.

  10. Determining a patient's comfort in inquiring about healthcare providers' hand-washing behavior.

    PubMed

    Clare, Camille A; Afzal, Omara; Knapp, Kenneth; Viola, Deborah

    2013-06-01

    To determine whether a patient's level of assertiveness and other factors influence her comfort level in asking her provider to wash his or her hands. In this pilot study, we developed a survey to gather cross-sectional information on a variety of factors that might explain a patient's willingness to ask her health-care provider to wash his or her hands. Three primary predictor variables are analyzed: (1) patient assertiveness; (2) patient familiarity with her healthcare provider; and (3) whether the patient has observed hand-washing behavior. Fifty patients participated from the Obstetrics and Gynecology Department of Metropolitan Hospital Center. Less assertive patients are much less likely than assertive patients to ask physicians to wash hands (25% versus 68%; Fisher's exact test P = 0.0427). Among the 3 assertiveness questions included in the survey, the ability to ask physicians questions during visits is most strongly indicative of willingness to ask about hand washing. Familiarity with the names of regular health-care providers has a statistically significant impact on willingness to ask about hand washing. Evidence suggests that observing hand-washing behavior affects willingness to ask, but differences are not statistically significant. Results by socioeconomic status such as age, education, income, and race/ethnicity are inconclusive. A patient's level of assertiveness alone is not sufficient to determine her willingness to inquire about the hand-washing behavior of her provider. A high percentage of patients did not see their provider engaging in adequate hand-washing behavior. If patients feel comfortable enough with their provider to inquire about their care and to request hand washing, health outcomes can be improved by reducing the rates of health care-associated infections.
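
    The assertiveness comparison reported above (25% versus 68%, Fisher's exact test) can be reproduced in form with a small 2×2 table. The cell counts below are hypothetical values roughly consistent with those percentages, not the study's actual data, so the resulting p-value will not match the reported 0.0427.

```python
from scipy import stats

# Hypothetical 2x2 counts (would ask / would not ask) roughly matching 68% vs 25%.
table = [[17, 8],    # assertive patients
         [6, 18]]    # less assertive patients
odds_ratio, p = stats.fisher_exact(table)
print(round(odds_ratio, 2), round(p, 4))
```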

  11. Combining Shapley value and statistics to the analysis of gene expression data in children exposed to air pollution

    PubMed Central

    Moretti, Stefano; van Leeuwen, Danitsja; Gmuender, Hans; Bonassi, Stefano; van Delft, Joost; Kleinjans, Jos; Patrone, Fioravante; Merlo, Domenico Franco

    2008-01-01

    Background: In gene expression analysis, statistical tests for differential gene expression provide lists of candidate genes having, individually, a sufficiently low p-value. However, the interpretation of each single p-value within complex systems involving several interacting genes is problematic. In parallel, in the last sixty years, game theory has been applied to political and social problems to assess the power of interacting agents in forcing a decision and, more recently, to represent the relevance of genes in response to certain conditions. Results: In this paper we introduce a bootstrap procedure to test the null hypothesis that each gene has the same relevance between two conditions, where the relevance is represented by the Shapley value of a particular coalitional game defined on a microarray data-set. This method, called Comparative Analysis of Shapley value (CASh for short), is applied to data concerning the gene expression in children differentially exposed to air pollution. The results provided by CASh are compared with the results from a parametric statistical test for testing differential gene expression. Both the lists of genes provided by CASh and by the t-test are informative enough to discriminate exposed subjects on the basis of their gene expression profiles. While many genes are selected in common by CASh and the parametric test, it turns out that the biological interpretation of the differences between these two selections is more interesting, suggesting a different interpretation of the main biological pathways in gene expression regulation for exposed individuals. A simulation study suggests that CASh offers more power than the t-test for the detection of differential gene expression variability. Conclusion: CASh is successfully applied to gene expression analysis of a data-set where the joint expression behavior of genes may be critical to characterize the expression response to air pollution. We demonstrate a synergistic effect between coalitional games and statistics that resulted in a selection of genes with a potential impact in the regulation of complex pathways. PMID:18764936

  12. [Lymphocytic infiltration in uveal melanoma].

    PubMed

    Sach, J; Kocur, J

    1993-11-01

    Following our observation of lymphocytic infiltration in uveal melanomas, we present a theoretical review of this interesting topic. Due to the relatively low incidence of this feature, we do not yet have a sufficiently large collection of cases to present statistically significant conclusions.

  13. How Statisticians Speak Risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redus, K.S.

    2007-07-01

    The foundation of statistics deals with (a) how to measure and collect data and (b) how to identify models using estimates of statistical parameters derived from the data. Risk is a term used by the statistical community and those that employ statistics to express the results of a statistically based study. Statistical risk is represented as a probability that, for example, a statistical model is sufficient to describe a data set; but risk is also interpreted as a measure of worth of one alternative when compared to another. The common thread of any risk-based problem is the combination of (a) the chance an event will occur with (b) the value of the event. This paper presents an introduction to, and some examples of, statistical risk-based decision making from a quantitative, visual, and linguistic perspective. This should help in understanding areas of radioactive waste management that can be suitably expressed using statistical risk and vice versa. (authors)

  14. POET: POlarimeters for Energetic Transients

    NASA Technical Reports Server (NTRS)

    Hill, J. E.; McConnell, M. L.; Bloser, P.; Legere, J.; Macri, J.; Ryan, J.; Barthelmy, S.; Angelini, L.; Sakamoto, T.; Black, J. K.; et al.

    2008-01-01

    POET (Polarimeters for Energetic Transients) is a Small Explorer mission concept proposed to NASA in January 2008. The principal scientific goal of POET is to measure GRB polarization between 2 and 500 keV. The payload consists of two wide FoV instruments: a Low Energy Polarimeter (LEP) capable of polarization measurements in the energy range from 2-15 keV and a high energy polarimeter (Gamma-Ray Polarimeter Experiment - GRAPE) that will measure polarization in the 60-500 keV energy range. Spectra will be measured from 2 keV up to 1 MeV. The POET spacecraft provides a zenith-pointed platform for maximizing the exposure to deep space. Spacecraft rotation will provide a means of effectively dealing with systematics in the polarization response. POET will provide sufficient sensitivity and sky coverage to measure statistically significant polarization for up to 100 GRBs in a two-year mission. Polarization data will also be obtained for solar flares, pulsars and other sources of astronomical interest.

  15. PoET: Polarimeters for Energetic Transients

    NASA Technical Reports Server (NTRS)

    McConnell, Mark; Barthelmy, Scott; Hill, Joanne

    2008-01-01

    This presentation focuses on PoET (Polarimeters for Energetic Transients): a Small Explorer mission concept proposed to NASA in January 2008. The principal scientific goal of POET is to measure GRB polarization between 2 and 500 keV. The payload consists of two wide FoV instruments: a Low Energy Polarimeter (LEP) capable of polarization measurements in the energy range from 2-15 keV and a high energy polarimeter (Gamma-Ray Polarimeter Experiment - GRAPE) that will measure polarization in the 60-500 keV energy range. Spectra will be measured from 2 keV up to 1 MeV. The PoET spacecraft provides a zenith-pointed platform for maximizing the exposure to deep space. Spacecraft rotation will provide a means of effectively dealing with systematics in the polarization response. PoET will provide sufficient sensitivity and sky coverage to measure statistically significant polarization for up to 100 GRBs in a two-year mission. Polarization data will also be obtained for solar flares, pulsars and other sources of astronomical interest.

  16. Specious causal attributions in the social sciences: the reformulated stepping-stone theory of heroin use as exemplar.

    PubMed

    Baumrind, D

    1983-12-01

    The claims based on causal models employing either statistical or experimental controls are examined and found to be excessive when applied to social or behavioral science data. An exemplary case, in which strong causal claims are made on the basis of a weak version of the regularity model of cause, is critiqued. O'Donnell and Clayton claim that in order to establish that marijuana use is a cause of heroin use (their "reformulated stepping-stone" hypothesis), it is necessary and sufficient to demonstrate that marijuana use precedes heroin use and that the statistically significant association between the two does not vanish when the effects of other variables deemed to be prior to both of them are removed. I argue that O'Donnell and Clayton's version of the regularity model is not sufficient to establish cause and that the planning of social interventions both presumes and requires a generative rather than a regularity causal model. Causal modeling using statistical controls is of value when it compels the investigator to make explicit and to justify a causal explanation but not when it is offered as a substitute for a generative analysis of causal connection.

  17. Product plots.

    PubMed

    Wickham, Hadley; Hofmann, Heike

    2011-12-01

    We propose a new framework for visualising tables of counts, proportions and probabilities. We call our framework product plots, alluding to the computation of area as a product of height and width, and the statistical concept of generating a joint distribution from the product of conditional and marginal distributions. The framework, with extensions, is sufficient to encompass over 20 visualisations previously described in the fields of statistical graphics and infovis, including bar charts, mosaic plots, treemaps, equal-area plots and fluctuation diagrams. © 2011 IEEE
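
    A minimal numerical sketch of the product decomposition behind these plots is given below: each rectangle's width is a marginal probability, its height a conditional probability, and its area therefore the joint probability. The toy counts are invented; this is not the authors' implementation.

```python
# Product-plot decomposition on a toy 2x2 table: width encodes the
# marginal P(X), height the conditional P(Y|X), area the joint P(X, Y).
import numpy as np

counts = np.array([[20, 10],     # rows: X = x1, x2 ; columns: Y = y1, y2
                   [15, 55]], dtype=float)

joint = counts / counts.sum()
marginal_x = joint.sum(axis=1)                        # rectangle widths
conditional_y_given_x = joint / marginal_x[:, None]   # rectangle heights

for i, x in enumerate(["x1", "x2"]):
    for j, y in enumerate(["y1", "y2"]):
        w = marginal_x[i]
        h = conditional_y_given_x[i, j]
        print(f"cell ({x},{y}): width={w:.2f} height={h:.2f} "
              f"area={w*h:.2f} joint={joint[i, j]:.2f}")
```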

  18. Global ensemble texture representations are critical to rapid scene perception.

    PubMed

    Brady, Timothy F; Shafer-Skelton, Anna; Alvarez, George A

    2017-06-01

    Traditionally, recognizing the objects within a scene has been treated as a prerequisite to recognizing the scene itself. However, research now suggests that the ability to rapidly recognize visual scenes could be supported by global properties of the scene itself rather than the objects within the scene. Here, we argue for a particular instantiation of this view: That scenes are recognized by treating them as a global texture and processing the pattern of orientations and spatial frequencies across different areas of the scene without recognizing any objects. To test this model, we asked whether there is a link between how proficient individuals are at rapid scene perception and how proficiently they represent simple spatial patterns of orientation information (global ensemble texture). We find a significant and selective correlation between these tasks, suggesting a link between scene perception and spatial ensemble tasks but not nonspatial summary statistics. In a second and third experiment, we additionally show that global ensemble texture information is not only associated with scene recognition, but that preserving only global ensemble texture information from scenes is sufficient to support rapid scene perception; however, preserving the same information is not sufficient for object recognition. Thus, global ensemble texture alone is sufficient to allow activation of scene representations but not object representations. Together, these results provide evidence for a view of scene recognition based on global ensemble texture rather than a view based purely on objects or on nonspatially localized global properties. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. The role of reference in cross-situational word learning.

    PubMed

    Wang, Felix Hao; Mintz, Toben H

    2018-01-01

    Word learning involves massive ambiguity, since in a particular encounter with a novel word, there are an unlimited number of potential referents. One proposal for how learners surmount the problem of ambiguity is that learners use cross-situational statistics to constrain the ambiguity: When a word and its referent co-occur across multiple situations, learners will associate the word with the correct referent. Yu and Smith (2007) propose that these co-occurrence statistics are sufficient for word-to-referent mapping. Alternative accounts hold that co-occurrence statistics alone are insufficient to support learning, and that learners are further guided by knowledge that words are referential (e.g., Waxman & Gelman, 2009). However, no behavioral word learning studies we are aware of explicitly manipulate subjects' prior assumptions about the role of the words in the experiments in order to test the influence of these assumptions. In this study, we directly test whether, when faced with referential ambiguity, co-occurrence statistics are sufficient for word-to-referent mappings in adult word-learners. Across a series of cross-situational learning experiments, we varied the degree to which there was support for the notion that the words were referential. At the same time, the statistical information about the words' meanings was held constant. When we overrode support for the notion that words were referential, subjects failed to learn the word-to-referent mappings, but otherwise they succeeded. Thus, cross-situational statistics were useful only when learners had the goal of discovering mappings between words and referents. We discuss the implications of these results for theories of word learning in children's language acquisition. Copyright © 2017 Elsevier B.V. All rights reserved.
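
    The co-occurrence bookkeeping at the heart of cross-situational learning can be sketched in a few lines: tally how often each word appears with each candidate referent across ambiguous scenes and map each word to its most frequent companion. The words and referents below are invented placeholders, not the experimental stimuli.

```python
# Toy cross-situational learner: count word-referent co-occurrences
# across ambiguous scenes and map each word to its most frequent referent.
from collections import defaultdict

# Each trial pairs the words heard with the set of candidate referents in view.
trials = [
    ({"bosa", "gasser"}, {"DOG", "BALL"}),
    ({"bosa", "manu"},   {"DOG", "CUP"}),
    ({"gasser", "manu"}, {"BALL", "CUP"}),
]

cooccurrence = defaultdict(lambda: defaultdict(int))
for words, referents in trials:
    for w in words:
        for r in referents:
            cooccurrence[w][r] += 1

for word, refs in cooccurrence.items():
    best = max(refs, key=refs.get)
    print(word, "->", best, dict(refs))
```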

  20. Dental health care providers' views on child physical abuse in Malaysia.

    PubMed

    Hussein, A S; Ahmad, R; Ibrahim, N; Yusoff, A; Ahmad, D

    2016-10-01

    To assess the knowledge, attitudes and experience of a group of Malaysian dental health care providers regarding child physical abuse (CPA) cases in terms of frequency of occurrence, diagnosis, risk factors and reporting. A questionnaire was distributed to all dental health care providers attending a national paediatric dentistry conference in Kuantan, Malaysia, collecting demographic variables, knowledge, attitudes and experience concerning CPA, its risk factors and the reasons for not reporting abuse cases. Descriptive statistics and bivariate analyses were performed. A 5 % level of statistical significance was applied for the analyses (p ≤ 0.05). The response rate was 74.7 %. Half of the respondents (52.8 %) stated that the occurrence of CPA is common in Malaysia. Full agreement between dental health care providers was not reached concerning the identification of signs of CPA and its risk factors. Although 83.3 % were aware that reporting CPA is a legal requirement in Malaysia, only 14.8 % had reported such cases. Lack of an adequate history was the main reason for not reporting. Nearly two-thirds of the respondents (62 %) indicated that they had not received sufficient information about CPA and were willing to be educated on how to diagnose and report child abuse cases (81.5 % and 78.7 %, respectively). There were considerable disparities in respondents' knowledge and attitudes regarding the occurrence, signs of suspected cases, risk factors and reporting of CPA. Despite being aware of such cases, only a handful were reported. Enhancement of the education of Malaysian dental health care providers on recognising and reporting CPA is recommended.

  1. A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr

    We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.

  2. Validity criteria for Fermi's golden rule scattering rates applied to metallic nanowires.

    PubMed

    Moors, Kristof; Sorée, Bart; Magnus, Wim

    2016-09-14

    Fermi's golden rule underpins the investigation of mobile carriers propagating through various solids, being a standard tool to calculate their scattering rates. As such, it provides a perturbative estimate under the implicit assumption that the effect of the interaction Hamiltonian which causes the scattering events is sufficiently small. To check the validity of this assumption, we present a general framework to derive simple validity criteria in order to assess whether the scattering rates can be trusted for the system under consideration, given its statistical properties such as average size, electron density, impurity density et cetera. We derive concrete validity criteria for metallic nanowires with conduction electrons populating a single parabolic band subjected to different elastic scattering mechanisms: impurities, grain boundaries and surface roughness.
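
    For reference, the scattering rate whose perturbative validity the paper examines is the standard textbook golden-rule expression (generic notation, not necessarily the paper's):

```latex
\Gamma_{i \to f} \,=\, \frac{2\pi}{\hbar}\,\bigl|\langle f \mid H' \mid i \rangle\bigr|^{2}\,\rho(E_f)
```

    where H' is the interaction Hamiltonian responsible for the scattering and \rho(E_f) is the density of final states; the derivation assumes the perturbation is weak, which is precisely the assumption the validity criteria are designed to check.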

  3. A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit

    DOE PAGES

    Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr; ...

    2017-06-07

    We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.

  4. A model of the human observer and decision maker

    NASA Technical Reports Server (NTRS)

    Wewerinke, P. H.

    1981-01-01

    The decision process is described in terms of classical sequential decision theory by considering the hypothesis that an abnormal condition has occurred by means of a generalized likelihood ratio test. For this, a sufficient statistic is provided by the innovation sequence, which is the result of the perception and information-processing submodel of the human observer. On the basis of only two model parameters, the model predicts the decision speed/accuracy trade-off and various attentional characteristics. A preliminary test of the model for single-variable failure detection tasks resulted in a very good fit to the experimental data. In a formal validation program, a variety of multivariable failure detection tasks was investigated and the predictive capability of the model was demonstrated.
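
    As a hedged, simplified stand-in for the decision stage described above, the sketch below runs a Wald-style sequential probability ratio test on a stream of Gaussian innovation-like values: the cumulative log-likelihood ratio is compared against two thresholds until a decision is reached. The parameters and signal are invented; this is not Wewerinke's actual observer model.

```python
# Simplified sequential decision sketch: accumulate the log-likelihood
# ratio of "failure" (mean shift mu1) versus "normal" (zero mean) over a
# Gaussian innovation-like sequence and stop at Wald's thresholds.
import numpy as np

rng = np.random.default_rng(0)
mu1, sigma = 0.8, 1.0                 # hypothesized shift under failure (toy values)
alpha, beta = 0.01, 0.05              # target error rates (toy values)
upper = np.log((1 - beta) / alpha)    # declare "abnormal"
lower = np.log(beta / (1 - alpha))    # declare "normal"

innovations = rng.normal(loc=mu1, scale=sigma, size=200)  # a failure truly present

llr = 0.0
for t, nu in enumerate(innovations, start=1):
    llr += (mu1 * nu - 0.5 * mu1 ** 2) / sigma ** 2   # Gaussian log-LR increment
    if llr >= upper:
        print(f"declare abnormal condition at sample {t}")
        break
    if llr <= lower:
        print(f"declare normal condition at sample {t}")
        break
```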

  5. M.S.L.A.P. Modular Spectral Line Analysis Program documentation

    NASA Technical Reports Server (NTRS)

    Joseph, Charles L.; Jenkins, Edward B.

    1991-01-01

    MSLAP is a software package for analyzing spectra, providing the basic structure to identify spectral features, to make quantitative measurements of these features, and to store the measurements for convenient access. MSLAP can be used to measure not only the zeroth moment (equivalent width) of a profile, but also the first and second moments. Optical depths and the corresponding column densities across the profile can be measured as well for sufficiently high resolution data. The software was developed for an interactive, graphical analysis in which the computer carries most of the computational and data-organizational burden and the investigator is responsible only for the judgement decisions. It employs sophisticated statistical techniques for determining the best polynomial fit to the continuum and for calculating the uncertainties.
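
    The profile moments mentioned above can be sketched directly: the equivalent width is the zeroth moment of the residual absorption, the centroid its first moment, and the squared line width its second moment. The synthetic Gaussian absorption line below is purely illustrative and is not MSLAP code.

```python
# Zeroth, first and second moments of a synthetic absorption-line profile.
import numpy as np

wavelength = np.linspace(1300.0, 1302.0, 400)           # angstroms (synthetic grid)
continuum = 1.0
line = continuum - 0.6 * np.exp(-0.5 * ((wavelength - 1301.0) / 0.05) ** 2)

depth = 1.0 - line / continuum                          # residual absorption
W = np.trapz(depth, wavelength)                         # equivalent width (0th moment)
centroid = np.trapz(wavelength * depth, wavelength) / W # 1st moment
width2 = np.trapz((wavelength - centroid) ** 2 * depth, wavelength) / W  # 2nd moment

print(f"equivalent width = {W:.4f} A, centroid = {centroid:.3f} A, "
      f"rms width = {np.sqrt(width2):.4f} A")
```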

  6. Empirical best linear unbiased prediction method for small areas with restricted maximum likelihood and bootstrap procedure to estimate the average of household expenditure per capita in Banjar Regency

    NASA Astrophysics Data System (ADS)

    Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho

    2017-03-01

    So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are limited to the district level. At smaller area levels the sample sizes are insufficient, so direct estimation of poverty indicators produces high standard errors, and analyses based on such estimates are unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One approach often used for this purpose is Small Area Estimation (SAE). Among the many SAE methods is Empirical Best Linear Unbiased Prediction (EBLUP). EBLUP with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom incurred by estimating β with its estimate β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean square error) in order to compare the accuracy of the EBLUP method with that of the direct estimation method. Results show that the EBLUP method reduces the MSE in small area estimation.

  7. The impact of capacity growth in national telecommunications networks.

    PubMed

    Lord, Andrew; Soppera, Andrea; Jacquet, Arnaud

    2016-03-06

    This paper discusses both UK-based and global Internet data bandwidth growth, beginning with historical data for the BT network. We examine the time variations in consumer behaviour and how this is statistically aggregated into larger traffic loads on national core fibre communications networks. The random nature of consumer Internet behaviour, where very few consumers require maximum bandwidth simultaneously, provides the opportunity for a significant statistical gain. The paper looks at predictions for how this growth might continue over the next 10-20 years, giving estimates for the amount of bandwidth that networks should support in the future. The paper then explains how national networks are designed to accommodate these traffic levels, and the various network roles, including access, metro and core, are described. The physical layer network is put into the context of how the packet and service layers are designed and the applications and location of content are also included in an overall network overview. The specific role of content servers in alleviating core network traffic loads is highlighted. The status of the relevant transmission technologies in the access, metro and core is given, showing that these technologies, with adequate research, should be sufficient to provide bandwidth for consumers in the next 10-20 years. © 2016 The Author(s).
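
    The statistical gain from aggregation can be illustrated with a toy on/off traffic model: because few users peak simultaneously, the peak of the aggregate is far smaller than the sum of individual peaks. The user count, activity probability and per-user rate below are invented for illustration.

```python
# Toy illustration of statistical multiplexing: compare the sum of
# individual peak demands with the peak of the aggregate demand.
import numpy as np

rng = np.random.default_rng(1)
n_users, n_timeslots = 1000, 10_000
peak_rate = 20.0                       # Mbit/s per active user (illustrative)
p_active = 0.05                        # probability a user is active in a slot

active = rng.random((n_timeslots, n_users)) < p_active
aggregate = active.sum(axis=1) * peak_rate

sum_of_peaks = n_users * peak_rate
aggregate_peak = aggregate.max()
print(f"sum of individual peaks : {sum_of_peaks:.0f} Mbit/s")
print(f"observed aggregate peak : {aggregate_peak:.0f} Mbit/s")
print(f"statistical gain        : {sum_of_peaks / aggregate_peak:.1f}x")
```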

  8. [Application of statistics on chronic-diseases-relating observational research papers].

    PubMed

    Hong, Zhi-heng; Wang, Ping; Cao, Wei-hua

    2012-09-01

    To study the application of statistics in chronic-disease-related observational research papers recently published in Chinese Medical Association journals with an influence index above 0.5. Using a self-developed criterion, two investigators independently assessed the application of statistics in these journals; disagreements were resolved through discussion. A total of 352 papers from 6 journals, including the Chinese Journal of Epidemiology, Chinese Journal of Oncology, Chinese Journal of Preventive Medicine, Chinese Journal of Cardiology, Chinese Journal of Internal Medicine and Chinese Journal of Endocrinology and Metabolism, were reviewed. The rates of clear statement of research objectives, target audience, sample issues, inclusion criteria and variable definitions were 99.43%, 98.57%, 95.43%, 92.86% and 96.87%, respectively. The rates of correct description of quantitative and qualitative data were 90.94% and 91.46%, respectively. The rates of correctly expressing the results of statistical inference methods related to quantitative data, qualitative data and modeling were 100%, 95.32% and 87.19%, respectively. In 89.49% of the papers, the conclusions responded directly to the research objectives. However, 69.60% of the papers did not state the exact name of the statistical study design being used, and 11.14% lacked a statement of the exclusion criteria. Only 5.16% of the papers clearly explained the sample size estimation, and only 24.21% clearly described the variable value assignment. The rate of describing how the statistical analysis and database management were conducted was only 24.15%. In addition, 18.75% of the papers did not express the statistical inference methods sufficiently, and a quarter did not use 'standardization' appropriately. As for statistical inference, the prerequisites of the statistical tests were described in only 24.12% of the papers, while 9.94% did not employ the statistical inference method that should have been used. The main deficiencies in the application of statistics in chronic-disease-related observational research papers were as follows: lack of sample-size determination, insufficient description of variable value assignment, statistical methods not introduced clearly or properly, and lack of consideration of the prerequisites for statistical inference.

  9. Space station integrated wall damage and penetration damage control. Task 5: Space debris measurement, mapping and characterization system

    NASA Technical Reports Server (NTRS)

    Lempriere, B. M.

    1987-01-01

    The procedures and results of a study of a conceptual system for measuring the debris environment on the space station is discussed. The study was conducted in two phases: the first consisted of experiments aimed at evaluating location of impact through panel response data collected from acoustic emission sensors; the second analyzed the available statistical description of the environment to determine the probability of the measurement system producing useful data, and analyzed the results of the previous tests to evaluate the accuracy of location and the feasibility of extracting impactor characteristics from the panel response. The conclusions were that for one panel the system would not be exposed to any event, but that the entire Logistics Module would provide a modest amount of data. The use of sensors with higher sensitivity than those used in the tests could be advantageous. The impact location could be found with sufficient accuracy from panel response data. The waveforms of the response were shown to contain information on the impact characteristics, but the data set did not span a sufficient range of the variables necessary to evaluate the feasibility of extracting the information.

  10. Living systematic reviews: 3. Statistical methods for updating meta-analyses.

    PubMed

    Simmonds, Mark; Salanti, Georgia; McKenzie, Joanne; Elliott, Julian

    2017-11-01

    A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence, standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two methods (the law of the iterated logarithm and the Shuster method) control primarily for inflation of the type I error, and two other methods (trial sequential analysis and sequential meta-analysis) control for type I and type II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs. Copyright © 2017 Elsevier Inc. All rights reserved.
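
    The inflation of the type I error under repeated updating, which motivates the four methods above, is easy to demonstrate by simulation: re-testing a cumulative fixed-effect meta-analysis at the conventional 5% level after every new study yields far more than 5% false-positive reviews when the true effect is zero. The sketch below is generic and does not implement any of the four corrections.

```python
# Simulate living meta-analyses with no true effect: studies arrive one at
# a time, the pooled fixed-effect z is re-tested after every update, and
# we record how often at least one update crosses |z| > 1.96.
import numpy as np

rng = np.random.default_rng(2)
n_reviews, n_updates = 2000, 25
se = 0.2                                   # common standard error per study (toy)

false_positives = 0
for _ in range(n_reviews):
    effects = rng.normal(0.0, se, size=n_updates)   # true effect is zero
    weights = np.full(n_updates, 1.0 / se ** 2)     # inverse-variance weights
    cum_est = np.cumsum(weights * effects) / np.cumsum(weights)
    cum_se = 1.0 / np.sqrt(np.cumsum(weights))
    z = cum_est / cum_se
    if np.any(np.abs(z) > 1.96):
        false_positives += 1

print("empirical type I error over", n_updates, "updates:",
      false_positives / n_reviews)
```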

  11. Statistical Learning in a Natural Language by 8-Month-Old Infants

    PubMed Central

    Pelucchi, Bruna; Hay, Jessica F.; Saffran, Jenny R.

    2013-01-01

    Numerous studies over the past decade support the claim that infants are equipped with powerful statistical language learning mechanisms. The primary evidence for statistical language learning in word segmentation comes from studies using artificial languages, continuous streams of synthesized syllables that are highly simplified relative to real speech. To what extent can these conclusions be scaled up to natural language learning? In the current experiments, English-learning 8-month-old infants’ ability to track transitional probabilities in fluent infant-directed Italian speech was tested (N = 72). The results suggest that infants are sensitive to transitional probability cues in unfamiliar natural language stimuli, and support the claim that statistical learning is sufficiently robust to support aspects of real-world language acquisition. PMID:19489896
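
    The statistic these studies rely on is the forward transitional probability, TP(x -> y) = frequency(xy) / frequency(x). A minimal computation over an invented syllable stream (not the Italian stimuli) looks like this:

```python
# Forward transitional probabilities over a toy syllable stream.
from collections import Counter

stream = "bi du pa bi du pa mo ka se bi du pa mo ka se".split()

bigrams = Counter(zip(stream, stream[1:]))
unigrams = Counter(stream[:-1])

for (x, y), n in sorted(bigrams.items()):
    print(f"TP({x} -> {y}) = {n}/{unigrams[x]} = {n / unigrams[x]:.2f}")
```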

  12. Statistical learning in a natural language by 8-month-old infants.

    PubMed

    Pelucchi, Bruna; Hay, Jessica F; Saffran, Jenny R

    2009-01-01

    Numerous studies over the past decade support the claim that infants are equipped with powerful statistical language learning mechanisms. The primary evidence for statistical language learning in word segmentation comes from studies using artificial languages, continuous streams of synthesized syllables that are highly simplified relative to real speech. To what extent can these conclusions be scaled up to natural language learning? In the current experiments, English-learning 8-month-old infants' ability to track transitional probabilities in fluent infant-directed Italian speech was tested (N = 72). The results suggest that infants are sensitive to transitional probability cues in unfamiliar natural language stimuli, and support the claim that statistical learning is sufficiently robust to support aspects of real-world language acquisition.

  13. Statistical significance of trace evidence matches using independent physicochemical measurements

    NASA Astrophysics Data System (ADS)

    Almirall, Jose R.; Cole, Michael; Furton, Kenneth G.; Gettinby, George

    1997-02-01

    A statistical approach to the significance of glass evidence is proposed using independent physicochemical measurements and chemometrics. Traditional interpretation of the significance of trace evidence matches or exclusions relies on qualitative descriptors such as 'indistinguishable from,' 'consistent with,' 'similar to,' etc. By performing physical and chemical measurements which are independent of one another, the significance of object exclusions or matches can be evaluated statistically. One of the problems with this approach is that the human brain is excellent at recognizing and classifying patterns and shapes but performs less well when that object is represented by a numerical list of attributes. Chemometrics can be employed to group similar objects using clustering algorithms and provide statistical significance in a quantitative manner. This approach is enhanced when population databases exist or can be created and the data in question can be evaluated given these databases. Since the selection of the variables used and their pre-processing can greatly influence the outcome, several different methods could be employed in order to obtain a more complete picture of the information contained in the data. Presently, we report on the analysis of glass samples using refractive index measurements and the quantitative analysis of the concentrations of the metals: Mg, Al, Ca, Fe, Mn, Ba, Sr, Ti and Zr. The extension of this general approach to fiber and paint comparisons is also discussed. This statistical approach should not replace the current interpretative approaches to trace evidence matches or exclusions but rather yields an additional quantitative measure. The lack of sufficient general population databases containing the needed physicochemical measurements and the potential for confusion arising from statistical analysis currently hamper this approach, and ways of overcoming these obstacles are presented.
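
    As a hedged sketch of the chemometric grouping step described above, the snippet below standardizes the refractive index and a few elemental concentrations for a handful of fragments and clusters them; all measurement values are invented, not casework data.

```python
# Toy chemometric grouping of glass fragments: standardize RI and
# elemental concentrations, then cluster.  Values are invented.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

# columns: refractive index, Mg, Al, Ca, Fe (arbitrary units, hypothetical)
fragments = np.array([
    [1.5183, 2.1, 0.8, 8.5, 0.10],
    [1.5184, 2.0, 0.8, 8.6, 0.11],
    [1.5221, 3.5, 1.6, 7.2, 0.40],
    [1.5220, 3.4, 1.7, 7.1, 0.42],
    [1.5183, 2.2, 0.9, 8.4, 0.09],
])

X = StandardScaler().fit_transform(fragments)
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print("cluster labels:", labels)
```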

  14. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
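
    A hedged sketch of the equivalent-ensemble idea follows: segment one long record into equal, nominally independent sample records, form ensemble averages across segments at each within-segment time, and inspect how much those averages vary over time. The synthetic signal and segment sizes below are arbitrary choices for illustration, not the turbulence data of the paper.

```python
# Equivalent-ensemble check of weak stationarity: segment a long record,
# form ensemble averages across segments at each within-segment time,
# and compare their spread with a single-record time average.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=200_000)             # stand-in for a long turbulence record

n_segments, seg_len = 100, 2000
segments = x[: n_segments * seg_len].reshape(n_segments, seg_len)

ensemble_mean = segments.mean(axis=0)    # one value per within-segment time
ensemble_var = segments.var(axis=0)

print("spread of ensemble means over time :", ensemble_mean.std())
print("spread of ensemble variances       :", ensemble_var.std())
print("time average of a single record    :", segments[0].mean())
```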

  15. Bi-PROF

    PubMed Central

    Gries, Jasmin; Schumacher, Dirk; Arand, Julia; Lutsik, Pavlo; Markelova, Maria Rivera; Fichtner, Iduna; Walter, Jörn; Sers, Christine; Tierling, Sascha

    2013-01-01

    The use of next-generation sequencing has expanded our view of whole mammalian methylome patterns. In particular, it provides genome-wide insight into local DNA methylation diversity at the single-nucleotide level and enables the examination of single chromosome sequence sections at sufficient statistical power. We describe a bisulfite-based sequence profiling pipeline, Bi-PROF, based on the 454 GS-FLX Titanium technology, which allows up to one million sequence stretches to be obtained at single-base-pair resolution without laborious subcloning. To illustrate the performance of the experimental workflow connected to a bioinformatics program pipeline (BiQ Analyzer HT), we present a test analysis set of 68 different epigenetic marker regions (amplicons) in five individual patient-derived xenograft tissue samples of colorectal cancer and one healthy colon epithelium sample as a control. After the 454 GS-FLX Titanium run, sequence read processing and sample decoding, the obtained alignments are quality controlled and statistically evaluated. Comprehensive methylation pattern interpretation (profiling), assessed by analyzing 10^2-10^4 sequence reads per amplicon, allows an unprecedentedly deep view of pattern formation and methylation marker heterogeneity in tissues affected by complex diseases such as cancer. PMID:23803588

  16. Energy Efficiency Optimization in Relay-Assisted MIMO Systems With Perfect and Statistical CSI

    NASA Astrophysics Data System (ADS)

    Zappone, Alessio; Cao, Pan; Jorswieck, Eduard A.

    2014-01-01

    A framework for energy-efficient resource allocation in a single-user, amplify-and-forward relay-assisted MIMO system is devised in this paper. Previous results in this area have focused on rate maximization or sum power minimization problems, whereas fewer results are available when bits/Joule energy efficiency (EE) optimization is the goal. The performance metric to optimize is the ratio between the system's achievable rate and the total consumed power. The optimization is carried out with respect to the source and relay precoding matrices, subject to QoS and power constraints. Such a challenging non-convex problem is tackled by means of fractional programming and alternating maximization algorithms, for various CSI assumptions at the source and relay. In particular, the scenarios of perfect CSI and those of statistical CSI for either the source-relay or the relay-destination channel are addressed. Moreover, sufficient conditions for beamforming optimality are derived, which is useful for simplifying the system design. Numerical results are provided to corroborate the validity of the theoretical findings.
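
    Energy efficiency is a ratio of rate to power, and a standard fractional-programming tool for such ratios is the Dinkelbach iteration. The sketch below applies it to a scalar toy problem, maximizing log2(1+p)/(p+Pc) over a transmit power p; it is a generic illustration of the technique, not the paper's precoder optimization.

```python
# Dinkelbach-style iteration for a toy energy-efficiency problem:
# maximize f(p)/g(p) with f(p) = log2(1+p) (rate) and g(p) = p + Pc
# (consumed power), over 0 <= p <= Pmax.  Toy values throughout.
import numpy as np
from scipy.optimize import minimize_scalar

Pc, Pmax = 1.0, 10.0
f = lambda p: np.log2(1.0 + p)
g = lambda p: p + Pc

lam = 0.0
for _ in range(30):
    # inner subproblem: maximize f(p) - lam * g(p) over the feasible range
    res = minimize_scalar(lambda p: -(f(p) - lam * g(p)),
                          bounds=(0.0, Pmax), method="bounded")
    p_star = res.x
    new_lam = f(p_star) / g(p_star)      # achieved efficiency
    if abs(new_lam - lam) < 1e-9:
        break
    lam = new_lam

print(f"optimal power ~ {p_star:.3f}, energy efficiency ~ {lam:.4f} bits/Joule")
```

    Each iteration solves the subtractive subproblem and updates the efficiency parameter to the achieved ratio, which is the usual way fractional objectives of this kind are reduced to a sequence of simpler problems.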

  17. Using high-resolution variant frequencies to empower clinical genome interpretation.

    PubMed

    Whiffin, Nicola; Minikel, Eric; Walsh, Roddy; O'Donnell-Luria, Anne H; Karczewski, Konrad; Ing, Alexander Y; Barton, Paul J R; Funke, Birgit; Cook, Stuart A; MacArthur, Daniel; Ware, James S

    2017-10-01

    Purpose: Whole-exome and whole-genome sequencing have transformed the discovery of genetic variants that cause human Mendelian disease, but discriminating pathogenic from benign variants remains a daunting challenge. Rarity is recognized as a necessary, although not sufficient, criterion for pathogenicity, but frequency cutoffs used in Mendelian analysis are often arbitrary and overly lenient. Recent very large reference datasets, such as the Exome Aggregation Consortium (ExAC), provide an unprecedented opportunity to obtain robust frequency estimates even for very rare variants. Methods: We present a statistical framework for the frequency-based filtering of candidate disease-causing variants, accounting for disease prevalence, genetic and allelic heterogeneity, inheritance mode, penetrance, and sampling variance in reference datasets. Results: Using the example of cardiomyopathy, we show that our approach reduces by two-thirds the number of candidate variants under consideration in the average exome, without removing true pathogenic variants (false-positive rate < 0.001). Conclusion: We outline a statistically robust framework for assessing whether a variant is "too common" to be causative for a Mendelian disorder of interest. We present precomputed allele frequency cutoffs for all variants in the ExAC dataset.

  18. Correlation and agreement: overview and clarification of competing concepts and measures.

    PubMed

    Liu, Jinyuan; Tang, Wan; Chen, Guanqin; Lu, Yin; Feng, Changyong; Tu, Xin M

    2016-04-25

    Agreement and correlation are widely-used concepts that assess the association between variables. Although similar and related, they represent completely different notions of association. Assessing agreement between variables assumes that the variables measure the same construct, while correlation of variables can be assessed for variables that measure completely different constructs. This conceptual difference requires the use of different statistical methods, and when assessing agreement or correlation, the statistical method may vary depending on the distribution of the data and the interest of the investigator. For example, the Pearson correlation, a popular measure of correlation between continuous variables, is only informative when applied to variables that have linear relationships; it may be non-informative or even misleading when applied to variables that are not linearly related. Likewise, the intraclass correlation, a popular measure of agreement between continuous variables, may not provide sufficient information for investigators if the nature of poor agreement is of interest. This report reviews the concepts of agreement and correlation and discusses differences in the application of several commonly used measures.
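
    A small numerical illustration of the distinction: two raters whose scores differ by a constant offset are perfectly correlated yet agree poorly. The sketch below computes Pearson's r and a generic one-way intraclass correlation, ICC(1,1), from the usual ANOVA mean squares; the data are simulated, not drawn from the review.

```python
# Pearson correlation vs one-way intraclass correlation for two raters
# that differ by a constant offset: correlation is perfect, agreement poor.
import numpy as np

rng = np.random.default_rng(4)
true_scores = rng.normal(50, 2, size=30)
rater_a = true_scores
rater_b = true_scores + 5.0              # systematic offset of 5 points

r = np.corrcoef(rater_a, rater_b)[0, 1]

ratings = np.column_stack([rater_a, rater_b])   # n subjects x k raters
n, k = ratings.shape
subject_means = ratings.mean(axis=1)
grand_mean = ratings.mean()
ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
ms_within = np.sum((ratings - subject_means[:, None]) ** 2) / (n * (k - 1))
icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

print(f"Pearson r = {r:.3f}, ICC(1,1) = {icc1:.3f}")
```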

  19. Long-Term file activity patterns in a UNIX workstation environment

    NASA Technical Reports Server (NTRS)

    Gibson, Timothy J.; Miller, Ethan L.

    1998-01-01

    As mass storage technology becomes more affordable for sites smaller than supercomputer centers, understanding their file access patterns becomes crucial for developing systems to store rarely used data on tertiary storage devices such as tapes and optical disks. This paper presents a new way to collect and analyze file system statistics for UNIX-based file systems. The collection system runs in user-space and requires no modification of the operating system kernel. The statistics package provides details about file system operations at the file level: creations, deletions, modifications, etc. The paper analyzes four months of file system activity on a university file system. The results confirm previously published results gathered from supercomputer file systems, but differ in several important areas. Files in this study were considerably smaller than those at supercomputer centers, and they were accessed less frequently. Additionally, the long-term creation rate on workstation file systems is sufficiently low so that all data more than a day old could be cheaply saved on a mass storage device, allowing the integration of time travel into every file system.

  20. Reverse shoulder arthroplasty glenoid fixation: is there a benefit in using four instead of two screws?

    PubMed

    James, Jaison; Allison, Mari A; Werner, Frederick W; McBride, Devin E; Basu, Niladri N; Sutton, Levi G; Nanavati, Vipul N

    2013-08-01

    To allow osseous integration to occur and thus provide long-term stability, initial glenoid baseplate fixation must be sufficiently rigid. A major contributing factor to initial rigid fixation is baseplate screw fixation. Current baseplate designs use a 4-screw fixation construct. However, recent literature suggests adequate fixation can be achieved with fewer than 4 screws. The purpose of the present study was to determine whether a 4-screw construct provides more baseplate stability than a 2-screw construct. A flat-backed glenoid baseplate with 4 screw hole options was implanted into 6 matched pairs of cadaver scapulas using standard surgical technique. Within each pair, 2 screws or 4 screws were implanted in a randomized fashion. A glenosphere was attached allowing cyclic loading in an inferior-to-superior direction and in an anterior-to-posterior direction. Baseplate motion was measured using 4 linear voltage displacement transducers evenly spaced around the glenosphere. There was no statistical difference in the average peak central displacements between fixation with 2 or 4 screws (P = .338). Statistically significant increases in average peak central displacement were found with increasing load (P < .001) and with repetitive loading (P < .002). This study demonstrates no statistical difference in baseplate motion between 2-screw and 4-screw constructs. Therefore, using fewer screws could potentially lead to a reduction in operative time, cost, and risk, with no significant negative effect on overall implant baseplate motion. Copyright © 2013 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.

  1. ADEQUACY OF VISUALLY CLASSIFIED PARTICLE COUNT STATISTICS FROM REGIONAL STREAM HABITAT SURVEYS

    EPA Science Inventory

    Streamlined sampling procedures must be used to achieve a sufficient sample size with limited resources in studies undertaken to evaluate habitat status and potential management-related habitat degradation at a regional scale. At the same time, these sampling procedures must achi...

  2. Quantifying the sources of variability in equine faecal egg counts: implications for improving the utility of the method.

    PubMed

    Denwood, M J; Love, S; Innocent, G T; Matthews, L; McKendrick, I J; Hillary, N; Smith, A; Reid, S W J

    2012-08-13

    The faecal egg count (FEC) is the most widely used means of quantifying the nematode burden of horses, and is frequently used in clinical practice to inform treatment and prevention. The statistical process underlying the FEC is complex, comprising a Poisson counting error process for each sample, compounded with an underlying continuous distribution of means between samples. Being able to quantify the sources of variability contributing to this distribution of means is a necessary step towards providing estimates of statistical power for future FEC and FECRT studies, and may help to improve the usefulness of the FEC technique by identifying and minimising unwanted sources of variability. Obtaining such estimates requires a hierarchical statistical model coupled with repeated FEC observations from a single animal over a short period of time. Here, we use this approach to provide the first comparative estimate of multiple sources of within-horse FEC variability. The results demonstrate that a substantial proportion of the observed variation in FEC between horses occurs as a result of variation in FEC within an animal, with the major sources being aggregation of eggs within faeces and variation in egg concentration between faecal piles. The McMaster procedure itself is associated with a comparatively small coefficient of variation, and is therefore highly repeatable when a sufficiently large number of eggs are observed to reduce the error associated with the counting process. We conclude that the variation between samples taken from the same animal is substantial, but can be reduced through the use of larger homogenised faecal samples. Estimates are provided for the coefficient of variation (cv) associated with each within-animal source of variability in observed FEC, allowing the usefulness of individual FEC to be quantified, and providing a basis for future FEC and FECRT studies. Copyright © 2012 Elsevier B.V. All rights reserved.
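
    The hierarchical structure described above can be mimicked with a toy simulation, gamma-distributed variation in egg concentration between sub-samples compounded with Poisson counting error, to show how the observed coefficient of variation falls as more eggs are counted. The variance components and counting sensitivities below are invented, not the paper's estimates.

```python
# Toy hierarchical simulation of faecal egg counts: between-sample
# variation (gamma) compounded with Poisson counting error.
import numpy as np

rng = np.random.default_rng(5)
true_mean_epg = 200.0          # eggs per gram for this horse (hypothetical)
cv_between = 0.5               # cv of concentration between sub-samples (hypothetical)

for sensitivity in (50.0, 25.0, 10.0):     # eggs per gram represented by one counted egg
    shape = 1.0 / cv_between ** 2
    concentration = rng.gamma(shape, true_mean_epg / shape, size=10_000)
    counts = rng.poisson(concentration / sensitivity)
    observed_epg = counts * sensitivity
    cv_obs = observed_epg.std() / observed_epg.mean()
    print(f"sensitivity {sensitivity:>4} epg/egg -> observed cv = {cv_obs:.2f}")
```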

  3. Statistical analysis of water-quality data containing multiple detection limits: S-language software for regression on order statistics

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2005-01-01

    Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
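
    A heavily simplified, single-detection-limit version of the ROS idea can be sketched in Python (the tools described in the record are for the S/R environment): regress the logs of the detected values on their normal quantiles and impute the censored observations from the fitted line. The multi-limit plotting-position machinery of the full method is omitted, so treat this only as a sketch of the concept.

```python
# Very simplified regression on order statistics (ROS) for one detection
# limit: fit log(detects) vs normal quantiles, impute the nondetects.
import numpy as np
from scipy.stats import norm

detection_limit = 1.0
detects = np.array([1.3, 1.8, 2.4, 3.1, 4.7, 6.2])   # measured values > DL (toy)
n_censored = 4                                        # values reported as "<1.0"
n = len(detects) + n_censored

# Weibull plotting positions; with a single DL the censored values
# occupy the lowest ranks.
pp = np.arange(1, n + 1) / (n + 1)
pp_censored, pp_detects = pp[:n_censored], pp[n_censored:]

slope, intercept = np.polyfit(norm.ppf(pp_detects), np.log(np.sort(detects)), 1)
imputed = np.exp(intercept + slope * norm.ppf(pp_censored))

full_sample = np.concatenate([imputed, detects])
print("imputed nondetects:", np.round(imputed, 3))
print(f"ROS mean = {full_sample.mean():.3f}, ROS sd = {full_sample.std(ddof=1):.3f}")
```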

  4. Information processing of motion in facial expression and the geometry of dynamical systems

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Eghbalnia, Hamid; McMenamin, Brenton W.

    2005-01-01

    An interesting problem in analysis of video data concerns design of algorithms that detect perceptually significant features in an unsupervised manner, for instance methods of machine learning for automatic classification of human expression. A geometric formulation of this genre of problems could be modeled with help of perceptual psychology. In this article, we outline one approach for a special case where video segments are to be classified according to expression of emotion or other similar facial motions. The encoding of realistic facial motions that convey expression of emotions for a particular person P forms a parameter space X_P whose study reveals the "objective geometry" for the problem of unsupervised feature detection from video. The geometric features and discrete representation of the space X_P are independent of subjective evaluations by observers. While the "subjective geometry" of X_P varies from observer to observer, levels of sensitivity and variation in perception of facial expressions appear to share a certain level of universality among members of similar cultures. Therefore, statistical geometry of invariants of X_P for a sample of population could provide effective algorithms for extraction of such features. In cases where frequency of events is sufficiently large in the sample data, a suitable framework could be provided to facilitate the information-theoretic organization and study of statistical invariants of such features. This article provides a general approach to encode motion in terms of a particular genre of dynamical systems and the geometry of their flow. An example is provided to illustrate the general theory.

  5. Innocent Until Proven Guilty

    ERIC Educational Resources Information Center

    Case, Catherine; Whitaker, Douglas

    2016-01-01

    In the criminal justice system, defendants accused of a crime are presumed innocent until proven guilty. Statistical inference in any context is built on an analogous principle: The null hypothesis--often a hypothesis of "no difference" or "no effect"--is presumed true unless there is sufficient evidence against it. In this…

  6. Spatial statistical network models for stream and river temperature in New England, USA

    EPA Science Inventory

    Watershed managers are challenged by the need for predictive temperature models with sufficient accuracy and geographic breadth for practical use. We described thermal regimes of New England rivers and streams based on a reduced set of metrics for the May–September growing ...

  7. Freeze-thaw stress of Alhydrogel ® alone is sufficient to reduce the immunogenicity of a recombinant hepatitis B vaccine containing native antigen.

    PubMed

    Clapp, Tanya; Munks, Michael W; Trivedi, Ruchit; Kompella, Uday B; Braun, LaToya Jones

    2014-06-24

    Preventing losses in vaccine potency due to accidental freezing has recently become a topic of interest for improving vaccines. All vaccines with aluminum-containing adjuvants are susceptible to such potency losses. Recent studies have described excipients that protect the antigen from freeze-induced inactivation, prevent adjuvant agglomeration and retain potency. Although these strategies have demonstrated success, they do not provide a mechanistic understanding of freeze-thaw (FT) induced potency losses. In the current study, we investigated how adjuvant frozen in the absence of antigen affects vaccine immunogenicity and whether preventing damage to the freeze-sensitive recombinant hepatitis B surface antigen (rHBsAg) was sufficient for maintaining vaccine potency. The final vaccine formulation or Alhydrogel® alone was subjected to three FT cycles. The vaccines were characterized for antigen adsorption, rHBsAg tertiary structure, particle size and charge, adjuvant elemental content and in-vivo potency. Particle agglomeration of either vaccine particles or adjuvant was observed following FT-stress. In vivo studies demonstrated no statistical differences in IgG responses between vaccines with FT-stressed adjuvant and no adjuvant. Adsorption of rHBsAg was achieved regardless of adjuvant treatment, suggesting that the similar responses were not due to soluble antigen in the frozen adjuvant-containing formulations. All vaccines with adjuvant, including the non-frozen controls, yielded similar, blue-shifted fluorescence emission spectra. Immune response differences could not be traced to differences in the tertiary structure of the antigen in the formulations. Zeta potential measurements and elemental content analyses suggest that FT-stress resulted in a significant chemical alteration of the adjuvant surface. These data provide evidence that protecting a freeze-labile antigen from subzero exposure is insufficient to maintain vaccine potency. Future studies should focus on adjuvant protection. To our knowledge, this is the first study to systematically investigate how FT-stress to adjuvant alone affects immunogenicity. It provides definitive evidence that this damage is sufficient to reduce vaccine potency. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Implementation and Assessment of Advanced Analog Vector-Matrix Processor

    NASA Technical Reports Server (NTRS)

    Gary, Charles K.; Bualat, Maria G.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    This paper discusses the design and implementation of an analog optical vector-matrix coprocessor with a throughput of 128 Mops for a personal computer. Vector-matrix calculations are inherently parallel, providing a promising domain for the use of optical calculators. However, to date, digital optical systems have proven too cumbersome to replace electronics, and analog processors have not demonstrated sufficient accuracy in large scale systems. The goal of the work described in this paper is to demonstrate a viable optical coprocessor for linear operations. The analog optical processor presented has been integrated with a personal computer to provide full functionality and is the first demonstration of an optical linear algebra processor with a throughput greater than 100 Mops. The optical vector-matrix processor consists of a laser diode source, an acoustooptical modulator array to input the vector information, a liquid crystal spatial light modulator to input the matrix information, an avalanche photodiode array to read out the result vector of the vector-matrix multiplication, as well as transport optics and the electronics necessary to drive the optical modulators and interface to the computer. The intent of this research is to provide a low cost, highly energy efficient coprocessor for linear operations. Measurements of the analog accuracy of the processor performing 128 Mops are presented along with an assessment of the implications for future systems. A range of noise sources, including cross-talk, source amplitude fluctuations, shot noise at the detector, and non-linearities of the optoelectronic components are measured and compared to determine the most significant source of error. The possibilities for reducing these sources of error are discussed. Also, the total error is compared with that expected from a statistical analysis of the individual components and their relation to the vector-matrix operation. The sufficiency of the measured accuracy of the processor is compared with that required for a range of typical problems. Calculations resolving alloy concentrations from spectral plume data of rocket engines are implemented on the optical processor, demonstrating its sufficiency for this problem. We also show how this technology can be easily extended to a 100 x 100 10 MHz (200 Gops) processor.

  9. Anomaly detection of turbopump vibration in Space Shuttle Main Engine using statistics and neural networks

    NASA Technical Reports Server (NTRS)

    Lo, C. F.; Wu, K.; Whitehead, B. A.

    1993-01-01

    Statistical and neural network methods have been applied to investigate the feasibility of detecting anomalies in turbopump vibration of the SSME. The anomalies are detected based on the amplitudes of the peaks at the fundamental and harmonic frequencies in the power spectral density. These data are reduced to the proper format from sensor data measured by strain gauges and accelerometers. Both methods are capable of detecting the vibration anomalies. The statistical method requires sufficient data points to establish a reasonable statistical distribution data bank, and is applicable for on-line operation. The neural network method likewise needs a sufficient data basis to train the networks. The testing procedure can be utilized at any time so long as the characteristics of the components remain unchanged.
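
    A generic version of the statistical approach can be sketched as follows: estimate the power spectral density of each vibration segment, track the peak amplitude near the fundamental (synchronous) frequency, and flag segments whose peak deviates strongly from the baseline distribution. The signal, frequencies and threshold below are synthetic stand-ins, not SSME sensor data.

```python
# Flag anomalous vibration segments by the spectral peak amplitude near
# the fundamental frequency, scored against a baseline set of segments.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(6)
fs, f0 = 5000.0, 600.0                      # sample rate and fundamental (Hz, toy)
t = np.arange(0, 1.0, 1.0 / fs)

def segment(amplitude):
    return amplitude * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.5, t.size)

segments = [segment(1.0) for _ in range(30)] + [segment(3.0)]   # last one anomalous

def peak_power(x):
    f, pxx = welch(x, fs=fs, nperseg=1024)
    band = (f > f0 - 20) & (f < f0 + 20)
    return pxx[band].max()

peaks = np.array([peak_power(s) for s in segments])
baseline_mean, baseline_std = peaks[:30].mean(), peaks[:30].std(ddof=1)
z = (peaks - baseline_mean) / baseline_std
print("flagged segments:", np.where(z > 3.0)[0])
```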

  10. Edge co-occurrences can account for rapid categorization of natural versus animal images

    NASA Astrophysics Data System (ADS)

    Perrinet, Laurent U.; Bednar, James A.

    2015-06-01

    Making a judgment about the semantic category of a visual scene, such as whether it contains an animal, is typically assumed to involve high-level associative brain areas. Previous explanations require progressively analyzing the scene hierarchically at increasing levels of abstraction, from edge extraction to mid-level object recognition and then object categorization. Here we show that the statistics of edge co-occurrences alone are sufficient to perform a rough yet robust (translation, scale, and rotation invariant) scene categorization. We first extracted the edges from images using a scale-space analysis coupled with a sparse coding algorithm. We then computed the “association field” for different categories (natural, man-made, or containing an animal) by computing the statistics of edge co-occurrences. These differed strongly, with animal images having more curved configurations. We show that this geometry alone is sufficient for categorization, and that the pattern of errors made by humans is consistent with this procedure. Because these statistics could be measured as early as the primary visual cortex, the results challenge widely held assumptions about the flow of computations in the visual system. The results also suggest new algorithms for image classification and signal processing that exploit correlations between low-level structure and the underlying semantic category.
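
    The second-order statistic used above, co-occurrences of edge elements as a function of their relative geometry, can be sketched by histogramming pairwise distance and relative orientation over a set of edge elements. The random edge elements below are stand-ins for the output of the scale-space and sparse-coding front end, which is not reproduced here.

```python
# Sketch of second-order edge statistics: histogram co-occurrences of
# edge elements (x, y, orientation) over relative distance and relative
# orientation.  Edge elements here are random placeholders.
import numpy as np

rng = np.random.default_rng(9)
n_edges = 300
edges = np.column_stack([rng.uniform(0, 256, n_edges),      # x position
                         rng.uniform(0, 256, n_edges),      # y position
                         rng.uniform(0, np.pi, n_edges)])   # orientation

i, j = np.triu_indices(n_edges, k=1)
dist = np.hypot(edges[i, 0] - edges[j, 0], edges[i, 1] - edges[j, 1])
dtheta = np.abs(edges[i, 2] - edges[j, 2]) % np.pi

hist, _, _ = np.histogram2d(dist, dtheta,
                            bins=[8, 8], range=[[0, 64], [0, np.pi]])
hist /= hist.sum()            # joint co-occurrence probabilities
print(hist.round(3))
```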

  11. Weighting Statistical Inputs for Data Used to Support Effective Decision Making During Severe Emergency Weather and Environmental Events

    NASA Technical Reports Server (NTRS)

    Gardner, Adrian

    2010-01-01

    National Aeronautical and Space Administration (NASA) weather and atmospheric environmental organizations are insatiable consumers of geophysical, hydrometeorological and solar weather statistics. The expanding array of internet-worked sensors producing targeted physical measurements has generated an almost factorial explosion of near real-time inputs to topical statistical datasets. Normalizing and value-based parsing of such statistical datasets in support of time-constrained weather and environmental alerts and warnings is essential, even with dedicated high-performance computational capabilities. What are the optimal indicators for advanced decision making? How do we recognize the line between sufficient statistical sampling and excessive, mission destructive sampling ? How do we assure that the normalization and parsing process, when interpolated through numerical models, yields accurate and actionable alerts and warnings? This presentation will address the integrated means and methods to achieve desired outputs for NASA and consumers of its data.

  12. Distinguishing Positive Selection From Neutral Evolution: Boosting the Performance of Summary Statistics

    PubMed Central

    Lin, Kao; Li, Haipeng; Schlötterer, Christian; Futschik, Andreas

    2011-01-01

    Summary statistics are widely used in population genetics, but they suffer from the drawback that no simple sufficient summary statistic exists that captures all information required to distinguish different evolutionary hypotheses. Here, we apply boosting, a recent statistical method that combines simple classification rules to maximize their joint predictive performance. We show that our implementation of boosting has high power to detect selective sweeps, and that demographic events such as bottlenecks do not result in a large excess of false positives. A comparison with other neutrality tests shows that our boosting implementation performs well. Furthermore, we evaluated the relative contribution of different summary statistics to the identification of selection and found that for recent sweeps integrated haplotype homozygosity is very informative, whereas older sweeps are better detected by Tajima's π. Overall, Watterson's θ was found to contribute the most information for distinguishing between bottlenecks and selection. PMID:21041556
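
    The general idea of boosting a set of per-window summary statistics can be sketched with scikit-learn's AdaBoost; the feature columns and simulated values below are placeholders standing in for statistics such as Watterson's θ, π, Tajima's D, and integrated haplotype homozygosity, not the paper's actual simulation pipeline.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Placeholder summary statistics per genomic window under two hypotheses.
n_windows = 2000
neutral = rng.normal(loc=[1.0, 1.0, 0.0, 0.1], scale=0.3, size=(n_windows, 4))
sweep = rng.normal(loc=[0.6, 0.4, -1.0, 0.6], scale=0.3, size=(n_windows, 4))

X = np.vstack([neutral, sweep])
y = np.concatenate([np.zeros(n_windows), np.ones(n_windows)])   # 1 = selective sweep
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Boosting combines many weak classification rules built on the summary statistics.
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
print("relative contribution of each statistic:", clf.feature_importances_.round(3))
```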

  13. Time series analysis for minority game simulations of financial markets

    NASA Astrophysics Data System (ADS)

    Ferreira, Fernando F.; Francisco, Gerson; Machado, Birajara S.; Muruganandam, Paulsamy

    2003-04-01

    The recently introduced minority game (MG) model provides promising insights into the evolution of prices, indices and rates in the financial markets. In this paper we perform a time series analysis of the model employing tools from statistics, dynamical systems theory and stochastic processes. Using benchmark systems and a financial index for comparison, several conclusions are obtained about the generating mechanism for this kind of evolution. The motion is deterministic, driven by occasional random external perturbations. When the interval between two successive perturbations is sufficiently large, one can find low-dimensional chaos in this regime. However, the full motion of the MG model is found to be similar to that of the first differences of the S&P 500 index: stochastic, nonlinear and (unit root) stationary.

  14. Validity criteria for Fermi’s golden rule scattering rates applied to metallic nanowires

    NASA Astrophysics Data System (ADS)

    Moors, Kristof; Sorée, Bart; Magnus, Wim

    2016-09-01

    Fermi’s golden rule underpins the investigation of mobile carriers propagating through various solids, being a standard tool to calculate their scattering rates. As such, it provides a perturbative estimate under the implicit assumption that the effect of the interaction Hamiltonian which causes the scattering events is sufficiently small. To check the validity of this assumption, we present a general framework to derive simple validity criteria in order to assess whether the scattering rates can be trusted for the system under consideration, given its statistical properties such as average size, electron density, impurity density et cetera. We derive concrete validity criteria for metallic nanowires with conduction electrons populating a single parabolic band subjected to different elastic scattering mechanisms: impurities, grain boundaries and surface roughness.
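
    For reference, the scattering rate whose validity the criteria assess has the standard first-order (golden rule) form below, written in textbook notation rather than the paper's specific nanowire expressions.

```latex
% First-order (Fermi golden rule) transition rate out of state |i>, where H' is the
% interaction Hamiltonian causing the scattering events and the sum runs over final states:
\Gamma_i = \frac{2\pi}{\hbar} \sum_{f} \left| \langle f | H' | i \rangle \right|^{2}
           \, \delta\!\left(E_f - E_i\right)
```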

  15. Sound Processing Features for Speaker-Dependent and Phrase-Independent Emotion Recognition in Berlin Database

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, Christos Nikolaos; Vovoli, Eftichia

    An emotion recognition framework based on sound processing could improve services in human-computer interaction. Various quantitative speech features obtained from sound processing of acted speech were tested as to whether they are sufficient to discriminate between seven emotions. Multilayered perceptrons were trained to classify gender and emotions on the basis of a 24-input vector, which provides information about the prosody of the speaker over the entire sentence using statistics of sound features. Several experiments were performed and the results are presented analytically. Emotion recognition was successful when speakers and utterances were “known” to the classifier. However, severe misclassifications occurred in the utterance-independent framework. Nevertheless, the proposed feature vector achieved promising results for utterance-independent recognition of high- and low-arousal emotions.
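
    A minimal sketch of the classifier wiring described above, using scikit-learn's MLPClassifier on 24-dimensional prosodic feature vectors; the feature extraction itself (pitch and energy statistics over the utterance) is stubbed out with random values, so only the multilayer-perceptron setup reflects the framework, and the layer sizes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)

# Stand-in data: 24 prosodic statistics per utterance (e.g. pitch/energy means, ranges,
# slopes); labels stand for the seven emotion classes of the Berlin database.
n_utterances, n_features, n_classes = 500, 24, 7
X = rng.standard_normal((n_utterances, n_features))
y = rng.integers(0, n_classes, size=n_utterances)

# Multilayer perceptron over the 24-input vector.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
)
model.fit(X, y)
print("training accuracy on the stand-in data:", round(model.score(X, y), 3))
```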

  16. Theoretical analysis of HVAC duct hanger systems

    NASA Technical Reports Server (NTRS)

    Miller, R. D.

    1987-01-01

    Several methods are presented which, together, may be used in the analysis of duct hanger systems over a wide range of frequencies. The finite element method (FEM) and component mode synthesis (CMS) method are used for low- to mid-frequency range computations and have been shown to yield reasonably close results. The statistical energy analysis (SEA) method yields predictions which agree with the CMS results for the 800 to 1000 Hz range provided that a sufficient number of modes participate. The CMS approach has been shown to yield valuable insight into the mid-frequency range of the analysis. It has been demonstrated that it is possible to conduct an analysis of a duct/hanger system in a cost-effective way for a wide frequency range, using several methods which overlap for several frequency bands.

  17. Metaresearch for Evaluating Reproducibility in Ecology and Evolution.

    PubMed

    Fidler, Fiona; Chee, Yung En; Wintle, Bonnie C; Burgman, Mark A; McCarthy, Michael A; Gordon, Ascelin

    2017-03-01

    Recent replication projects in other disciplines have uncovered disturbingly low levels of reproducibility, suggesting that those research literatures may contain unverifiable claims. The conditions contributing to irreproducibility in other disciplines are also present in ecology. These include a large discrepancy between the proportion of "positive" or "significant" results and the average statistical power of empirical research, incomplete reporting of sampling stopping rules and results, journal policies that discourage replication studies, and a prevailing publish-or-perish research culture that encourages questionable research practices. We argue that these conditions constitute sufficient reason to systematically evaluate the reproducibility of the evidence base in ecology and evolution. In some cases, the direct replication of ecological research is difficult because of strong temporal and spatial dependencies, so here, we propose metaresearch projects that will provide proxy measures of reproducibility.

  18. Cosmic microwave background power asymmetry from non-Gaussian modulation.

    PubMed

    Schmidt, Fabian; Hui, Lam

    2013-01-04

    Non-Gaussianity in the inflationary perturbations can couple observable scales to modes of much longer wavelength (even superhorizon), leaving as a signature a large-angle modulation of the observed cosmic microwave background power spectrum. This provides an alternative origin for a power asymmetry that is otherwise often ascribed to a breaking of statistical isotropy. The non-Gaussian modulation effect can be significant even for typical ~10⁻⁵ perturbations while respecting current constraints on non-Gaussianity if the squeezed limit of the bispectrum is sufficiently infrared divergent. Just such a strongly infrared-divergent bispectrum has been claimed for inflation models with a non-Bunch-Davies initial state, for instance. Upper limits on the observed cosmic microwave background power asymmetry place stringent constraints on the duration of inflation in such models.

  19. Influenza forecasting with Google Flu Trends.

    PubMed

    Dugas, Andrea Freyer; Jalalpour, Mehdi; Gel, Yulia; Levin, Scott; Torcaso, Fred; Igusa, Takeru; Rothman, Richard E

    2013-01-01

    We developed a practical influenza forecast model based on real-time, geographically focused, and easy-to-access data, designed to provide individual medical centers with advanced warning of the expected number of influenza cases, thus allowing for sufficient time to implement interventions. Secondly, we evaluated the effects of incorporating a real-time influenza surveillance system, Google Flu Trends, and meteorological and temporal information on forecast accuracy. Forecast models designed to predict one week in advance were developed from weekly counts of confirmed influenza cases over seven seasons (2004-2011) divided into seven training and out-of-sample verification sets. Forecasting procedures using classical Box-Jenkins, generalized linear models (GLM), and generalized linear autoregressive moving average (GARMA) methods were employed to develop the final model and assess the relative contribution of external variables such as Google Flu Trends, meteorological data, and temporal information. A GARMA(3,0) forecast model with Negative Binomial distribution integrating Google Flu Trends information provided the most accurate influenza case predictions. The model, on average, predicts weekly influenza cases during 7 out-of-sample outbreaks within 7 cases for 83% of estimates. Google Flu Trends data were the only source of external information to provide statistically significant forecast improvements over the base model in four of the seven out-of-sample verification sets. Overall, the p-value of adding this external information to the model is 0.0005. The other exogenous variables did not yield a statistically significant improvement in any of the verification sets. Integer-valued autoregression of influenza cases provides a strong base forecast model, which is enhanced by the addition of Google Flu Trends, confirming the predictive capabilities of search-query-based syndromic surveillance. This accessible and flexible forecast model can be used by individual medical centers to provide advanced warning of future influenza cases.
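
    The forecasting idea can be sketched with a simplified stand-in for the GARMA(3,0) model: regress the weekly count on three autoregressive lags plus a lagged Google Flu Trends covariate using a negative binomial GLM in statsmodels. The data below are synthetic and the dispersion parameter is an assumption; this is not the paper's fitted model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Synthetic weekly series: confirmed influenza cases and a Google Flu Trends index.
weeks = 200
gft = 50 + 40 * np.sin(np.arange(weeks) * 2 * np.pi / 52) + rng.normal(0, 5, weeks)
cases = rng.poisson(np.maximum(1.0, 0.15 * gft + rng.normal(0, 2, weeks)))

df = pd.DataFrame({"cases": cases, "gft": gft})
for lag in (1, 2, 3):                          # three autoregressive lags
    df[f"cases_lag{lag}"] = df["cases"].shift(lag)
df["gft_lag1"] = df["gft"].shift(1)            # surveillance index from the previous week
df = df.dropna()

X = sm.add_constant(df[["cases_lag1", "cases_lag2", "cases_lag3", "gft_lag1"]])
y = df["cases"]

# Negative binomial GLM: overdispersed counts, one-week-ahead structure.
model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(model.params.round(3))

# One-week-ahead forecast built from the most recent lags.
next_exog = pd.DataFrame({
    "const": [1.0],
    "cases_lag1": [df["cases"].iloc[-1]],
    "cases_lag2": [df["cases"].iloc[-2]],
    "cases_lag3": [df["cases"].iloc[-3]],
    "gft_lag1": [df["gft"].iloc[-1]],
})
print("one-week-ahead forecast:", float(np.asarray(model.predict(next_exog))[0]))
```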

  20. Modeling Human-Computer Decision Making with Covariance Structure Analysis.

    ERIC Educational Resources Information Center

    Coovert, Michael D.; And Others

    Arguing that sufficient theory exists about the interplay between human information processing, computer systems, and the demands of various tasks to construct useful theories of human-computer interaction, this study presents a structural model of human-computer interaction and reports the results of various statistical analyses of this model.…

  1. 9 CFR 3.80 - Primary enclosures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... injuring themselves; and (xi) Provide sufficient space for the nonhuman primates to make normal postural... maintained so as to provide sufficient space to allow each nonhuman primate to make normal postural... postural adjustments and movements within the primary enclosure. Different species of prosimians vary in...

  2. Statistical error propagation in ab initio no-core full configuration calculations of light nuclei

    DOE PAGES

    Navarro Pérez, R.; Amaro, J. E.; Ruiz Arriola, E.; ...

    2015-12-28

    We propagate the statistical uncertainty of experimental NN scattering data into the binding energies of ³H and ⁴He. Here, we also study the sensitivity of the magnetic moment and proton radius of ³H to changes in the NN interaction. The calculations are made with the no-core full configuration method in a sufficiently large harmonic oscillator basis. For those light nuclei we obtain ΔE_stat(³H) = 0.015 MeV and ΔE_stat(⁴He) = 0.055 MeV.

  3. Reciprocity in directed networks

    NASA Astrophysics Data System (ADS)

    Yin, Mei; Zhu, Lingjiong

    2016-04-01

    Reciprocity is an important characteristic of directed networks and has been widely used in the modeling of the World Wide Web, email, social, and other complex networks. In this paper, we take a statistical physics point of view and study the limiting entropy and free energy densities from the microcanonical ensemble, the canonical ensemble, and the grand canonical ensemble whose sufficient statistics are given by edge and reciprocal densities. The sparse case is also studied for the grand canonical ensemble. Extensions to more general reciprocal models, including reciprocal triangle and star densities, will likewise be discussed.
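
    The two sufficient statistics named above, the edge density and the reciprocal-edge density, can be computed directly from a directed adjacency matrix; the small numpy sketch below does so for a random directed graph (the graph itself is just an illustrative input).

```python
import numpy as np

rng = np.random.default_rng(6)

# Random directed graph on n nodes (no self-loops), adjacency matrix A.
n, p = 200, 0.05
A = (rng.random((n, n)) < p).astype(int)
np.fill_diagonal(A, 0)

n_pairs = n * (n - 1)                              # ordered pairs of distinct nodes
edge_density = A.sum() / n_pairs                   # sufficient statistic 1
reciprocal_density = (A * A.T).sum() / n_pairs     # sufficient statistic 2 (reciprocated arcs)

# Conventional reciprocity: fraction of existing directed edges that are reciprocated.
reciprocity = (A * A.T).sum() / A.sum()
print(f"edge density {edge_density:.4f}, reciprocal density {reciprocal_density:.4f}, "
      f"reciprocity {reciprocity:.4f}")
```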

  4. Entrepreneurship by any other name: self-sufficiency versus innovation.

    PubMed

    Parker Harris, Sarah; Caldwell, Kate; Renko, Maija

    2014-01-01

    Entrepreneurship has been promoted as an innovative strategy to address the employment of people with disabilities. Research has predominantly focused on the self-sufficiency aspect without fully integrating entrepreneurship literature in the areas of theory, systems change, and demonstration projects. Subsequently there are gaps in services, policies, and research in this field that, in turn, have limited our understanding of the support needs and barriers or facilitators of entrepreneurs with disabilities. A thorough analysis of the literature in these areas led to the development of two core concepts that need to be addressed in integrating entrepreneurship into disability employment research and policy: clarity in operational definitions and better disability statistics and outcome measures. This article interrogates existing research and policy efforts in this regard to argue for a necessary shift in the field from focusing on entrepreneurship as self-sufficiency to understanding entrepreneurship as innovation.

  5. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling

    NASA Astrophysics Data System (ADS)

    Núñez, M.; Robie, T.; Vlachos, D. G.

    2017-10-01

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
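
    A toy Gillespie-type sketch of the rate-constant rescaling idea: a fast quasi-equilibrated reversible step is slowed by a scaling factor so that the same event budget reaches far longer simulated times while the slow step remains rate-determining. The three-reaction network, rate constants, and scaling factor are invented for illustration and do not reproduce the paper's lattice KMC or its statistical acceptance criteria.

```python
import numpy as np

rng = np.random.default_rng(7)

def kmc(scale_fast=1.0, n_events=50_000):
    """Toy well-mixed KMC: A <-> A* (fast, reversible), A* -> B (slow, irreversible)."""
    k = np.array([1e6 * scale_fast,   # A  -> A*  (fast; rescaled)
                  1e6 * scale_fast,   # A* -> A   (fast; rescaled)
                  1.0])               # A* -> B   (slow, rate-determining)
    state = {"A": 1000, "Astar": 0, "B": 0}
    t = 0.0
    for _ in range(n_events):
        rates = np.array([k[0] * state["A"], k[1] * state["Astar"], k[2] * state["Astar"]])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)          # exponential waiting time
        r = rng.choice(3, p=rates / total)         # pick the next reaction
        if r == 0:
            state["A"] -= 1; state["Astar"] += 1
        elif r == 1:
            state["Astar"] -= 1; state["A"] += 1
        else:
            state["Astar"] -= 1; state["B"] += 1
    return state["B"], t

# Without rescaling the fast pair consumes the whole event budget; with rescaling the
# same number of events covers far more simulated time and actually samples B formation.
for scale in (1.0, 1e-4):
    b, t = kmc(scale)
    print(f"fast-rate scale {scale:g}: simulated time {t:.3g}, B produced {b}")
```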

  6. Introducing local property tax for fiscal decentralization and local authority autonomy

    NASA Astrophysics Data System (ADS)

    Dimopoulos, Thomas; Labropoulos, Tassos; Hadjimitsis, Diafantos G.

    2015-06-01

    Charles Tiebout (1956), in his work "A Pure Theory of Local Expenditures", provides a vision of the workings of the local public sector, acknowledging many similarities to the features of a competitive market, however omitting any references to local taxation. Contrary to other researchers' claim that the Tiebout model and the theory of fiscal decentralization are by no means synonymous, this paper aims to expand Tiebout's theory, by adding the local property tax in the context, introducing a fair, ad valorem property taxation system based on the automated assessment of the value of real estate properties within the boundaries of local authorities. Computer Assisted Mass Appraisal methodology integrated with Remote Sensing technology and GIS analysis is applied to local authorities' property registries and cadastral data, building a spatial relational database and providing data to be statistically processed through Multiple Regression Analysis modeling. The proposed scheme accomplishes economy of scale using CAMA procedures on one hand, but also succeeds in making local authorities self-sufficient through a decentralized, fair, locally calibrated property taxation model, providing rational income administration.
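
    The Multiple Regression Analysis step of such a CAMA pipeline can be sketched as a hedonic model: regress log sale price on parcel attributes drawn from the cadastral registry (including GIS-derived variables), then apply the fitted model to assess unsold parcels as the tax base. The attribute names, coefficients, and data below are invented placeholders, not the proposed locally calibrated model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)

# Invented cadastral sample: attributes a CAMA registry might hold per sold parcel.
n = 400
sales = pd.DataFrame({
    "floor_area": rng.uniform(50, 250, n),       # m^2
    "age": rng.integers(0, 60, n),               # years
    "dist_centre": rng.uniform(0.2, 15, n),      # km to town centre (GIS-derived)
    "sea_view": rng.integers(0, 2, n),           # remote-sensing / GIS flag
})
sales["price"] = np.exp(
    10.5 + 0.008 * sales.floor_area - 0.004 * sales.age
    - 0.03 * sales.dist_centre + 0.10 * sales.sea_view
    + rng.normal(0, 0.10, n)
)

# Hedonic (log-linear) Multiple Regression Analysis model.
model = smf.ols("np.log(price) ~ floor_area + age + dist_centre + sea_view", data=sales).fit()
print(model.params.round(4))

# Mass appraisal: assessed value for an unsold parcel, used as the ad valorem tax base.
parcel = pd.DataFrame({"floor_area": [120.0], "age": [15], "dist_centre": [3.2], "sea_view": [1]})
assessed_value = float(np.exp(model.predict(parcel)).iloc[0])
print(f"assessed value: {assessed_value:,.0f}")
```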

  7. With directed study before a 4-day operating room management course, trust in the content did not change progressively during the classroom time.

    PubMed

    Dexter, Franklin; Epstein, Richard H; Fahy, Brenda G; Van Swol, Lyn M

    2017-11-01

    A 4-day course in operating room (OR) management is sufficient to provide anesthesiologists with the knowledge and problem-solving skills needed to participate in projects of the systems-based-practice competency. Anesthesiologists may need to learn fewer topics when the objective is, instead, limited to comprehension of decision-making on the day of surgery. We tested the hypothesis that trust in course content would not increase further after completion of topics related to OR decision-making on the day of surgery. Panel survey. A 4-day, 35-hour course in OR management. Mandatory assignments before classes were: 1) review of statistics at a level slightly less than required of anesthesiology residents by the American Board of Anesthesiology; and 2) reading of peer-reviewed published articles while learning the scientific vocabulary. N=31 course participants who each attended 1 of 4 identical courses. At the end of each of the 4 days, course participants completed a 9-item scale assessing trust in the course content, namely, its quality, usefulness, and reliability. Cronbach alpha for the 1 to 7 trust scale was 0.94. The means ± SD of the scores were 5.86±0.80 after day #1, 5.81±0.76 after day #2, 5.80±0.77 after day #3, and 5.97±0.76 after day #4. Multiple methods of statistical analysis all found that there was no significant effect of the number of days of the course on trust in the content (all P≥0.30). Trust in the course content did not increase after the end of the 1st day. Therefore, statistics review, reading, and the 1st day of the course appear sufficient when the objective of teaching OR management is not that participants will learn how to make the decisions, but will comprehend them and trust in the information underlying knowledgeable decision-making. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Downscaling wind and wavefields for 21st century coastal flood hazard projections in a region of complex terrain

    USGS Publications Warehouse

    O'Neill, Andrea; Erikson, Li; Barnard, Patrick

    2017-01-01

    While global climate models (GCMs) provide useful projections of near-surface wind vectors into the 21st century, their resolution is not sufficient for use in regional wave modeling. Statistically downscaled GCM projections from Multivariate Adaptive Constructed Analogues provide daily averaged near-surface winds at an appropriate spatial resolution for wave modeling within the orographically complex region of San Francisco Bay, but greater resolution in time is needed to capture the peak of storm events. Short-duration high wind speeds, on the order of hours, are usually excluded in statistically downscaled climate models and are of key importance in wave and subsequent coastal flood modeling. Here we present a temporal downscaling approach, similar to constructed analogues, for near-surface winds suitable for use in local wave models and evaluate changes in wind and wave conditions for the 21st century. Reconstructed hindcast winds (1975–2004) recreate important extreme wind values within San Francisco Bay. A computationally efficient method for simulating wave heights over long time periods was used to screen for extreme events. Wave hindcasts show resultant maximum wave heights of 2.2 m possible within the Bay. Changes in extreme over-water wind speeds suggest contrasting trends within the different regions of San Francisco Bay, but 21st-century projections show little change in the overall magnitude of extreme winds and locally generated waves.

  9. Hydrologic consistency as a basis for assessing complexity of monthly water balance models for the continental United States

    NASA Astrophysics Data System (ADS)

    Martinez, Guillermo F.; Gupta, Hoshin V.

    2011-12-01

    Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.

  10. Vapor Hydrogen Peroxide as Alternative to Dry Heat Microbial Reduction

    NASA Technical Reports Server (NTRS)

    Cash, Howard A.; Kern, Roger G.; Chung, Shirley Y.; Koukol, Robert C.; Barengoltz, Jack B.

    2006-01-01

    The Jet Propulsion Laboratory, in conjunction with the NASA Planetary Protection Officer, has selected the vapor phase hydrogen peroxide (VHP) sterilization process for continued development as a NASA-approved sterilization technique for spacecraft subsystems and systems. The goal is to include this technique, with appropriate specification, in NPG8020.12C as a low-temperature complementary technique to the dry heat sterilization process. A series of experiments was conducted in vacuum to determine VHP process parameters that provided significant reductions in spore viability while allowing survival of sufficient spores for statistically significant enumeration. With this knowledge of D values, sensible margins can be applied in a planetary protection specification. The outcome of this study provided an optimization of test sterilizer process conditions: VHP concentration, process duration, a process temperature range for which the worst-case D value may be imposed, a process humidity range for which the worst-case D value may be imposed, and robustness to selected spacecraft material substrates.
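
    The D value in such studies is the exposure time giving a one-log (90%) reduction in viable spores, and it is commonly estimated from a log-linear survivor curve; the sketch below fits invented enumeration data and then applies a simple margin of the kind a specification might impose (the data and the 2x factor are illustrative, not from this study).

```python
import numpy as np

# Invented survivor counts: viable spores recovered after VHP exposure times (minutes).
exposure_min = np.array([0, 5, 10, 15, 20, 25])
survivors = np.array([1.0e6, 2.4e5, 6.1e4, 1.4e4, 3.2e3, 9.0e2])

# Log-linear survivor model: log10(N) = log10(N0) - t / D.
slope, intercept = np.polyfit(exposure_min, np.log10(survivors), 1)
d_value = -1.0 / slope
print(f"estimated D value: {d_value:.1f} min per log reduction")

# Example margin: exposure time for a 6-log reduction with a 2x safety factor.
print(f"6-log cycle with 2x margin: {2 * 6 * d_value:.0f} min")
```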

  11. [Rescue helicopters in secondary missions].

    PubMed

    Gorgass, B; Frey, G

    1977-11-03

    During the last five years, we have had to fly 560 primary and 1150 secondary missions with the rescue helicopter of the Ulm Rescue Centre. This ratio of approximately 1:2 is distinctly different from the numbers obtained at other helicopter bases. The geographical location and structure of the hospitals within range of the Ulm rescue helicopter account for the large proportion of urgent secondary missions. The evaluation of these secondary missions concurs with the ADAC statistics and shows that the quick transport of the emergency doctor to the scene of the emergency is only one component of the functions of the rescue helicopter. During primary and secondary missions, the ability to transport emergency patients to the nearest qualified hospital by helicopter, which is a mobile intensive care unit, is of equal importance. In the future, rescue helicopters will have to take these requirements into account by providing the necessary equipment and, more especially, by providing sufficient space to carry out emergency diagnostic and therapeutic treatment.

  12. Standoff imaging of a masked human face using a 670 GHz high resolution radar

    NASA Astrophysics Data System (ADS)

    Kjellgren, Jan; Svedin, Jan; Cooper, Ken B.

    2011-11-01

    This paper presents an exploratory attempt to use high-resolution radar measurements for face identification in forensic applications. An imaging radar system developed by JPL was used to measure a human face at 670 GHz. Frontal views of the face were measured both with and without a ski mask at a range of 25 m. The realized spatial resolution was roughly 1 cm in all three dimensions. The surfaces of the ski mask and the face were detected by using the two dominating reflections from amplitude data. Various methods for visualization of these surfaces are presented. The possibility to use radar data to determine certain face distance measures between well-defined face landmarks, typically used for anthropometric statistics, was explored. The measures used here were face length, frontal breadth and interpupillary distance. In many cases the radar system seems to provide sufficient information to exclude an innocent subject from suspicion. For an accurate identification it is believed that a system must provide significantly more information.

  13. Estimation of pyrethroid pesticide intake using regression ...

    EPA Pesticide Factsheets

    Population-based estimates of pesticide intake are needed to characterize exposure for particular demographic groups based on their dietary behaviors. Regression modeling performed on measurements of selected pesticides in composited duplicate diet samples allowed (1) estimation of pesticide intakes for a defined demographic community, and (2) comparison of dietary pesticide intakes between the composite and individual samples. Extant databases were useful for assigning individual samples to composites, but they could not provide the breadth of information needed to facilitate measurable levels in every composite. Composite sample measurements were found to be good predictors of pyrethroid pesticide levels in their individual sample constituents when sufficient measurements are available above the method detection limit. Statistical inference shows little evidence of differences between individual and composite measurements and suggests that regression modeling of food groups based on composite dietary samples may provide an effective tool for estimating dietary pesticide intake for a defined population. The research presented in the journal article will improve the community's ability to determine exposures through the dietary route with a less burdensome and costly method.

  14. Refinement of Objective Motion Cueing Criteria Investigation Based on Three Flight Tasks

    NASA Technical Reports Server (NTRS)

    Zaal, Petrus M. T.; Schroeder, Jeffery A.; Chung, William W.

    2017-01-01

    The objective of this paper is to refine objective motion cueing criteria for commercial transport simulators based on pilots' performance in three flying tasks. Actuator hardware and software algorithms determine motion cues. Today, during a simulator qualification, engineers objectively evaluate only the hardware. Pilot inspectors subjectively assess the overall motion cueing system (i.e., hardware plus software); however, it is acknowledged that pinpointing any deficiencies that might arise to either hardware or software is challenging. ICAO 9625 has an Objective Motion Cueing Test (OMCT), which is now a required test in the FAA's part 60 regulations for new devices, evaluating the software and hardware together; however, it lacks accompanying fidelity criteria. Hosman has documented OMCT results for a statistical sample of eight simulators which is useful, but having validated criteria would be an improvement. In a previous experiment, we developed initial objective motion cueing criteria that this paper is trying to refine. Sinacori suggested simple criteria which are in reasonable agreement with much of the literature. These criteria often necessitate motion displacements greater than most training simulators can provide. While some of the previous work has used transport aircraft in their studies, the majority used fighter aircraft or helicopters. Those that used transport aircraft considered degraded flight characteristics. As a result, earlier criteria lean more towards being sufficient, rather than necessary, criteria for typical transport aircraft training applications. Considering the prevalence of 60-inch, six-legged hexapod training simulators, a relevant question is "what are the necessary criteria that can be used with the ICAO 9625 diagnostic?" This study adds to the literature as follows. First, it examines well-behaved transport aircraft characteristics, but in three challenging tasks. The tasks are equivalent to the ones used in our previous experiment, allowing us to directly compare the results and add to the previous data. Second, it uses the Vertical Motion Simulator (VMS), the world's largest vertical displacement simulator. This allows inclusion of relatively large motion conditions, much larger than a typical training simulator can provide. Six new motion configurations were used that explore the motion responses between the initial objective motion cueing boundaries found in a previous experiment and what current hexapod simulators typically provide. Finally, a sufficiently large pilot pool added statistical reliability to the results.

  15. Towards Enhanced Underwater Lidar Detection via Source Separation

    NASA Astrophysics Data System (ADS)

    Illig, David W.

    Interest in underwater optical sensors has grown as technologies enabling autonomous underwater vehicles have been developed. Propagation of light through water is complicated by the dual challenges of absorption and scattering. While absorption can be reduced by operating in the blue-green region of the visible spectrum, reducing scattering is a more significant challenge. Collection of scattered light negatively impacts underwater optical ranging, imaging, and communications applications. This thesis concentrates on the ranging application, where scattering reduces operating range as well as range accuracy. The focus of this thesis is on the problem of backscatter, which can create a "clutter" return that may obscure submerged target(s) of interest. The main contributions of this thesis are explorations of signal processing approaches to increase the separation between the target and backscatter returns. Increasing this separation allows detection of weak targets in the presence of strong scatter, increasing both operating range and range accuracy. Simulation and experimental results will be presented for a variety of approaches as functions of water clarity and target position. This work provides several novel contributions to the underwater lidar field: 1. Quantification of temporal separation approaches: While temporal separation has been studied extensively, this work provides a quantitative assessment of the extent to which both high frequency modulation and spatial filter approaches improve the separation between target and backscatter. 2. Development and assessment of frequency separation: This work includes the first frequency-based separation approach for underwater lidar, in which the channel frequency response is measured with a wideband waveform. Transforming to the time-domain gives a channel impulse response, in which target and backscatter returns may appear in unique range bins and thus be separated. 3. Development and assessment of statistical separation: The first investigations of statistical separation approaches for underwater lidar are presented. By demonstrating that target and backscatter returns have different statistical properties, a new separation axis is opened. This work investigates and quantifies performance of three statistical separation approaches. 4. Application of detection theory to underwater lidar: While many similar applications use detection theory to assess performance, less development has occurred in the underwater lidar field. This work applies these concepts to statistical separation approaches, providing another perspective in which to assess performance. In addition, by using detection theory approaches, statistical metrics can be used to associate a level of confidence in each ranging measurement. 5. Preliminary investigation of forward scatter suppression: If backscatter is sufficiently suppressed, forward scattering becomes a performance-limiting factor. This work presents a proof-of-concept demonstration of the potential for statistical separation approaches to suppress both forward and backward scatter. These results provide a demonstration of the capability that signal processing has to improve separation between target and backscatter. Separation capability improves in the transition from temporal to frequency to statistical separation approaches, with the statistical separation approaches improving target detection sensitivity by as much as 30 dB. 
Ranging and detection results demonstrate the enhanced performance this would allow in ranging applications. This increased performance is an important step in moving underwater lidar capability towards the requirements of the next generation of sensors.

  16. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.

    PubMed

    Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model.

  17. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems

    PubMed Central

    Williams, Richard A.; Timmis, Jon; Qwarnstrom, Eva E.

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model. PMID:27571414

  18. A Methodology for Determining Statistical Performance Compliance for Airborne Doppler Radar with Forward-Looking Turbulence Detection Capability

    NASA Technical Reports Server (NTRS)

    Bowles, Roland L.; Buck, Bill K.

    2009-01-01

    The objective of the research developed and presented in this document was to statistically assess turbulence hazard detection performance employing airborne pulse Doppler radar systems. The FAA certification methodology for forward looking airborne turbulence radars will require estimating the probabilities of missed and false hazard indications under operational conditions. Analytical approaches must be used due to the near impossibility of obtaining sufficient statistics experimentally. This report describes an end-to-end analytical technique for estimating these probabilities for Enhanced Turbulence (E-Turb) Radar systems under noise-limited conditions, for a variety of aircraft types, as defined in FAA TSO-C134. This technique provides for one means, but not the only means, by which an applicant can demonstrate compliance to the FAA directed ATDS Working Group performance requirements. Turbulence hazard algorithms were developed that derived predictive estimates of aircraft hazards from basic radar observables. These algorithms were designed to prevent false turbulence indications while accurately predicting areas of elevated turbulence risks to aircraft, passengers, and crew; and were successfully flight tested on a NASA B757-200 and a Delta Air Lines B737-800. Application of this defined methodology for calculating the probability of missed and false hazard indications taking into account the effect of the various algorithms used, is demonstrated for representative transport aircraft and radar performance characteristics.

  19. The old age health security in rural China: where to go?

    PubMed

    Dai, Baozhen

    2015-11-04

    The huge number of rural elders and the deepening health problems (e.g. growing threats of infectious diseases and chronic diseases etc.) place enormous pressure on old age health security in rural China. This study aims to provide information for policy-makers to develop effective measures for promoting rural elders' health care service access by examining the current developments and challenges confronted by the old age health security in rural China. Search resources are electronic databases, web pages of the National Bureau of Statistics of China and the National Health and Family Planning Commission of China on the internet, China Population and Employment Statistics Yearbook, China Civil Affairs' Statistical Yearbook and China Health Statistics Yearbooks etc. Articles were identified from Elsevier, Wiley, EBSCO, EMBASE, PubMed, SCI Expanded, ProQuest, and National Knowledge Infrastructure of China (CNKI) which is the most informative database in Chinese. Search terms were "rural", "China", "health security", "cooperative medical scheme", "social medical assistance", "medical insurance" or "community based medical insurance", "old", or "elder", "elderly", or "aged", "aging". Google scholar was searched with the same combination of keywords. The results showed that old age health security in rural China had expanded to all rural elders and substantially improved health care service utilization among rural elders. Increasing chronic disease prevalence rates, pressing public health issues, inefficient rural health care service provision system and lack of sufficient financing challenged the old age health security in rural China. Increasing funds from the central and regional governments for old age health security in rural China will contribute to reducing urban-rural disparities in provision of old age health security and increasing health equity among rural elders between different regions. Meanwhile, initiating provider payment reform may contribute to improving the efficiency of rural health care service provision system and promoting health care service access among rural elders.

  20. Assessment and Implication of Prognostic Imbalance in Randomized Controlled Trials with a Binary Outcome – A Simulation Study

    PubMed Central

    Chu, Rong; Walter, Stephen D.; Guyatt, Gordon; Devereaux, P. J.; Walsh, Michael; Thorlund, Kristian; Thabane, Lehana

    2012-01-01

    Background Chance imbalance in baseline prognosis of a randomized controlled trial can lead to over or underestimation of treatment effects, particularly in trials with small sample sizes. Our study aimed to (1) evaluate the probability of imbalance in a binary prognostic factor (PF) between two treatment arms, (2) investigate the impact of prognostic imbalance on the estimation of a treatment effect, and (3) examine the effect of sample size (n) in relation to the first two objectives. Methods We simulated data from parallel-group trials evaluating a binary outcome by varying the risk of the outcome, effect of the treatment, power and prevalence of the PF, and n. Logistic regression models with and without adjustment for the PF were compared in terms of bias, standard error, coverage of confidence interval and statistical power. Results For a PF with a prevalence of 0.5, the probability of a difference in the frequency of the PF≥5% reaches 0.42 with 125/arm. Ignoring a strong PF (relative risk = 5) leads to underestimating the strength of a moderate treatment effect, and the underestimate is independent of n when n is >50/arm. Adjusting for such PF increases statistical power. If the PF is weak (RR = 2), adjustment makes little difference in statistical inference. Conditional on a 5% imbalance of a powerful PF, adjustment reduces the likelihood of large bias. If an absolute measure of imbalance ≥5% is deemed important, including 1000 patients/arm provides sufficient protection against such an imbalance. Two thousand patients/arm may provide an adequate control against large random deviations in treatment effect estimation in the presence of a powerful PF. Conclusions The probability of prognostic imbalance in small trials can be substantial. Covariate adjustment improves estimation accuracy and statistical power, and hence should be performed when strong PFs are observed. PMID:22629322
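
    A compact re-implementation sketch of objectives (1) and (2): simulate many two-arm trials, record how often the binary PF's frequency differs by at least 5% between arms, and compare adjusted versus unadjusted logistic estimates of the treatment effect. The parameter values are illustrative picks from the ranges described, not the paper's full simulation grid.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)

def simulate_trial(n_per_arm=125, pf_prev=0.5, base_risk=0.2, treat_or=0.5, pf_or=5.0):
    arm = np.repeat([0, 1], n_per_arm)                 # 0 = control, 1 = treatment
    pf = rng.binomial(1, pf_prev, size=arm.size)       # binary prognostic factor
    logit = np.log(base_risk / (1 - base_risk)) + np.log(treat_or) * arm + np.log(pf_or) * pf
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    return arm, pf, y

n_sims, imbalances, unadj, adj = 500, 0, [], []
for _ in range(n_sims):
    arm, pf, y = simulate_trial()
    imbalances += abs(pf[arm == 1].mean() - pf[arm == 0].mean()) >= 0.05
    X_u = sm.add_constant(arm.astype(float))
    X_a = sm.add_constant(np.column_stack([arm, pf]).astype(float))
    unadj.append(sm.Logit(y, X_u).fit(disp=0).params[1])   # treatment log-OR, unadjusted
    adj.append(sm.Logit(y, X_a).fit(disp=0).params[1])     # treatment log-OR, PF-adjusted

print(f"P(PF imbalance >= 5%) with 125/arm: {imbalances / n_sims:.2f}")
print(f"mean unadjusted log-OR {np.mean(unadj):.2f} vs adjusted {np.mean(adj):.2f} "
      f"(true conditional log-OR {np.log(0.5):.2f})")
```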

  1. Structured Ordinary Least Squares: A Sufficient Dimension Reduction approach for regressions with partitioned predictors and heterogeneous units.

    PubMed

    Liu, Yang; Chiaromonte, Francesca; Li, Bing

    2017-06-01

    In many scientific and engineering fields, advanced experimental and computing technologies are producing data that are not just high dimensional, but also internally structured. For instance, statistical units may have heterogeneous origins from distinct studies or subpopulations, and features may be naturally partitioned based on experimental platforms generating them, or on information available about their roles in a given phenomenon. In a regression analysis, exploiting this known structure in the predictor dimension reduction stage that precedes modeling can be an effective way to integrate diverse data. To pursue this, we propose a novel Sufficient Dimension Reduction (SDR) approach that we call structured Ordinary Least Squares (sOLS). This combines ideas from existing SDR literature to merge reductions performed within groups of samples and/or predictors. In particular, it leads to a version of OLS for grouped predictors that requires far less computation than recently proposed groupwise SDR procedures, and provides an informal yet effective variable selection tool in these settings. We demonstrate the performance of sOLS by simulation and present a first application to genomic data. The R package "sSDR," publicly available on CRAN, includes all procedures necessary to implement the sOLS approach. © 2016, The International Biometric Society.
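
    The building block of OLS-based SDR is that, under linearity conditions on the predictors, the population OLS vector Σ_X⁻¹ Cov(X, Y) lies in the central subspace. The sketch below estimates one such direction per predictor group and stacks the resulting linear combinations, which conveys the flavor of merging groupwise reductions; it is not the sOLS algorithm itself (that is implemented in the R package sSDR), and the data and group structure are invented.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic regression with two predictor groups (e.g. two experimental platforms).
n, p1, p2 = 500, 6, 4
X1 = rng.standard_normal((n, p1))
X2 = rng.standard_normal((n, p2))
y = (np.sin(X1 @ np.array([1.0, -1.0, 0.0, 0.0, 0.0, 0.5]))
     + 0.5 * (X2 @ np.array([0.0, 2.0, 0.0, 0.0]))
     + 0.1 * rng.standard_normal(n))

def ols_direction(X, y):
    """OLS-based SDR direction for one predictor group: Sigma_X^{-1} Cov(X, y)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    return np.linalg.solve(np.cov(Xc, rowvar=False), Xc.T @ yc / (len(y) - 1))

# Groupwise reduction: one linear combination per group replaces the raw predictors.
z1 = (X1 - X1.mean(axis=0)) @ ols_direction(X1, y)
z2 = (X2 - X2.mean(axis=0)) @ ols_direction(X2, y)
reduced = np.column_stack([z1, z2])          # low-dimensional inputs for downstream modelling
print("reduced predictors shape:", reduced.shape)
print("group-1 direction (should load mainly on X1 columns 1, 2 and 6):",
      ols_direction(X1, y).round(2))
```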

  2. A comparison of different densities of levobupivacaine solutions for unilateral spinal anaesthesia.

    PubMed

    Yağan, Özgür; Taş, Nilay; Küçük, Ahmet; Hancı, Volkan

    2016-01-01

    The aim of the study was to compare the block characteristics and clinical effects of dextrose added to levobupivacaine solutions at different concentrations to provide unilateral spinal anaesthesia in lower extremity surgery. This prospective, randomised, double-blind study comprised 75 ASA I-II risk patients for whom unilateral total knee arthroscopy was planned. The patients were assigned to three groups: in Group I, 60 mg dextrose was added to 7.5 mg of 0.5% levobupivacaine, in Group II, 80 mg and in Group III, 100 mg. Spinal anaesthesia was applied to the patient in the lateral decubitus position with the operated side below and the patient was kept in position for 10 min. The time for the sensorial block to reach the T12 level was longer in Group I than in Groups II and III (p<0.05, p<0.00). The time to full recovery of the sensorial block was 136 min in Group I, 154 min in Group II and 170 min in Group III. The differences were statistically significant (p<0.05). The mean duration of the motor block was 88 min in Group I, 105 min in Group II, and 139 min in Group III, and the differences were statistically significant (p<0.05). The time to urination in Group I was statistically significantly shorter than in the other groups (p<0.00). The results of the study showed that together with an increase in density, the sensory and motor block duration was lengthened. It can be concluded that a 30 mg mL⁻¹ concentration of dextrose added to 7.5 mg levobupivacaine is sufficient to provide unilateral spinal anaesthesia in day-case arthroscopic knee surgery. Copyright © 2014 Sociedade Brasileira de Anestesiologia. Published by Elsevier Editora Ltda. All rights reserved.

  3. [A comparison of different densities of levobupivacaine solutions for unilateral spinal anaesthesia].

    PubMed

    Yağan, Özgür; Taş, Nilay; Küçük, Ahmet; Hancı, Volkan

    2016-01-01

    The aim of the study was to compare the block characteristics and clinical effects of dextrose added to levobupivacaine solutions at different concentrations to provide unilateral spinal anaesthesia in lower extremity surgery. This prospective, randomised, double-blind study comprised 75 ASA I-II risk patients for whom unilateral total knee arthroscopy was planned. The patients were assigned to three groups: in Group I, 60 mg dextrose was added to 7.5 mg of 0.5% levobupivacaine, in Group II, 80 mg and in Group III, 100 mg. Spinal anaesthesia was applied to the patient in the lateral decubitus position with the operated side below and the patient was kept in position for 10 min. The time for the sensorial block to reach the T12 level was longer in Group I than in Groups II and III (p<0.05, p<0.00). The time to full recovery of the sensorial block was 136 min in Group I, 154 min in Group II and 170 min in Group III. The differences were statistically significant (p<0.05). The mean duration of the motor block was 88 min in Group I, 105 min in Group II, and 139 min in Group III, and the differences were statistically significant (p<0.05). The time to urination in Group I was statistically significantly shorter than in the other groups (p<0.00). The results of the study showed that together with an increase in density, the sensory and motor block duration was lengthened. It can be concluded that a 30 mg mL⁻¹ concentration of dextrose added to 7.5 mg levobupivacaine is sufficient to provide unilateral spinal anaesthesia in day-case arthroscopic knee surgery. Copyright © 2014 Sociedade Brasileira de Anestesiologia. Published by Elsevier Editora Ltda. All rights reserved.

  4. Analysis of video-recorded images to determine linear and angular dimensions in the growing horse.

    PubMed

    Hunt, W F; Thomas, V G; Stiefel, W

    1999-09-01

    Studies of growth and conformation require statistical methods that are not applicable to subjective conformation standards used by breeders and trainers. A new system was developed to provide an objective approach for both science and industry, based on analysis of video images to measure aspects of conformation that were represented by angles or lengths. A studio crush was developed in which video images of horses of different sizes were taken after bone protuberances, located by palpation, were marked with white paper stickers. Screen pixel coordinates of calibration marks, bone markers and points on horse outlines were digitised from captured images and corrected for aspect ratio and 'fish-eye' lens effects. Calculations from the corrected coordinates produced linear dimensions and angular dimensions useful for comparison of horses for conformation and experimental purposes. The precision achieved by the method in determining linear and angular dimensions was examined through systematically determining variance for isolated steps of the procedure. Angles of the front limbs viewed from in front were determined with a standard deviation of 2-5 degrees and effects of viewing angle were detectable statistically. The height of the rump and wither were determined with precision closely related to the limitations encountered in locating a point on a screen, which was greater for markers applied to the skin than for points at the edge of the image. Parameters determined from markers applied to the skin were, however, more variable (because their relation to bone position was affected by movement), but still provided a means by which a number of aspects of size and conformation can be determined objectively for many horses during growth. Sufficient precision was achieved to detect statistically relatively small effects on calculated parameters of camera height position.
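
    The core computation, turning digitized marker coordinates into linear and angular dimensions after scale calibration, reduces to a few lines; the coordinates and calibration below are invented, and the aspect-ratio and lens corrections described above are omitted from this sketch.

```python
import numpy as np

def length_cm(p, q, cm_per_pixel):
    """Linear dimension between two digitized markers."""
    return np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)) * cm_per_pixel

def joint_angle_deg(a, b, c):
    """Angle at marker b formed by markers a-b-c (e.g. a limb joint)."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Calibration: two marks a known 100 cm apart span 400 pixels in the image (invented).
cm_per_pixel = 100.0 / 400.0

# Invented pixel coordinates of three limb markers located by palpation.
shoulder, elbow, knee = (512, 310), (530, 480), (525, 655)
print(f"shoulder-elbow segment: {length_cm(shoulder, elbow, cm_per_pixel):.1f} cm")
print(f"elbow angle: {joint_angle_deg(shoulder, elbow, knee):.1f} deg")
```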

  5. Statistical approach for the retrieval of phytoplankton community structures from in situ fluorescence measurements.

    PubMed

    Wang, Shengqiang; Xiao, Cong; Ishizaka, Joji; Qiu, Zhongfeng; Sun, Deyong; Xu, Qian; Zhu, Yuanli; Huan, Yu; Watanabe, Yuji

    2016-10-17

    Knowledge of phytoplankton community structures is important to the understanding of various marine biogeochemical processes and ecosystem. Fluorescence excitation spectra (F(λ)) provide great potential for studying phytoplankton communities because their spectral variability depends on changes in the pigment compositions related to distinct phytoplankton groups. Commercial spectrofluorometers have been developed to analyze phytoplankton communities by measuring the field F(λ), but estimations using the default methods are not always accurate because of their strong dependence on norm spectra, which are obtained by culturing pure algae of a given group and are assumed to be constant. In this study, we proposed a novel approach for estimating the chlorophyll a (Chl a) fractions of brown algae, cyanobacteria, green algae and cryptophytes based on a data set collected in the East China Sea (ECS) and the Tsushima Strait (TS), with concurrent measurements of in vivo F(λ) and phytoplankton communities derived from pigments analysis. The new approach blends various statistical features by computing the band ratios and continuum-removed spectra of F(λ) without requiring a priori knowledge of the norm spectra. The model evaluations indicate that our approach yields good estimations of the Chl a fractions, with root-mean-square errors of 0.117, 0.078, 0.072 and 0.060 for brown algae, cyanobacteria, green algae and cryptophytes, respectively. The statistical analysis shows that the models are generally robust to uncertainty in F(λ). We recommend using a site-specific model for more accurate estimations. To develop a site-specific model in the ECS and TS, approximately 26 samples are sufficient for using our approach, but this conclusion needs to be validated in additional regions. Overall, our approach provides a useful technical basis for estimating phytoplankton communities from measurements of F(λ).
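
    A schematic sketch of the statistical approach: build band-ratio features from an excitation spectrum F(λ) and fit a multivariate linear model mapping them to the four groups' Chl a fractions. The wavelengths, feature ratios, and training data below are invented placeholders, not the paper's calibrated site-specific model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)

# Invented training set: excitation spectra sampled at a few bands (nm) and the
# corresponding Chl a fractions of four groups derived from pigment analysis.
bands = np.array([435, 470, 505, 535, 565, 590])
n_samples = 60
spectra = np.abs(rng.normal(1.0, 0.3, size=(n_samples, bands.size)))
fractions = rng.dirichlet(np.ones(4), size=n_samples)   # brown, cyano, green, crypto

def band_ratio_features(F):
    """Ratios of selected excitation bands (a stand-in for the paper's feature set)."""
    F = np.atleast_2d(F)
    return np.column_stack([F[:, 1] / F[:, 0],   # 470/435
                            F[:, 3] / F[:, 1],   # 535/470
                            F[:, 5] / F[:, 2]])  # 590/505

X = band_ratio_features(spectra)
model = LinearRegression().fit(X, fractions)     # multivariate output: four fractions

# Estimate fractions for a new spectrum and renormalize them to sum to one.
new_spectrum = np.abs(rng.normal(1.0, 0.3, size=bands.size))
pred = np.clip(model.predict(band_ratio_features(new_spectrum))[0], 0, None)
pred = pred / pred.sum()
print("estimated Chl a fractions (brown, cyano, green, crypto):", pred.round(2))
```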

  6. Fuel cell current collector

    DOEpatents

    Katz, Murray; Bonk, Stanley P.; Maricle, Donald L.; Abrams, Martin

    1991-01-01

    A fuel cell has a current collector plate (22) located between an electrode (20) and a separator plate (25). The collector plate has a plurality of arches (26, 28) deformed from a single flat plate in a checkerboard pattern. The arches are of sufficient height (30) to provide sufficient reactant flow area. Each arch is formed with sufficient stiffness to accept the compressive load and sufficient resilience to distribute the load and maintain electrical contact.

  7. Prevention of Surgical Fires: A Certification Course for Healthcare Providers.

    PubMed

    Fisher, Marquessa

    2015-08-01

    An estimated 550 to 650 surgical fires occur annually in the United States. Surgical fires may have severe consequences, including burns, disfigurement, long-term medical care, or death. This article introduces a potential certification program for the prevention of surgical fires. A pilot study was conducted with a convenience sample of 10 anesthesia providers who participated in the education module. The overall objective was to educate surgical team members and to prepare them to become certified in surgical fire prevention. On completion of the education module, participants completed the 50-question certification examination. The mean pretest score was 66%; none of the participants had enough correct responses (85%) to be considered competent in surgical fire prevention. The mean posttest score was 92.8%, with all participants answering at least 85% of questions correctly. A paired-samples t test showed a statistically significant increase in knowledge: t(df = 9) = 11.40; P = .001. Results of the pilot study indicate that this course can remediate gaps in knowledge of surgical fire prevention for providers. Their poor performance on the pretest suggests that many providers may not receive sufficient instruction in surgical fire prevention.

  8. Control of Laser Plasma Based Accelerators up to 1 GeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, Kei

    2007-12-01

    This dissertation documents the development of a broadband electron spectrometer (ESM) for GeV-class Laser Wakefield Accelerators (LWFA), the production of high quality GeV electron beams (e-beams) for the first time in a LWFA by using a capillary discharge guide (CDG), and a statistical analysis of CDG-LWFAs. An ESM specialized for CDG-LWFAs with an unprecedented wide momentum acceptance, from 0.01 to 1.1 GeV in a single shot, has been developed. Simultaneous measurement of e-beam spectra and output laser properties as well as a large angular acceptance (> ± 10 mrad) were realized by employing a slitless scheme. A scintillating screen (LANEX Fast Back, LANEX-FB)-camera system allowed faster than 1 Hz operation and evaluation of the spatial properties of e-beams. The design provided sufficient resolution for the whole range of the ESM (below 5% for beams with 2 mrad divergence). The calibration between light yield from LANEX-FB and total charge, and a study on the electron energy dependence (0.071 to 1.23 GeV) of LANEX-FB, were performed at the Advanced Light Source (ALS), Lawrence Berkeley National Laboratory (LBNL). Using this calibration data, the developed ESM provided a charge measurement as well. The production of high quality electron beams up to 1 GeV from a centimeter-scale accelerator was demonstrated. The experiment used a 310 μm diameter gas-filled capillary discharge waveguide that channeled relativistically intense laser pulses (42 TW, 4.5 x 10^18 W/cm^2) over 3.3 centimeters of sufficiently low density (≃ 4.3 x 10^18/cm^3) plasma. Also demonstrated was stable self-injection and acceleration at a beam energy of ≃ 0.5 GeV by using a 225 μm diameter capillary. Relativistically intense laser pulses (12 TW, 1.3 x 10^18 W/cm^2) were guided over 3.3 centimeters of low density (≃ 3.5 x 10^18/cm^3) plasma in this experiment. A statistical analysis of the CDG-LWFA performance was carried out. By taking advantage of the high repetition rate experimental system, several thousands of shots were taken over a broad range of the laser and plasma parameters. An analysis program was developed to sort and select the data by specified parameters, and then to evaluate performance statistically. The analysis suggested that the generation of GeV-level beams occurs in a highly unstable regime. By having the plasma density slightly above the threshold density for self-injection, (1) the longest dephasing length possible was provided, which led to the generation of high energy e-beams, and (2) the number of electrons injected into the wakefield was kept small, which led to the generation of high quality (low energy spread) e-beams by minimizing the beam loading effect on the wake. The analysis of the stable half-GeV beam regime showed the requirements for stable self-injection and acceleration. A small change of discharge delay t_dsc and input energy E_in significantly affected performance. The statistical analysis provided information for future optimization, and suggested possible schemes for improvement of the stability and higher quality beam generation. A CDG-LWFA is envisioned as a building block for the next-generation accelerator, enabling significant cost and size reductions.

  9. 24 CFR 572.110 - Identifying and selecting eligible families for homeownership.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... otherwise qualified eligible families who have completed participation in one of the following economic self-sufficiency programs: Project Self-Sufficiency, Operation Bootstrap, Family Self-Sufficiency, JOBS, and any... for the disclosure and verification of social security numbers, as provided by part 5, subpart B, of...

  10. Effects of different centrifugation conditions on clinical chemistry and Immunology test results.

    PubMed

    Minder, Elisabeth I; Schibli, Adrian; Mahrer, Dagmar; Nesic, Predrag; Plüer, Kathrin

    2011-05-10

    The effect of centrifugation time of heparinized blood samples on clinical chemistry and immunology results has rarely been studied. The WHO guideline proposed a 15 min centrifugation time without citing any scientific publications. The centrifugation time has a considerable impact on the turn-around time. We investigated 74 parameters in samples from 44 patients on a Roche Cobas 6000 system, to see whether there was a statistically significant difference in the test results among specimens centrifuged at 2180 g for 15 min, at 2180 g for 10 min or at 1870 g for 7 min, respectively. Two tubes with different plasma separators (both Greiner Bio-One) were used for each centrifugation condition. Statistical comparisons were made by Deming fit. Tubes with different separators showed identical results in all parameters. Likewise, excellent correlations were found among tubes to which different centrifugation conditions were applied. Fifty percent of the slopes lay between 0.99 and 1.01. Only 3.6 percent of the statistical test results fell outside the significance level of p < 0.05, which was less than the expected 5%. This suggests that the outliers are the result of random variation and the large number of statistical tests performed. Further, we found that our data are sufficient not to miss a biased test (beta error) with a probability of 0.10 to 0.05 in most parameters. A centrifugation time of either 7 or 10 min provided identical test results compared to the time of 15 min as proposed by WHO under the conditions used in our study.
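    For readers unfamiliar with the Deming fit used for the method comparisons above, a minimal sketch is given below, assuming equal error variances in both measurement conditions (error-variance ratio of 1); the analyte values and noise levels are simulated placeholders, not study data.

    import numpy as np

    def deming_fit(x, y):
        """Deming regression assuming equal error variance in both methods (lambda = 1)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        sxx = np.mean((x - mx) ** 2)
        syy = np.mean((y - my) ** 2)
        sxy = np.mean((x - mx) * (y - my))
        slope = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
        intercept = my - slope * mx
        return slope, intercept

    # Illustrative comparison: the same analyte measured after 15 min vs. 7 min centrifugation.
    rng = np.random.default_rng(0)
    truth = rng.uniform(1, 10, 44)
    x_15min = truth + rng.normal(0, 0.05, truth.size)
    y_7min = truth + rng.normal(0, 0.05, truth.size)
    slope, intercept = deming_fit(x_15min, y_7min)   # a slope close to 1 indicates agreement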

  11. Effects of different centrifugation conditions on clinical chemistry and Immunology test results

    PubMed Central

    2011-01-01

    Background The effect of centrifugation time of heparinized blood samples on clinical chemistry and immunology results has rarely been studied. The WHO guideline proposed a 15 min centrifugation time without citing any scientific publications. The centrifugation time has a considerable impact on the turn-around time. Methods We investigated 74 parameters in samples from 44 patients on a Roche Cobas 6000 system, to see whether there was a statistically significant difference in the test results among specimens centrifuged at 2180 g for 15 min, at 2180 g for 10 min or at 1870 g for 7 min, respectively. Two tubes with different plasma separators (both Greiner Bio-One) were used for each centrifugation condition. Statistical comparisons were made by Deming fit. Results Tubes with different separators showed identical results in all parameters. Likewise, excellent correlations were found among tubes to which different centrifugation conditions were applied. Fifty percent of the slopes lay between 0.99 and 1.01. Only 3.6 percent of the statistical test results fell outside the significance level of p < 0.05, which was less than the expected 5%. This suggests that the outliers are the result of random variation and the large number of statistical tests performed. Further, we found that our data are sufficient not to miss a biased test (beta error) with a probability of 0.10 to 0.05 in most parameters. Conclusion A centrifugation time of either 7 or 10 min provided identical test results compared to the time of 15 min as proposed by WHO under the conditions used in our study. PMID:21569233

  12. Does reviewing lead to better learning and decision making? Answers from a randomized stock market experiment.

    PubMed

    Wessa, Patrick; Holliday, Ian E

    2012-01-01

    The literature is not univocal about the effects of Peer Review (PR) within the context of constructivist learning. Due to the predominant focus on using PR as an assessment tool, rather than a constructivist learning activity, and because most studies implicitly assume that the benefits of PR are limited to the reviewee, little is known about the effects upon students who are required to review their peers. Much of the theoretical debate in the literature is focused on explaining how and why constructivist learning is beneficial. At the same time these discussions are marked by an underlying presupposition of a causal relationship between reviewing and deep learning. The purpose of the study is to investigate whether the writing of PR feedback causes students to benefit in terms of: perceived utility about statistics, actual use of statistics, better understanding of statistical concepts and associated methods, changed attitudes towards market risks, and outcomes of decisions that were made. We conducted a randomized experiment, assigning students randomly to receive PR or non-PR treatments, and used two cohorts with a different time span. The paper discusses the experimental design and all the software components that we used to support the learning process: Reproducible Computing technology, which allows students to reproduce or re-use statistical results from peers, Collaborative PR, and an AI-enhanced Stock Market Engine. The results establish that the writing of PR feedback messages causes students to experience benefits in terms of Behavior, Non-Rote Learning, and Attitudes, provided the sequence of PR activities is maintained for a period that is sufficiently long.

  13. Implicit Language Learning: Adults' Ability to Segment Words in Norwegian

    ERIC Educational Resources Information Center

    Kittleson, Megan M.; Aguilar, Jessica M.; Tokerud, Gry Line; Plante, Elena; Asbjornsen, Arve E.

    2010-01-01

    Previous language learning research reveals that the statistical properties of the input offer sufficient information to allow listeners to segment words from fluent speech in an artificial language. The current pair of studies uses a natural language to test the ecological validity of these findings and to determine whether a listener's language…

  14. Extreme Vertical Gusts in the Atmospheric Boundary Layer

    DTIC Science & Technology

    2015-07-01

    significant effect on the statistics of the rare, extreme gusts. In the lowest 5,000 ft, boundary layer effects make small to moderate vertical… 2.4 Effects of Gust Shape… Definitions: Adiabatic Lapse Rate, the rate of change of temperature with altitude that would occur if a parcel of air was transported sufficiently…

  15. Lod score curves for phase-unknown matings.

    PubMed

    Hulbert-Shearon, T; Boehnke, M; Lange, K

    1996-01-01

    For a phase-unknown nuclear family, we show that the likelihood and lod score are unimodal, and we describe conditions under which the maximum occurs at recombination fraction theta = 0, theta = 1/2, and 0 < theta < 1/2. These simply stated necessary and sufficient conditions seem to have escaped the notice of previous statistical geneticists.

  16. 76 FR 17107 - Fisheries of the Exclusive Economic Zone Off Alaska; Application for an Exempted Fishing Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-28

    ... experimental design requires this quantity of salmon to ensure statistically valid results. The applicant also... encounters sufficient concentrations of salmon and pollock for meeting the experimental design. Groundfish... of the groundfish harvested is expected to be pollock. The experimental design requires this quantity...

  17. Multimedia Presentations in Educational Measurement and Statistics: Design Considerations and Instructional Approaches

    ERIC Educational Resources Information Center

    Sklar, Jeffrey C.; Zwick, Rebecca

    2009-01-01

    Proper interpretation of standardized test scores is a crucial skill for K-12 teachers and school personnel; however, many do not have sufficient knowledge of measurement concepts to appropriately interpret and communicate test results. In a recent four-year project funded by the National Science Foundation, three web-based instructional…

  18. Bootstrapping in a Language of Thought: A Formal Model of Numerical Concept Learning

    ERIC Educational Resources Information Center

    Piantadosi, Steven T.; Tenenbaum, Joshua B.; Goodman, Noah D.

    2012-01-01

    In acquiring number words, children exhibit a qualitative leap in which they transition from understanding a few number words, to possessing a rich system of interrelated numerical concepts. We present a computational framework for understanding this inductive leap as the consequence of statistical inference over a sufficiently powerful…

  19. Housing Survey. Campus Housing: Finding the Balance

    ERIC Educational Resources Information Center

    O'Connor, Shannon

    2016-01-01

    Depending on where you look for statistics, the number of students enrolling in colleges or universities is increasing, decreasing or remaining about the same. Regardless of those trends, campus housing is a marketing tool for institutions looking to draw students to and keep them on campus. Schools need to offer sufficient beds and…

  20. Telehealth Consultation in a Self-Contained Classroom for Behavior: A Pilot Study

    ERIC Educational Resources Information Center

    Knowles, Christen; Massar, Michelle; Raulston, Tracy Jane; Machalicek, Wendy

    2017-01-01

    Students with challenging behavior severe enough to warrant placement in a self-contained special education classroom statistically have poor school and post-school outcomes compared to typical peers. Teachers in these classrooms often lack sufficient training to meet student needs. This pilot study investigated the use of a telehealth…

  1. The Comic Book Project: Forging Alternative Pathways to Literacy

    ERIC Educational Resources Information Center

    Bitz, Michael

    2004-01-01

    Many deep-rooted problems in urban areas of the United States--including crime, poverty, and poor health--correlate with illiteracy. The statistics reported by organizations such as the National Alliance for Urban Literacy Coalitions are telling. Urban citizens who cannot read sufficiently are at a clear disadvantage in life. They are more likely…

  2. Chemical-agnostic hazard prediction: statistical inference of in vitro toxicity pathways from proteomics responses to chemical mixtures

    EPA Science Inventory

    Toxicity pathways have been defined as normal cellular pathways that, when sufficiently perturbed as a consequence of chemical exposure, lead to an adverse outcome. If an exposure alters one or more normal biological pathways to an extent that leads to an adverse toxicity outcome...

  3. The Importance of Physical Fitness versus Physical Activity for Coronary Artery Disease Risk Factors: A Cross-Sectional Analysis.

    ERIC Educational Resources Information Center

    Young, Deborah Rohm; Steinhardt, Mary A.

    1993-01-01

    This cross-sectional study examined relationships among physical fitness, physical activity, and risk factors for coronary artery disease (CAD) in male police officers. Data from screenings and physical fitness assessments indicated physical activity must be sufficient to influence fitness before obtaining statistically significant risk-reducing…

  4. Attention-Deficit/Hyperactivity Disorder Symptoms in Preschool Children: Examining Psychometric Properties Using Item Response Theory

    ERIC Educational Resources Information Center

    Purpura, David J.; Wilson, Shauna B.; Lonigan, Christopher J.

    2010-01-01

    Clear and empirically supported diagnostic symptoms are important for proper diagnosis and treatment of psychological disorders. Unfortunately, the symptoms of many disorders presented in the "Diagnostic and Statistical Manual of Mental Disorders" (4th ed., text rev.; DSM-IV-TR; American Psychiatric Association, 2000) lack sufficient psychometric…

  5. Seventy Years of the EPR Paradox

    NASA Astrophysics Data System (ADS)

    Kupczynski, Marian

    2006-11-01

    In spite of the fact that statistical predictions of quantum theory (QT) can only be tested if a large amount of data is available, a claim has been made that QT provides the most complete description of an individual physical system. Einstein's opposition to this claim and the paradox he presented in the article written together with Podolsky and Rosen in 1935 inspired generations of physicists in their quest for a better understanding of QT. Seventy years after the EPR article it is clear that without a deep understanding of the character and limitations of QT one may not hope to find a meaningful unified theory of all physical interactions, manipulate qubits or construct a quantum computer. In this paper we briefly present the EPR paper, the discussion that followed it, and Bell inequalities (BI). To avoid various paradoxes we advocate a purely statistical contextual interpretation (PSC) of QT. According to PSC, a state vector is not an attribute of a single electron, photon, trapped ion or quantum dot. A value of an observable assigned to a physical system has meaning only in the context of a particular physical experiment. PSC does not provide any mental space-time picture of sub-phenomena. The EPR paradox is avoided because the reduction of the state vector in the measurement process is a passage from a description of the whole ensemble of the experimental results to a particular sub-ensemble of these results. We show that the violation of BI is neither a proof of the completeness of QT nor of its non-locality. Therefore we rephrase the EPR question and ask whether QT is "predictably" complete or, in other words, whether it provides the complete description of experimental data. To test the "predictable completeness" it is not necessary to perform additional experiments; it is sufficient to analyze the existing experimental data in more detail by using various non-parametric purity tests and other specific statistical tools invented to study the fine structure of the time-series.

  6. Assessment of credit risk based on fuzzy relations

    NASA Astrophysics Data System (ADS)

    Tsabadze, Teimuraz

    2017-06-01

    The purpose of this paper is to develop a new approach for the assessment of the credit risk of corporate borrowers. There are different models for borrowers' risk assessment. These models are divided into two groups: statistical and theoretical. When assessing the credit risk of corporate borrowers, a statistical model is unacceptable due to the lack of a sufficiently large history of defaults. At the same time, we cannot use some theoretical models due to the lack of a stock exchange. In those cases, when studying a particular borrower for which a statistical base does not exist, the decision-making process is always of an expert nature. The paper describes a new approach that may be used in group decision-making. An example of the application of the proposed approach is given.

  7. Identifying sources of fugitive emissions in industrial facilities using trajectory statistical methods

    NASA Astrophysics Data System (ADS)

    Brereton, Carol A.; Johnson, Matthew R.

    2012-05-01

    Fugitive pollutant sources from the oil and gas industry are typically quite difficult to find within industrial plants and refineries, yet they are a significant contributor of global greenhouse gas emissions. A novel approach for locating fugitive emission sources using computationally efficient trajectory statistical methods (TSM) has been investigated in detailed proof-of-concept simulations. Four TSMs were examined in a variety of source emissions scenarios developed using transient CFD simulations on the simplified geometry of an actual gas plant: potential source contribution function (PSCF), concentration weighted trajectory (CWT), residence time weighted concentration (RTWC), and quantitative transport bias analysis (QTBA). Quantitative comparisons were made using a correlation measure based on search area from the source(s). PSCF, CWT and RTWC could all distinguish areas near major sources from the surroundings. QTBA successfully located sources in only some cases, even when provided with a large data set. RTWC, given sufficient domain trajectory coverage, distinguished source areas best, but otherwise could produce false source predictions. Using RTWC in conjunction with CWT could overcome this issue as well as reduce sensitivity to noise in the data. The results demonstrate that TSMs are a promising approach for identifying fugitive emissions sources within complex facility geometries.
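    As a schematic of the simplest of the trajectory statistical methods named above, the potential source contribution function can be written as PSCF_ij = m_ij / n_ij, the fraction of trajectory endpoints in grid cell (i, j) associated with above-threshold receptor concentrations. The numpy sketch below uses an invented grid, threshold, and random endpoints purely for illustration; it is not the authors' implementation.

    import numpy as np

    def pscf(endpoints_xy, concentrations, threshold, grid_shape=(50, 50), extent=(0, 100, 0, 100)):
        """PSCF_ij = m_ij / n_ij: above-threshold endpoint count over total endpoint count per cell."""
        x, y = endpoints_xy[:, 0], endpoints_xy[:, 1]
        x0, x1, y0, y1 = extent
        bins = [np.linspace(x0, x1, grid_shape[0] + 1), np.linspace(y0, y1, grid_shape[1] + 1)]
        n_ij, _, _ = np.histogram2d(x, y, bins=bins)                 # all endpoints
        high = concentrations > threshold
        m_ij, _, _ = np.histogram2d(x[high], y[high], bins=bins)     # above-threshold endpoints
        with np.errstate(invalid="ignore", divide="ignore"):
            return np.where(n_ij > 0, m_ij / n_ij, 0.0)

    # Illustrative use with random trajectory endpoints tagged by receptor concentration.
    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 100, size=(5000, 2))
    conc = rng.lognormal(mean=0.0, sigma=1.0, size=5000)
    source_map = pscf(pts, conc, threshold=np.percentile(conc, 75))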

  8. Generalized energy measurements and modified transient quantum fluctuation theorems

    NASA Astrophysics Data System (ADS)

    Watanabe, Gentaro; Venkatesh, B. Prasanna; Talkner, Peter

    2014-05-01

    Determining the work which is supplied to a system by an external agent provides a crucial step in any experimental realization of transient fluctuation relations. This, however, poses a problem for quantum systems, where the standard procedure requires the projective measurement of energy at the beginning and the end of the protocol. Unfortunately, projective measurements, which are preferable from the point of view of theory, seem to be difficult to implement experimentally. We demonstrate that, when using a particular type of generalized energy measurements, the resulting work statistics is simply related to that of projective measurements. This relation between the two work statistics entails the existence of modified transient fluctuation relations. The modifications are exclusively determined by the errors incurred in the generalized energy measurements. They are universal in the sense that they do not depend on the force protocol. Particularly simple expressions for the modified Crooks relation and Jarzynski equality are found for Gaussian energy measurements. These can be obtained by a sequence of sufficiently many generalized measurements which need not be Gaussian. In accordance with the central limit theorem, this leads to an effective error reduction in the individual measurements and even yields a projective measurement in the limit of infinite repetitions.
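    For reference, the unmodified transient fluctuation relations that this work generalizes are, in standard notation (with β the inverse temperature, ΔF the free-energy difference, and P_F, P_B the forward and backward work distributions),

    \langle e^{-\beta W} \rangle = e^{-\beta \Delta F} \quad \text{(Jarzynski)}, \qquad \frac{P_F(W)}{P_B(-W)} = e^{\beta (W - \Delta F)} \quad \text{(Crooks)}.

    The modified relations described above multiply these expressions by factors determined by the errors of the generalized energy measurements; their exact form is given in the paper.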

  9. Radiative PQ breaking and the Higgs boson mass

    NASA Astrophysics Data System (ADS)

    D'Eramo, Francesco; Hall, Lawrence J.; Pappadopulo, Duccio

    2015-06-01

    The small and negative value of the Standard Model Higgs quartic coupling at high scales can be understood in terms of anthropic selection on a landscape where large and negative values are favored: most universes have a very short-lived electroweak vacuum and typical observers are in universes close to the corresponding metastability boundary. We provide a simple example of such a landscape with a Peccei-Quinn symmetry breaking scale generated through dimensional transmutation and supersymmetry softly broken at an intermediate scale. Large and negative contributions to the Higgs quartic are typically generated on integrating out the saxion field. Cancellations among these contributions are forced by the anthropic requirement of a sufficiently long-lived electroweak vacuum, determining the multiverse distribution for the Higgs quartic in a similar way to that of the cosmological constant. This leads to a statistical prediction of the Higgs boson mass that, for a wide range of parameters, yields the observed value within the 1σ statistical uncertainty of ˜ 5 GeV originating from the multiverse distribution. The strong CP problem is solved and single-component axion dark matter is predicted, with an abundance that can be understood from environmental selection. A more general setting for the Higgs mass prediction is discussed.

  10. Prognostic value of heart rate turbulence for risk assessment in patients with unstable angina and non-ST elevation myocardial infarction

    PubMed Central

    Harris, Patricia RE; Stein, Phyllis K; Fung, Gordon L; Drew, Barbara J

    2013-01-01

    Background We sought to examine the prognostic value of heart rate turbulence derived from electrocardiographic recordings initiated in the emergency department for patients with non-ST elevation myocardial infarction (NSTEMI) or unstable angina. Methods Twenty-four-hour Holter recordings were started in patients with cardiac symptoms approximately 45 minutes after arrival in the emergency department. Patients subsequently diagnosed with NSTEMI or unstable angina who had recordings with ≥18 hours of sinus rhythm and sufficient data to compute Thrombolysis In Myocardial Infarction (TIMI) risk scores were chosen for analysis (n = 166). Endpoints were emergent re-entry to the cardiac emergency department and/or death at 30 days and one year. Results In Cox regression models, heart rate turbulence and TIMI risk scores together were significant predictors of 30-day (model chi square 13.200, P = 0.001, C-statistic 0.725) and one-year (model chi square 31.160, P < 0.001, C-statistic 0.695) endpoints, outperforming either measure alone. Conclusion Measurement of heart rate turbulence, initiated upon arrival at the emergency department, may provide additional incremental value in the risk assessment for patients with NSTEMI or unstable angina. PMID:23976860
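    A minimal sketch of this type of survival analysis, assuming the Python lifelines package and simulated data with hypothetical column names (not the study's data), is shown below.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 200
    hrt_abnormal = rng.integers(0, 2, n)          # hypothetical HRT category (0 = normal, 1 = abnormal)
    timi_score = rng.integers(0, 8, n)            # hypothetical TIMI risk score
    # Simulate event times whose hazard increases with both covariates (illustrative only).
    baseline = rng.exponential(365, n)
    time = baseline / np.exp(0.5 * hrt_abnormal + 0.2 * timi_score)
    event = (time < 365).astype(int)              # censor at one year
    df = pd.DataFrame({"time": np.minimum(time, 365), "event": event,
                       "hrt_abnormal": hrt_abnormal, "timi_score": timi_score})

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    print(cph.summary)               # hazard ratios for HRT and TIMI
    print(cph.concordance_index_)    # Harrell's C-statistic, analogous to the C-statistics reported above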

  11. Towards Validation of an Adaptive Flight Control Simulation Using Statistical Emulation

    NASA Technical Reports Server (NTRS)

    He, Yuning; Lee, Herbert K. H.; Davies, Misty D.

    2012-01-01

    Traditional validation of flight control systems is based primarily upon empirical testing. Empirical testing is sufficient for simple systems in which a.) the behavior is approximately linear and b.) humans are in-the-loop and responsible for off-nominal flight regimes. A different possible concept of operation is to use adaptive flight control systems with online learning neural networks (OLNNs) in combination with a human pilot for off-nominal flight behavior (such as when a plane has been damaged). Validating these systems is difficult because the controller is changing during the flight in a nonlinear way, and because the pilot and the control system have the potential to co-adapt in adverse ways; traditional empirical methods are unlikely to provide any guarantees in this case. Additionally, the time it takes to find unsafe regions within the flight envelope using empirical testing means that the time between adaptive controller design iterations is large. This paper describes a new concept for validating adaptive control systems using methods based on Bayesian statistics. This validation framework allows the analyst to build nonlinear models with modal behavior, and to have an uncertainty estimate for the difference between the behaviors of the model and system under test.

  12. Systematic Search for Rings around Kepler Planet Candidates: Constraints on Ring Size and Occurrence Rate

    NASA Astrophysics Data System (ADS)

    Aizawa, Masataka; Masuda, Kento; Kawahara, Hajime; Suto, Yasushi

    2018-05-01

    We perform a systematic search for rings around 168 Kepler planet candidates with sufficient signal-to-noise ratios that are selected from all of the short-cadence data. We fit ringed and ringless models to their light curves and compare the fitting results to search for the signatures of planetary rings. First, we identify 29 tentative systems, for which the ringed models exhibit statistically significant improvement over the ringless models. The light curves of those systems are individually examined, but we are not able to identify any candidate that indicates evidence for rings. In turn, we find several mechanisms of false positives that would produce ringlike signals, and the null detection enables us to place upper limits on the size of the rings. Furthermore, assuming the tidal alignment between axes of the planetary rings and orbits, we conclude that the occurrence rate of rings larger than twice the planetary radius is less than 15%. Even though the majority of our targets are short-period planets, our null detection provides statistical and quantitative constraints on largely uncertain theoretical models of the origin, formation, and evolution of planetary rings.

  13. Human metabolic profiles are stably controlled by genetic and environmental variation

    PubMed Central

    Nicholson, George; Rantalainen, Mattias; Maher, Anthony D; Li, Jia V; Malmodin, Daniel; Ahmadi, Kourosh R; Faber, Johan H; Hallgrímsdóttir, Ingileif B; Barrett, Amy; Toft, Henrik; Krestyaninova, Maria; Viksna, Juris; Neogi, Sudeshna Guha; Dumas, Marc-Emmanuel; Sarkans, Ugis; The MolPAGE Consortium; Silverman, Bernard W; Donnelly, Peter; Nicholson, Jeremy K; Allen, Maxine; Zondervan, Krina T; Lindon, John C; Spector, Tim D; McCarthy, Mark I; Holmes, Elaine; Baunsgaard, Dorrit; Holmes, Chris C

    2011-01-01

    1H Nuclear Magnetic Resonance spectroscopy (1H NMR) is increasingly used to measure metabolite concentrations in sets of biological samples for top-down systems biology and molecular epidemiology. For such purposes, knowledge of the sources of human variation in metabolite concentrations is valuable, but currently sparse. We conducted and analysed a study to create such a resource. In our unique design, identical and non-identical twin pairs donated plasma and urine samples longitudinally. We acquired 1H NMR spectra on the samples, and statistically decomposed variation in metabolite concentration into familial (genetic and common-environmental), individual-environmental, and longitudinally unstable components. We estimate that stable variation, comprising familial and individual-environmental factors, accounts on average for 60% (plasma) and 47% (urine) of biological variation in 1H NMR-detectable metabolite concentrations. Clinically predictive metabolic variation is likely nested within this stable component, so our results have implications for the effective design of biomarker-discovery studies. We provide a power-calculation method which reveals that sample sizes of a few thousand should offer sufficient statistical precision to detect 1H NMR-based biomarkers quantifying predisposition to disease. PMID:21878913
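    The paper's power-calculation method is specific to its variance decomposition; purely as a generic illustration of the sample-size reasoning involved, a sketch using statsmodels with invented effect-size, alpha, and power values is:

    from statsmodels.stats.power import TTestIndPower

    # How many subjects per group are needed to detect a small standardized effect
    # (Cohen's d = 0.15) at alpha = 0.05 with 80% power? Values are illustrative only.
    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.15, alpha=0.05, power=0.8)
    print(round(n_per_group))   # on the order of several hundred subjects per group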

  14. Distribution of guidance models for cardiac resynchronization therapy in the setting of multi-center clinical trials

    NASA Astrophysics Data System (ADS)

    Rajchl, Martin; Abhari, Kamyar; Stirrat, John; Ukwatta, Eranga; Cantor, Diego; Li, Feng P.; Peters, Terry M.; White, James A.

    2014-03-01

    Multi-center trials provide the unique ability to investigate novel techniques across a range of geographical sites with sufficient statistical power, the inclusion of multiple operators determining feasibility under a wider array of clinical environments and work-flows. For this purpose, we introduce a new means of distributing pre-procedural cardiac models for image-guided interventions across a large scale multi-center trial. In this method, a single core facility is responsible for image processing, employing a novel web-based interface for model visualization and distribution. The requirements for such an interface, being WebGL-based, are minimal and well within the realms of accessibility for participating centers. We then demonstrate the accuracy of our approach using a single-center pacemaker lead implantation trial with generic planning models.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jewitt, David, E-mail: jewitt@ucla.edu

    Asteroids near the Sun can attain equilibrium temperatures sufficient to induce surface modification from thermal fracture, desiccation, and decomposition of hydrated silicates. We present optical observations of nine asteroids with perihelia <0.25 AU (sub-solar temperatures ≥800 K) taken to search for evidence of thermal modification. We find that the broadband colors of these objects are diverse but statistically indistinguishable from those of planet-crossing asteroids having perihelia near 1 AU. Furthermore, images of these bodies taken away from perihelion show no evidence for on-going mass-loss (model-dependent limits ≲1 kg s^-1) that might result from thermal disintegration of the surface. We conclude that, while thermal modification may be an important process in the decay of near-Sun asteroids and in the production of debris, our new data provide no evidence for it.

  16. Bootstrapping in a language of thought: a formal model of numerical concept learning.

    PubMed

    Piantadosi, Steven T; Tenenbaum, Joshua B; Goodman, Noah D

    2012-05-01

    In acquiring number words, children exhibit a qualitative leap in which they transition from understanding a few number words, to possessing a rich system of interrelated numerical concepts. We present a computational framework for understanding this inductive leap as the consequence of statistical inference over a sufficiently powerful representational system. We provide an implemented model that is powerful enough to learn number word meanings and other related conceptual systems from naturalistic data. The model shows that bootstrapping can be made computationally and philosophically well-founded as a theory of number learning. Our approach demonstrates how learners may combine core cognitive operations to build sophisticated representations during the course of development, and how this process explains observed developmental patterns in number word learning. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Metaresearch for Evaluating Reproducibility in Ecology and Evolution

    PubMed Central

    Fidler, Fiona; Chee, Yung En; Wintle, Bonnie C.; Burgman, Mark A.; McCarthy, Michael A.; Gordon, Ascelin

    2017-01-01

    Abstract Recent replication projects in other disciplines have uncovered disturbingly low levels of reproducibility, suggesting that those research literatures may contain unverifiable claims. The conditions contributing to irreproducibility in other disciplines are also present in ecology. These include a large discrepancy between the proportion of “positive” or “significant” results and the average statistical power of empirical research, incomplete reporting of sampling stopping rules and results, journal policies that discourage replication studies, and a prevailing publish-or-perish research culture that encourages questionable research practices. We argue that these conditions constitute sufficient reason to systematically evaluate the reproducibility of the evidence base in ecology and evolution. In some cases, the direct replication of ecological research is difficult because of strong temporal and spatial dependencies, so here, we propose metaresearch projects that will provide proxy measures of reproducibility. PMID:28596617

  18. Comparative investigation of the effectiveness of face-to-face verbal training and educational pamphlets on readiness of patients before undergoing non-emergency surgeries

    PubMed Central

    Noorian, Cobra; Aein, Fereshteh

    2015-01-01

    Background: The thought of having a surgery can be stressful for everyone. Providing the necessary information to the patient can help both the patient and the treatment team. This study was conducted to compare the effectiveness of face-to-face verbal training and educational pamphlets on the readiness of patients for undergoing non-emergency surgeries. Materials and Methods: The study was a before–after randomized clinical trial. 90 patients scheduled to undergo non-emergency surgery who were referred to Shahrekord Ayatollah Kashani Hospital in 2013 were distributed randomly and gradually into two experimental groups (group of face-to-face verbal training and group of educational pamphlet) and one control group. The dependent variable of the study was pre-surgery readiness. Data analysis was carried out by using SPSS statistical software. Statistical analyses included analysis of variance (ANOVA) and correlation tests. Results: Results showed that the mean scores of pre-surgery readiness in both interventional groups were significantly higher than that in the control group after the intervention (P < 0.05). However, there was no significant difference between the two experimental groups (P > 0.05). Conclusions: Each of the methods of face-to-face verbal education and using the pamphlet could be equally effective in improving the readiness of the patients undergoing surgery. Therefore, in environments where the health care providers are facing the pressure of work and lack of sufficient time for face-to-face verbal training, suitable educational pamphlets can be used to provide the necessary information to patients and prepare them for surgery. PMID:26097859

  19. Comparative investigation of the effectiveness of face-to-face verbal training and educational pamphlets on readiness of patients before undergoing non-emergency surgeries.

    PubMed

    Noorian, Cobra; Aein, Fereshteh

    2015-01-01

    The thought of having a surgery can be stressful for everyone. Providing the necessary information to the patient can help both the patient and the treatment team. This study was conducted to compare the effectiveness of face-to-face verbal training and educational pamphlets on the readiness of patients for undergoing non-emergency surgeries. The study was a before-after randomized clinical trial. 90 patients scheduled to undergo non-emergency surgery who were referred to Shahrekord Ayatollah Kashani Hospital in 2013 were distributed randomly and gradually into two experimental groups (group of face-to-face verbal training and group of educational pamphlet) and one control group. The dependent variable of the study was pre-surgery readiness. Data analysis was carried out by using SPSS statistical software. Statistical analyses included analysis of variance (ANOVA) and correlation tests. Results showed that the mean scores of pre-surgery readiness in both interventional groups were significantly higher than that in the control group after the intervention (P < 0.05). However, there was no significant difference between the two experimental groups (P > 0.05). Each of the methods of face-to-face verbal education and using the pamphlet could be equally effective in improving the readiness of the patients undergoing surgery. Therefore, in environments where the health care providers are facing the pressure of work and lack of sufficient time for face-to-face verbal training, suitable educational pamphlets can be used to provide the necessary information to patients and prepare them for surgery.

  20. Noninstitutional births and newborn care practices among adolescent mothers in Bangladesh.

    PubMed

    Rahman, Mosiur; Haque, Syed Emdadul; Zahan, Sarwar; Islam, Ohidul

    2011-01-01

    To describe home-based newborn care practices among adolescent mothers in Bangladesh and to identify sociodemographic, antenatal care (ANC), and delivery care factors associated with these practices. The 2007 Bangladesh Demographic Health Survey, conducted from March 24 to August 11, 2007. Selected urban and rural areas of Bangladesh. A total of 580 ever-married adolescent women (aged 15-19 years) with noninstitutional births and at least one child younger than 3 years of age. Outcomes included complete cord care, complete thermal protection, initiation of early breastfeeding, and postnatal care within 24 hours of birth. Descriptive statistics and multivariate logistic regression methods were employed in analyzing the data. Only 42.8% and 5.1% of newborns received complete cord care and complete thermal protection, respectively. Only 44.6% of newborns were breastfed within 1 hour of birth. The proportion of newborns that received postnatal care within 24 hours of birth was 9%, and of them 11% received care from medically trained providers (MTP). A higher level of maternal education and the richest wealth bands were associated with complete thermal care and postnatal care within 24 hours of birth but not with complete cord care and early breastfeeding. Use of sufficient ANC and assisted births by MTP were significantly associated with several of the newborn care practices. The association between newborn care practices of the adolescent mothers and sufficient ANC and skilled birth attendance suggests that expanding skilled birth attendance and providing ANC may be an effective strategy to promote essential and preventive newborn care. © 2011 AWHONN, the Association of Women's Health, Obstetric and Neonatal Nurses.

  1. Solution to the Problem of Calibration of Low-Cost Air Quality Measurement Sensors in Networks.

    PubMed

    Miskell, Georgia; Salmond, Jennifer A; Williams, David E

    2018-04-27

    We provide a simple, remote, continuous calibration technique suitable for application in a hierarchical network featuring a few well-maintained, high-quality instruments ("proxies") and a larger number of low-cost devices. The ideas are grounded in a clear definition of the purpose of a low-cost network, defined here as providing reliable information on air quality at small spatiotemporal scales. The technique assumes linearity of the sensor signal. It derives running slope and offset estimates by matching mean and standard deviations of the sensor data to values derived from proxies over the same time. The idea is extremely simple: choose an appropriate proxy and an averaging-time that is sufficiently long to remove the influence of short-term fluctuations but sufficiently short that it preserves the regular diurnal variations. The use of running statistical measures rather than cross-correlation of sites means that the method is robust against periods of missing data. Ideas are first developed using simulated data and then demonstrated using field data, at hourly and 1 min time-scales, from a real network of low-cost semiconductor-based sensors. Despite the almost naïve simplicity of the method, it was robust for both drift detection and calibration correction applications. We discuss the use of generally available geographic and environmental data as well as microscale land-use regression as means to enhance the proxy estimates and to generalize the ideas to other pollutants with high spatial variability, such as nitrogen dioxide and particulates. These improvements can also be used to minimize the required number of proxy sites.
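    A minimal pandas sketch of the matching idea described above, estimating a running gain and offset by equating the sensor's windowed mean and standard deviation to the proxy's, is given below; the window length, variable names, and simulated data are assumptions, not the authors' implementation.

    import numpy as np
    import pandas as pd

    def running_calibration(sensor, proxy, window="72H", min_periods=48):
        """Estimate slope/offset so that the sensor's running mean/std match the proxy's."""
        s_mean = sensor.rolling(window, min_periods=min_periods).mean()
        s_std = sensor.rolling(window, min_periods=min_periods).std()
        p_mean = proxy.rolling(window, min_periods=min_periods).mean()
        p_std = proxy.rolling(window, min_periods=min_periods).std()
        slope = p_std / s_std                     # running gain estimate
        offset = p_mean - slope * s_mean          # running offset estimate
        return slope * sensor + offset            # remotely "corrected" sensor signal

    # Illustrative hourly series: a drifting low-cost sensor and a stable proxy site.
    idx = pd.date_range("2018-01-01", periods=24 * 30, freq="H")
    truth = pd.Series(20 + 5 * np.sin(np.arange(idx.size) * 2 * np.pi / 24), index=idx)
    proxy = truth + np.random.default_rng(2).normal(0, 0.5, idx.size)
    sensor = 1.3 * truth - 4 + np.linspace(0, 3, idx.size)   # gain error, offset, and slow drift
    corrected = running_calibration(sensor, proxy)

    The 72-hour window here is only an example of a window long enough to average out short-term fluctuations while preserving the diurnal cycle, in the spirit of the averaging-time trade-off the abstract describes.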

  2. Statistical approach for selection of biologically informative genes.

    PubMed

    Das, Samarendra; Rai, Anil; Mishra, D C; Rai, Shesh N

    2018-05-20

    Selection of informative genes from high dimensional gene expression data has emerged as an important research area in genomics. Many gene selection techniques proposed so far are based on either a relevancy or a redundancy measure. Further, the performance of these techniques has been judged through post-selection classification accuracy computed through a classifier using the selected genes. This performance metric may be statistically sound but may not be biologically relevant. A statistical approach, i.e. Boot-MRMR, was proposed based on a composite measure of maximum relevance and minimum redundancy, which is both statistically sound and biologically relevant for informative gene selection. For comparative evaluation of the proposed approach, we developed two biologically sufficient criteria, i.e. Gene Set Enrichment with QTL (GSEQ) and a biological similarity score based on Gene Ontology (GO). Further, a systematic and rigorous evaluation of the proposed technique against 12 existing gene selection techniques was carried out using five gene expression datasets. This evaluation was based on a broad spectrum of statistically sound (e.g. subject classification) and biologically relevant (based on QTL and GO) criteria under a multiple criteria decision-making framework. The performance analysis showed that the proposed technique selects informative genes which are more biologically relevant. The proposed technique is also found to be quite competitive with the existing techniques with respect to subject classification and computational time. Our results also showed that under the multiple criteria decision-making setup, the proposed technique is best for informative gene selection over the available alternatives. Based on the proposed approach, an R package, i.e. BootMRMR, has been developed and is available at https://cran.r-project.org/web/packages/BootMRMR. This study will provide a practical guide for selecting statistical techniques for identifying informative genes from high dimensional expression data for breeding and system biology studies. Published by Elsevier B.V.
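    The BootMRMR package itself is implemented in R; purely as a sketch of the underlying maximum-relevance/minimum-redundancy idea (not the authors' Boot-MRMR algorithm), a greedy mRMR selection in Python might look like the following, with the expression matrix and class labels simulated.

    import numpy as np
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.metrics import mutual_info_score

    def greedy_mrmr(X, y, k=10, bins=10):
        """Greedily pick k features maximizing relevance to y minus mean redundancy to chosen ones."""
        n_features = X.shape[1]
        relevance = mutual_info_classif(X, y, random_state=0)
        # Discretize features so pairwise redundancy can be scored with mutual_info_score.
        Xd = np.stack([np.digitize(X[:, j], np.histogram_bin_edges(X[:, j], bins))
                       for j in range(n_features)], axis=1)
        selected = [int(np.argmax(relevance))]
        while len(selected) < k:
            scores = []
            for j in range(n_features):
                if j in selected:
                    scores.append(-np.inf)
                    continue
                redundancy = np.mean([mutual_info_score(Xd[:, j], Xd[:, s]) for s in selected])
                scores.append(relevance[j] - redundancy)
            selected.append(int(np.argmax(scores)))
        return selected

    # Illustrative expression matrix: 100 samples x 500 genes, binary class labels.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(100, 500))
    y = (X[:, :5].sum(axis=1) > 0).astype(int)    # only the first 5 genes are informative
    print(greedy_mrmr(X, y, k=5))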

  3. The bag-of-frames approach: A not so sufficient model for urban soundscapes.

    PubMed

    Lagrange, Mathieu; Lafay, Grégoire; Défréville, Boris; Aucouturier, Jean-Julien

    2015-11-01

    The "bag-of-frames" (BOF) approach, which encodes audio signals as the long-term statistical distribution of short-term spectral features, is commonly regarded as an effective and sufficient way to represent environmental sound recordings (soundscapes). The present paper describes a conceptual replication of a use of the BOF approach in a seminal article using several other soundscape datasets, with results strongly questioning the adequacy of the BOF approach for the task. As demonstrated in this paper, the good accuracy originally reported with BOF likely resulted from a particularly permissive dataset with low within-class variability. Soundscape modeling, therefore, may not be the closed case it was once thought to be.

  4. Energy-efficient lighting system for television

    DOEpatents

    Cawthorne, Duane C.

    1987-07-21

    A light control system for a television camera comprises an artificial light control system which is cooperative with an iris control system. This artificial light control system adjusts the power to lamps illuminating the camera viewing area to provide only the artificial illumination necessary to produce a sufficient video signal when the camera iris is substantially open.

  5. An Algebraic Implicitization and Specialization of Minimum KL-Divergence Models

    NASA Astrophysics Data System (ADS)

    Dukkipati, Ambedkar; Manathara, Joel George

    In this paper we study the representation of KL-divergence minimization, in the cases where integer sufficient statistics exist, using tools from polynomial algebra. We show that the estimation of parametric statistical models in this case can be transformed into solving a system of polynomial equations. In particular, we also study the case of the Kullback-Csiszár iteration scheme. We present implicit descriptions of these models and show that implicitization preserves specialization of the prior distribution. This result leads us to a Gröbner bases method to compute an implicit representation of minimum KL-divergence models.
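    In outline, and using generic notation rather than necessarily the paper's: minimizing D(p||q) subject to moment constraints on integer-valued sufficient statistics T_i yields an exponential-family solution, and the substitution z_i = e^{θ_i} turns the moment constraints into polynomial equations,

    p_\theta(x) \propto q(x)\exp\Big(\sum_i \theta_i T_i(x)\Big) = q(x)\prod_i z_i^{T_i(x)}, \qquad z_i := e^{\theta_i},

    \sum_x q(x)\Big(\prod_i z_i^{T_i(x)}\Big)\big(T_j(x) - t_j\big) = 0 \quad \text{for all } j,

    which are polynomial in the z_i whenever each T_i takes non-negative integer values.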

  6. Non-gaussian statistics of pencil beam surveys

    NASA Technical Reports Server (NTRS)

    Amendola, Luca

    1994-01-01

    We study the effect of the non-Gaussian clustering of galaxies on the statistics of pencil beam surveys. We derive the probability from the power spectrum peaks by means of an Edgeworth expansion and find that the higher order moments of the galaxy distribution play a dominant role. The probability of obtaining the 128 Mpc/h periodicity found in pencil beam surveys is raised by more than one order of magnitude, up to 1%. Further data are needed to decide if a non-Gaussian distribution alone is sufficient to explain the 128 Mpc/h periodicity, or if extra large-scale power is necessary.
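    For context, the standard Edgeworth correction to a Gaussian density, which is one way the higher order moments enter such peak statistics, can be written to second order (with φ the standard normal density, κ3 and κ4 the standardized skewness and excess kurtosis, and He_n the probabilists' Hermite polynomials) as

    f(x) \approx \phi(x)\left[1 + \frac{\kappa_3}{6}\,\mathrm{He}_3(x) + \frac{\kappa_4}{24}\,\mathrm{He}_4(x) + \frac{\kappa_3^2}{72}\,\mathrm{He}_6(x)\right].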

  7. Six Guidelines for Interesting Research.

    PubMed

    Gray, Kurt; Wegner, Daniel M

    2013-09-01

    There are many guides on proper psychology, but far fewer on interesting psychology. This article presents six guidelines for interesting research. The first three-Phenomena First, Be Surprising, and Grandmothers, Not Scientists-suggest how to choose your research question; the last three-Be The Participant, Simple Statistics, and Powerful Beginnings-suggest how to answer your research question and offer perspectives on experimental design, statistical analysis, and effective communication. These guidelines serve as reminders that replicability is necessary but not sufficient for compelling psychological science. Interesting research considers subjective experience; it listens to the music of the human condition. © The Author(s) 2013.

  8. Whose statistical reasoning is facilitated by a causal structure intervention?

    PubMed

    McNair, Simon; Feeney, Aidan

    2015-02-01

    People often struggle when making Bayesian probabilistic estimates on the basis of competing sources of statistical evidence. Recently, Krynski and Tenenbaum (Journal of Experimental Psychology: General, 136, 430-450, 2007) proposed that a causal Bayesian framework accounts for peoples' errors in Bayesian reasoning and showed that, by clarifying the causal relations among the pieces of evidence, judgments on a classic statistical reasoning problem could be significantly improved. We aimed to understand whose statistical reasoning is facilitated by the causal structure intervention. In Experiment 1, although we observed causal facilitation effects overall, the effect was confined to participants high in numeracy. We did not find an overall facilitation effect in Experiment 2 but did replicate the earlier interaction between numerical ability and the presence or absence of causal content. This effect held when we controlled for general cognitive ability and thinking disposition. Our results suggest that clarifying causal structure facilitates Bayesian judgments, but only for participants with sufficient understanding of basic concepts in probability and statistics.
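    The classic problem referred to is of the base-rate type; a worked instance of the required Bayesian computation, with illustrative numbers (1% prevalence, 80% hit rate, 9.6% false-positive rate), is

    P(H \mid +) = \frac{P(+ \mid H)\,P(H)}{P(+ \mid H)\,P(H) + P(+ \mid \neg H)\,P(\neg H)} = \frac{0.80 \times 0.01}{0.80 \times 0.01 + 0.096 \times 0.99} \approx 0.078,

    i.e. a positive result raises the probability of the hypothesis to only about 8%, well below the answer many people intuitively give.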

  9. Usefulness of Cochrane Skin Group reviews for clinical practice.

    PubMed

    Davila-Seijo, P; Batalla, A; Garcia-Doval, I

    2013-10-01

    Systematic reviews are one of the most important sources of information for evidence-based medicine. However, there is a general impression that these reviews rarely report results that provide sufficient evidence to change clinical practice. The aim of this study was to determine the percentage of Cochrane Skin Group reviews reporting results with the potential to guide clinical decision-making. We performed a bibliometric analysis of all the systematic reviews published by the Cochrane Skin Group up to 16 August, 2012. We retrieved 55 reviews, which were analyzed and graded independently by 2 investigators into 3 categories: 0 (insufficient evidence to support or reject the use of an intervention), 1 (insufficient evidence to support or reject the use of an intervention but sufficient evidence to support recommendations or suggestions), and 2 (sufficient evidence to support or reject the use of an intervention). Our analysis showed that 25.5% (14/55) of the studies did not provide sufficient evidence to support or reject the use of the interventions studied, 45.5% (25/55) provided sufficient but not strong evidence to support recommendations or suggestions, and 29.1% (16/55) provided strong evidence to support or reject the use of 1 or more of the interventions studied. Most of the systematic reviews published by the Cochrane Skin Group provide useful information to improve clinical practice. Clinicians should read these reviews and reconsider their current practice. Copyright © 2012 Elsevier España, S.L. and AEDV. All rights reserved.

  10. MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.

  11. Designing a Qualitative Data Collection Strategy (QDCS) for Africa - Phase 1: A Gap Analysis of Existing Models, Simulations, and Tools Relating to Africa

    DTIC Science & Technology

    2012-06-01

    generalized behavioral model characterized after the fictional Seldon equations (the one elaborated upon by Isaac Asimov in the 1951 novel, The...Foundation). Asimov described the Seldon equations as essentially statistical models with historical data of a sufficient size and variability that they

  12. A Statistical Portrait of Well-Being in Early Adulthood. CrossCurrents. Issue 2. Publication # 2004-18

    ERIC Educational Resources Information Center

    Brown, Brett V.; Moore, Kristin A.; Bzostek, Sharon

    2004-01-01

    In this data brief, key characteristics of young adults in the United States at or around age 25 are described. These characteristics include: (1) educational attainment and financial self-sufficiency; (2) health behaviors and family formation; and (3) civic involvement. In addition, separate descriptive portraits for the major racial groups and…

  13. STEM Attrition: College Students' Paths into and out of STEM Fields. Statistical Analysis Report. NCES 2014-001

    ERIC Educational Resources Information Center

    Chen, Xianglei

    2013-01-01

    Producing sufficient numbers of graduates who are prepared for science, technology, engineering, and mathematics (STEM) occupations has become a national priority in the United States. To attain this goal, some policymakers have targeted reducing STEM attrition in college, arguing that retaining more students in STEM fields in college is a…

  14. Phenotype profiling and multivariate statistical analysis of Spur-pruning type Grapevine in National Clonal Germplasm Repository (NCGR, Davis)

    USDA-ARS?s Scientific Manuscript database

    Most Korean vineyards employ a spur-pruning type modified-T trellis system. This production system is suitable for spur-pruning type cultivars, but most European table grapes are not adaptable to it because their fruitfulness is only sufficient under a cane-pruning type system. Total 20 of fruit ch...

  15. A Meta-Analysis of Suggestopedia, Suggestology, Suggestive-accelerative Learning and Teaching (SALT), and Super-learning.

    ERIC Educational Resources Information Center

    Moon, Charles E.; And Others

    Forty studies using one or more components of Lozanov's method of suggestive-accelerative learning and teaching were identified from a search of all issues of the "Journal of Suggestive-Accelerative Learning and Teaching." Fourteen studies contained sufficient statistics to compute effect sizes. The studies were coded according to substantive and…

  16. Speed-Accuracy Response Models: Scoring Rules Based on Response Time and Accuracy

    ERIC Educational Resources Information Center

    Maris, Gunter; van der Maas, Han

    2012-01-01

    Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…

  17. Market structure in U.S. southern pine roundwood

    Treesearch

    Matthew F. Bingham; Jeffrey P. Prestemon; Douglas J. MacNair; Robert C. Abt

    2003-01-01

    Time series of commodity prices from multiple locations can behave as if responding to forces of spatial arbitrage, even while such prices may instead be responding similarly to common factors aside from spatial arbitrage. Hence, while the Law of One Price may hold as a statistical concept, its acceptance is not sufficient to conclude market integration. We tested...

  18. Further Results on Sufficient LMI Conditions for H∞ Static Output Feedback Control of Discrete-Time Systems

    NASA Astrophysics Data System (ADS)

    Feng, Zhi-Yong; Xu, Li; Matsushita, Shin-Ya; Wu, Min

    Further results on sufficient LMI conditions for H∞ static output feedback (SOF) control of discrete-time systems are presented in this paper, which provide some new insights into this issue. First, by introducing a slack variable with block-triangular structure and choosing the coordinate transformation matrix properly, the conservativeness of one kind of existing sufficient LMI condition is further reduced. Then, by introducing a slack variable with a linear matrix equality constraint, another kind of sufficient LMI condition is proposed. Furthermore, the relation between these two kinds of LMI conditions is revealed for the first time through analyzing the effect of different choices of coordinate transformation matrices. Finally, a numerical example is provided to demonstrate the effectiveness and merits of the proposed methods.

  19. Quantifying Interannual Variability for Photovoltaic Systems in PVWatts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryberg, David Severin; Freeman, Janine; Blair, Nate

    2015-10-01

    The National Renewable Energy Laboratory's (NREL's) PVWatts is a relatively simple tool used by industry and individuals alike to easily estimate the amount of energy a photovoltaic (PV) system will produce throughout the course of a typical year. PVWatts Version 5 has previously been shown to reasonably represent an operating system's output when provided with concurrent weather data; however, this type of data is not available when estimating system output during future time frames. For this purpose PVWatts uses weather data from typical meteorological year (TMY) datasets, which are available on the NREL website. The TMY files represent a statistically 'typical' year which by definition excludes anomalous weather patterns and as a result may not provide sufficient quantification of project risk to the financial community. It was therefore desired to quantify the interannual variability associated with TMY files in order to improve the understanding of risk associated with these projects. To begin to understand the interannual variability of a PV project, we simulated two archetypal PV system designs, which are common in the PV industry, in PVWatts using the NSRDB's 1961-1990 historical dataset. This dataset contains measured hourly weather data spanning the thirty years from 1961 to 1990 for 239 locations in the United States. Notably, this historical dataset was used to compose the TMY2 dataset. Using the results of these simulations we computed several statistical metrics which may be of interest to the financial community and normalized the results with respect to the TMY energy prediction at each location, so that these results could be easily translated to similar systems. This report briefly describes the simulation process used and the statistical methodology employed for this project, but otherwise focuses mainly on a sample of our results. A short discussion of these results is also provided. It is our hope that this quantification of the interannual variability of PV systems will provide a starting point for variability considerations in future PV system designs and investigations.
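
    A minimal sketch of the normalization step described above, using hypothetical annual energy totals in place of real PVWatts output; the TMY value and the year-by-year numbers are placeholders:

    ```python
    import numpy as np

    # Hypothetical annual energy output (kWh) from a multi-year hindcast simulation
    annual_energy = np.array([1480, 1510, 1445, 1530, 1495, 1460, 1550, 1470,
                              1505, 1520, 1440, 1490, 1515, 1465, 1500])
    tmy_energy = 1500.0   # energy predicted from the TMY file (assumed)

    normalized = annual_energy / tmy_energy
    print("mean / TMY:", round(float(normalized.mean()), 3))
    print("coefficient of variation:",
          round(float(normalized.std(ddof=1) / normalized.mean()), 3))
    print("P90 (exceeded in 90% of years):",
          round(float(np.percentile(normalized, 10)), 3))
    ```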

  20. Evaluation of the Ticket to Work Program: Assessment of Post-Rollout Implementation and Early Impacts, Volume 1

    ERIC Educational Resources Information Center

    Thornton, Craig; Livermore, Gina; Fraker, Thomas; Stapleton, David; O'Day, Bonnie; Wittenburg, David; Weathers, Robert; Goodman, Nanette; Silva, Tim; Martin, Emily Sama; Gregory, Jesse; Wright, Debra; Mamun, Arif

    2007-01-01

    Ticket to Work and Self-Sufficiency program (TTW) was designed to enhance the market for services that help disability beneficiaries become economically self-sufficient by providing beneficiaries with a wide range of choices for obtaining services and to give employment-support service providers new financial incentives to serve beneficiaries…

  1. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of counted endothelial cells, called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. The endothelial sample size per examination needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
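
    A minimal sketch of the sample-size logic implied above, assuming a normal approximation for the mean of the measured cells; the coefficient of variation used here is hypothetical, not a value reported in the study:

    ```python
    import math
    from statistics import NormalDist

    def required_cells(cv, target_re=0.05, reliability=0.95):
        """Cells needed so the relative error of the mean stays below
        target_re at the given reliability degree (normal approximation)."""
        z = NormalDist().inv_cdf(0.5 + reliability / 2)   # two-sided z, ~1.96
        return math.ceil((z * cv / target_re) ** 2)

    # Hypothetical coefficient of variation of individual cell measurements
    print(required_cells(cv=0.30))   # -> 139 cells
    ```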

  2. Strain tolerant microfilamentary superconducting wire

    DOEpatents

    Finnemore, D.K.; Miller, T.A.; Ostenson, J.E.; Schwartzkopf, L.A.; Sanders, S.C.

    1993-02-23

    A strain tolerant microfilamentary wire capable of carrying superconducting currents is provided comprising a plurality of discontinuous filaments formed from a high temperature superconducting material. The discontinuous filaments have a length at least several orders of magnitude greater than the filament diameter and are sufficiently strong while in an amorphous state to withstand compaction. A normal metal is interposed between and binds the discontinuous filaments to form a normal metal matrix capable of withstanding heat treatment for converting the filaments to a superconducting state. The geometry of the filaments within the normal metal matrix provides substantial filament-to-filament overlap, and the normal metal is sufficiently thin to allow supercurrent transfer between the overlapped discontinuous filaments but is also sufficiently thick to provide strain relief to the filaments.

  3. The 90-day report for SL4 experiment S019: UV stellar astronomy

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The use of Experiment S019 to obtain moderate dispersion stellar spectra extending down to 1300 Å with sufficient spectral resolution to permit the study of ultraviolet (UV) line spectra and of spectral energy distributions of early-type stars is studied. Data obtained from this experiment should be of sufficient accuracy to permit detailed physical analysis of individual stars and nebulae, but an even more basic consideration is the expectation of obtaining spectra of a sufficient number of stars so that a statistically meaningful survey may be made of the UV spectra of a wide variety of star types. These should include all luminosity classes of spectral types O, B and A, as well as peculiar stars such as Wolf-Rayet stars and Ap or Am stars. An attempt was also made to obtain, in the no-prism mode, low dispersion UV spectra in a number of Milky Way star fields and in nearby galaxies.

  4. Football goal distributions and extremal statistics

    NASA Astrophysics Data System (ADS)

    Greenhough, J.; Birch, P. C.; Chapman, S. C.; Rowlands, G.

    2002-12-01

    We analyse the distributions of the number of goals scored by home teams, away teams, and the total scored in the match, in domestic football games from 169 countries between 1999 and 2001. The probability density functions (PDFs) of goals scored are too heavy-tailed to be fitted over their entire ranges by Poisson or negative binomial distributions, which would be expected for uncorrelated processes. Log-normal distributions cannot include zero scores, and here we find that the PDFs are consistent with those arising from extremal statistics. In addition, we show that it is sufficient to model English top division and FA Cup matches in the seasons of 1970/71-2000/01 with Poisson or negative binomial distributions, as reported in analyses of earlier seasons, and that these are not consistent with extremal statistics.
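
    A minimal sketch of the distribution comparison described above, fitting Poisson and negative binomial models to goal counts; the counts are simulated stand-ins, not the historical match data analysed in the paper:

    ```python
    import numpy as np
    from scipy import stats

    goals = np.random.poisson(1.4, size=5000)     # stand-in for goals per match

    lam = goals.mean()                            # Poisson MLE is the sample mean
    print("Poisson tail P(goals >= 7):", stats.poisson.sf(6, lam))

    mean, var = goals.mean(), goals.var(ddof=1)   # negative binomial by moments
    if var > mean:
        p = mean / var
        r = mean * p / (1 - p)
        print("NegBin tail P(goals >= 7):", stats.nbinom.sf(6, r, p))
    else:
        print("No overdispersion: the negative binomial collapses to Poisson.")
    ```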

  5. High Variability in Cellular Stoichiometry of Carbon, Nitrogen, and Phosphorus Within Classes of Marine Eukaryotic Phytoplankton Under Sufficient Nutrient Conditions.

    PubMed

    Garcia, Nathan S; Sexton, Julie; Riggins, Tracey; Brown, Jeff; Lomas, Michael W; Martiny, Adam C

    2018-01-01

    Current hypotheses suggest that cellular elemental stoichiometry of marine eukaryotic phytoplankton such as the ratios of cellular carbon:nitrogen:phosphorus (C:N:P) vary between phylogenetic groups. To investigate how phylogenetic structure, cell volume, growth rate, and temperature interact to affect the cellular elemental stoichiometry of marine eukaryotic phytoplankton, we examined the C:N:P composition in 30 isolates across 7 classes of marine phytoplankton that were grown with a sufficient supply of nutrients and nitrate as the nitrogen source. The isolates covered a wide range in cell volume (5 orders of magnitude), growth rate (<0.01-0.9 d⁻¹), and habitat temperature (2-24°C). Our analysis indicates that C:N:P is highly variable, with statistical model residuals accounting for over half of the total variance and no relationship between phylogeny and elemental stoichiometry. Furthermore, our data indicated that variability in C:P, N:P, and C:N within Bacillariophyceae (diatoms) was as high as that among all of the isolates that we examined. In addition, a linear statistical model identified a positive relationship between diatom cell volume and C:P and N:P. Among all of the isolates that we examined, the statistical model identified temperature as a significant factor, consistent with the temperature-dependent translation efficiency model, but temperature only explained 5% of the total statistical model variance. While some of our results support data from previous field studies, the high variability of elemental ratios within Bacillariophyceae contradicts previous work that suggests that this cosmopolitan group of microalgae has consistently low C:P and N:P ratios in comparison with other groups.

  6. Modeling Ka-band low elevation angle propagation statistics

    NASA Technical Reports Server (NTRS)

    Russell, Thomas A.; Weinfield, John; Pearson, Chris; Ippolito, Louis J.

    1995-01-01

    The statistical variability of the secondary atmospheric propagation effects on satellite communications cannot be ignored at frequencies of 20 GHz or higher, particularly if the propagation margin allocation is such that link availability falls below 99 percent. The secondary effects considered in this paper are gaseous absorption, cloud absorption, and tropospheric scintillation; rain attenuation is the primary effect. Techniques and example results are presented for estimation of the overall combined impact of the atmosphere on satellite communications reliability. Statistical methods are employed throughout and the most widely accepted models for the individual effects are used wherever possible. The degree of correlation between the effects is addressed and some bounds on the expected variability in the combined effects statistics are derived from the expected variability in correlation. Example estimates of combined effects statistics are presented for the Washington, D.C. area at 20 GHz and a 5-deg elevation angle. The statistics of water vapor are shown to be sufficient for estimation of the statistics of gaseous absorption at 20 GHz. A computer model based on monthly surface weather is described and tested. Significant improvement in prediction of absorption extremes is demonstrated with the use of path weather data instead of surface data.

  7. Regional downscaling of temporal resolution in near-surface wind from statistically downscaled Global Climate Models (GCMs) for use in San Francisco Bay coastal flood modeling

    NASA Astrophysics Data System (ADS)

    O'Neill, A.; Erikson, L. H.; Barnard, P.

    2013-12-01

    While Global Climate Models (GCMs) provide useful projections of near-surface wind vectors into the 21st century, their resolution is not sufficient for use in regional wave modeling. Statistically downscaled GCM projections from Multivariate Adaptive Constructed Analogues (MACA) provide daily near-surface winds at an appropriate spatial resolution for wave modeling within San Francisco Bay. Using 30 years (1975-2004) of climatological data from four representative stations around San Francisco Bay, a library of example daily wind conditions for four corresponding over-water sub-regions is constructed. Empirical cumulative distribution functions (ECDFs) of station conditions are compared to MACA GFDL hindcasts to create correction factors, which are then applied to 21st century MACA wind projections. For each projection day, a best-match example is identified via least squares error among all stations from the library. The best match's daily variation in velocity components (u/v) is used as an analogue of representative wind variation and is applied at 3-hour increments about the corresponding sub-region's projected u/v values. High temporal resolution reconstructions using this methodology on hindcast MACA fields from 1975-2004 accurately recreate extreme wind values within San Francisco Bay, and because these extremes in wind forcing are of key importance in wave and subsequent coastal flood modeling, this represents a valuable method of generating near-surface wind vectors for use in coastal flood modeling.
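
    A minimal sketch of an ECDF-based correction of the kind described above (generic quantile mapping, not the exact MACA-based procedure); all wind values are simulated placeholders:

    ```python
    import numpy as np

    def quantile_map(model_vals, obs_vals, projected):
        """Map projected model values onto the observed distribution by
        matching empirical quantiles (a simple ECDF-based bias correction)."""
        model_sorted = np.sort(model_vals)
        obs_sorted = np.sort(obs_vals)
        q = np.searchsorted(model_sorted, projected) / len(model_sorted)
        q = np.clip(q, 0.0, 1.0 - 1e-9)
        return obs_sorted[(q * len(obs_sorted)).astype(int)]

    obs = np.random.weibull(2.0, 10000) * 6.0        # station winds (m/s)
    hindcast = np.random.weibull(2.0, 10000) * 5.0   # biased model hindcast
    future = np.random.weibull(2.0, 365) * 5.5       # projected daily winds
    corrected = quantile_map(hindcast, obs, future)
    print(round(float(corrected.mean()), 2), round(float(future.mean()), 2))
    ```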

  8. Organism-level models: When mechanisms and statistics fail us

    NASA Astrophysics Data System (ADS)

    Phillips, M. H.; Meyer, J.; Smith, W. P.; Rockhill, J. K.

    2014-03-01

    Purpose: To describe the unique characteristics of models that represent the entire course of radiation therapy at the organism level and to highlight the uses to which such models can be put. Methods: At the level of an organism, traditional model-building runs into severe difficulties. We do not have sufficient knowledge to devise a complete biochemistry-based model. Statistical model-building fails due to the vast number of variables and the inability to control many of them in any meaningful way. Finally, building surrogate models, such as animal-based models, can result in excluding some of the most critical variables. Bayesian probabilistic models (Bayesian networks) provide a useful alternative that has the advantages of being mathematically rigorous, incorporating the knowledge that we do have, and being practical. Results: Bayesian networks representing radiation therapy pathways for prostate cancer and head & neck cancer were used to highlight the important aspects of such models and some techniques of model-building. A more specific model representing the treatment of occult lymph nodes in head & neck cancer was provided as an example of how such a model can inform clinical decisions. A model of the possible role of PET imaging in brain cancer was used to illustrate the means by which clinical trials can be modelled in order to come up with a trial design that will have meaningful outcomes. Conclusions: Probabilistic models are currently the most useful approach to representing the entire therapy outcome process.
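
    A minimal sketch of the kind of probabilistic reasoning such a network performs, reduced to a single parent-child pair with made-up probabilities (the clinical models described above contain many more nodes and states):

    ```python
    # Occult nodal disease N -> imaging finding F; posterior P(N | F positive)
    # by direct enumeration, the same inference a larger Bayesian network uses.
    p_n = 0.25                      # prior probability of occult nodes (assumed)
    p_f_given_n = {True: 0.80,      # imaging sensitivity (assumed)
                   False: 0.15}     # false-positive rate (assumed)

    def posterior_n_given_f(f_positive=True):
        num = p_n * (p_f_given_n[True] if f_positive else 1 - p_f_given_n[True])
        den = num + (1 - p_n) * (p_f_given_n[False] if f_positive
                                 else 1 - p_f_given_n[False])
        return num / den

    print(round(posterior_n_given_f(), 2))    # -> 0.64
    ```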

  9. Post-Disaster Food and Nutrition from Urban Agriculture: A Self-Sufficiency Analysis of Nerima Ward, Tokyo.

    PubMed

    Sioen, Giles Bruno; Sekiyama, Makiko; Terada, Toru; Yokohari, Makoto

    2017-07-10

    Background: Post-earthquake studies from around the world have reported that survivors relying on emergency food for prolonged periods of time experienced several dietary related health problems. The present study aimed to quantify the potential nutrient production of urban agricultural vegetables and the resulting nutritional self-sufficiency throughout the year for mitigating post-disaster situations. Methods: We estimated the vegetable production of urban agriculture throughout the year. Two methods were developed to capture the production from professional and hobby farms: Method I utilized secondary governmental data on agricultural production from professional farms, and Method II was based on a supplementary spatial analysis to estimate the production from hobby farms. Next, the weight of produced vegetables [t] was converted into nutrients [kg]. Furthermore, the self-sufficiency by nutrient and time of year was estimated by incorporating the reference consumption of vegetables [kg], recommended dietary allowance of nutrients per capita [mg], and population statistics. The research was conducted in Nerima, the second most populous ward of Tokyo's 23 special wards. Self-sufficiency rates were calculated with the registered residents. Results: The estimated total vegetable production of 5660 tons was equivalent to a weight-based self-sufficiency rate of 6.18%. The average nutritional self-sufficiencies of Methods I and II were 2.48% and 0.38%, respectively, resulting in an aggregated average of 2.86%. Fluctuations throughout the year were observed according to the harvest seasons of the available crops. Vitamin K (6.15%) had the highest self-sufficiency of selected nutrients, while calcium had the lowest (0.96%). Conclusions: This study suggests that depending on the time of year, urban agriculture has the potential to contribute nutrients to diets during post-disaster situations as disaster preparedness food. Emergency responses should be targeted according to the time of year the disaster takes place to meet nutrient requirements in periods of low self-sufficiency and prevent gastrointestinal symptoms and cardiovascular diseases among survivors.
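
    A minimal sketch of the self-sufficiency arithmetic described above; the population, recommended intake, and production figures below are placeholders rather than the Nerima values:

    ```python
    # Annual nutrient production from local crops vs. the population's
    # annual recommended intake of that nutrient.
    population = 730_000              # registered residents (assumed)
    rda_mg_per_day = 650              # per-capita RDA, e.g. calcium (assumed)
    produced_kg_per_year = 5_000      # nutrient yield from local vegetables (assumed)

    required_kg_per_year = population * rda_mg_per_day * 365 / 1e6   # mg -> kg
    self_sufficiency = 100 * produced_kg_per_year / required_kg_per_year
    print(f"{self_sufficiency:.2f}%")    # -> about 2.89%
    ```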

  10. Post-Disaster Food and Nutrition from Urban Agriculture: A Self-Sufficiency Analysis of Nerima Ward, Tokyo

    PubMed Central

    Sekiyama, Makiko; Terada, Toru; Yokohari, Makoto

    2017-01-01

    Background: Post-earthquake studies from around the world have reported that survivors relying on emergency food for prolonged periods of time experienced several dietary related health problems. The present study aimed to quantify the potential nutrient production of urban agricultural vegetables and the resulting nutritional self-sufficiency throughout the year for mitigating post-disaster situations. Methods: We estimated the vegetable production of urban agriculture throughout the year. Two methods were developed to capture the production from professional and hobby farms: Method I utilized secondary governmental data on agricultural production from professional farms, and Method II was based on a supplementary spatial analysis to estimate the production from hobby farms. Next, the weight of produced vegetables [t] was converted into nutrients [kg]. Furthermore, the self-sufficiency by nutrient and time of year was estimated by incorporating the reference consumption of vegetables [kg], recommended dietary allowance of nutrients per capita [mg], and population statistics. The research was conducted in Nerima, the second most populous ward of Tokyo’s 23 special wards. Self-sufficiency rates were calculated with the registered residents. Results: The estimated total vegetable production of 5660 tons was equivalent to a weight-based self-sufficiency rate of 6.18%. The average nutritional self-sufficiencies of Methods I and II were 2.48% and 0.38%, respectively, resulting in an aggregated average of 2.86%. Fluctuations throughout the year were observed according to the harvest seasons of the available crops. Vitamin K (6.15%) had the highest self-sufficiency of selected nutrients, while calcium had the lowest (0.96%). Conclusions: This study suggests that depending on the time of year, urban agriculture has the potential to contribute nutrients to diets during post-disaster situations as disaster preparedness food. Emergency responses should be targeted according to the time of year the disaster takes place to meet nutrient requirements in periods of low self-sufficiency and prevent gastrointestinal symptoms and cardiovascular diseases among survivors. PMID:28698515

  11. A novel multivariate approach using science-based calibration for direct coating thickness determination in real-time NIR process monitoring.

    PubMed

    Möltgen, C-V; Herdling, T; Reich, G

    2013-11-01

    This study demonstrates an approach, using science-based calibration (SBC), for direct coating thickness determination on heart-shaped tablets in real time. Near-infrared (NIR) spectra were collected during four full industrial pan coating operations. The tablets were coated with a thin hydroxypropyl methylcellulose (HPMC) film up to a film thickness of 28 μm. The application of SBC permits the calibration of the NIR spectral data without using costly determined reference values, because SBC combines classical methods to estimate the coating signal with statistical methods for the noise estimation. The approach enabled the use of NIR for the measurement of the film thickness increase from around 8 to 28 μm across four independent batches in real time. The developed model provided a spectroscopic limit of detection for the coating thickness of 0.64 ± 0.03 μm root-mean-square (RMS). In commonly used statistical calibration methods, such as Partial Least Squares (PLS), sufficiently varying reference values are needed for calibration. For thin non-functional coatings this is a challenge because the quality of the model depends on the accuracy of the selected calibration standards. The straightforward SBC approach eliminates many of the problems associated with conventional statistical methods and offers an alternative for multivariate calibration. Copyright © 2013 Elsevier B.V. All rights reserved.
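
    A minimal sketch of a science-based-calibration-style estimator, assuming a known (modeled) coating response spectrum g and a noise covariance estimated from spectra containing no coating signal; the spectra below are synthetic and the formula is a generic net-analyte-signal construction, not necessarily the authors' exact implementation:

    ```python
    import numpy as np

    def sbc_regression_vector(g, sigma_noise):
        """Regression vector b with unit response to the signal spectrum g
        and minimum response to the estimated noise covariance."""
        sigma_inv_g = np.linalg.solve(sigma_noise, g)
        return sigma_inv_g / (g @ sigma_inv_g)

    rng = np.random.default_rng(0)
    g = np.exp(-0.5 * ((np.arange(50) - 25) / 5.0) ** 2)    # modeled coating signal
    noise = rng.normal(scale=0.01, size=(200, 50))          # signal-free spectra
    sigma = np.cov(noise, rowvar=False) + 1e-8 * np.eye(50)
    b = sbc_regression_vector(g, sigma)
    print(round(float(b @ g), 6))   # -> 1.0, unit response to the coating signal
    ```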

  12. Statistical Analysis of Hubble /WFC3 Transit Spectroscopy of Extrasolar Planets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Guangwei; Deming, Drake; Knutson, Heather

    2017-10-01

    Transmission spectroscopy provides a window to study exoplanetary atmospheres, but that window is fogged by clouds and hazes. Clouds and haze introduce a degeneracy between the strength of gaseous absorption features and planetary physical parameters such as abundances. One way to break that degeneracy is via statistical studies. We collect all published HST/WFC3 transit spectra for 1.1–1.65 μm water vapor absorption and perform a statistical study on potential correlations between the water absorption feature and planetary parameters. We fit the observed spectra with a template calculated for each planet using the Exo-transmit code. We express the magnitude of the water absorption in scale heights, thereby removing the known dependence on temperature, surface gravity, and mean molecular weight. We find that the absorption in scale heights has a positive baseline correlation with planetary equilibrium temperature; our hypothesis is that decreasing cloud condensation with increasing temperature is responsible for this baseline slope. However, the observed sample is also intrinsically degenerate in the sense that equilibrium temperature correlates with planetary mass. We compile the distribution of absorption in scale heights, and we find that this distribution is closer to log-normal than Gaussian. However, we also find that the distribution of equilibrium temperatures for the observed planets is similarly log-normal. This indicates that the absorption values are affected by observational bias, whereby observers have not yet targeted a sufficient sample of the hottest planets.

  13. Statistical Analysis of Hubble/WFC3 Transit Spectroscopy of Extrasolar Planets

    NASA Astrophysics Data System (ADS)

    Fu, Guangwei; Deming, Drake; Knutson, Heather; Madhusudhan, Nikku; Mandell, Avi; Fraine, Jonathan

    2018-01-01

    Transmission spectroscopy provides a window to study exoplanetary atmospheres, but that window is fogged by clouds and hazes. Clouds and haze introduce a degeneracy between the strength of gaseous absorption features and planetary physical parameters such as abundances. One way to break that degeneracy is via statistical studies. We collect all published HST/WFC3 transit spectra for 1.1-1.65 micron water vapor absorption, and perform a statistical study on potential correlations between the water absorption feature and planetary parameters. We fit the observed spectra with a template calculated for each planet using the Exo-Transmit code. We express the magnitude of the water absorption in scale heights, thereby removing the known dependence on temperature, surface gravity, and mean molecular weight. We find that the absorption in scale heights has a positive baseline correlation with planetary equilibrium temperature; our hypothesis is that decreasing cloud condensation with increasing temperature is responsible for this baseline slope. However, the observed sample is also intrinsically degenerate in the sense that equilibrium temperature correlates with planetary mass. We compile the distribution of absorption in scale heights, and we find that this distribution is closer to log-normal than Gaussian. However, we also find that the distribution of equilibrium temperatures for the observed planets is similarly log-normal. This indicates that the absorption values are affected by observational bias, whereby observers have not yet targeted a sufficient sample of the hottest planets.

  14. Statistical Analysis of Hubble/WFC3 Transit Spectroscopy of Extrasolar Planets

    NASA Astrophysics Data System (ADS)

    Fu, Guangwei; Deming, Drake; Knutson, Heather; Madhusudhan, Nikku; Mandell, Avi; Fraine, Jonathan

    2017-10-01

    Transmission spectroscopy provides a window to study exoplanetary atmospheres, but that window is fogged by clouds and hazes. Clouds and haze introduce a degeneracy between the strength of gaseous absorption features and planetary physical parameters such as abundances. One way to break that degeneracy is via statistical studies. We collect all published HST/WFC3 transit spectra for 1.1-1.65 μm water vapor absorption and perform a statistical study on potential correlations between the water absorption feature and planetary parameters. We fit the observed spectra with a template calculated for each planet using the Exo-transmit code. We express the magnitude of the water absorption in scale heights, thereby removing the known dependence on temperature, surface gravity, and mean molecular weight. We find that the absorption in scale heights has a positive baseline correlation with planetary equilibrium temperature; our hypothesis is that decreasing cloud condensation with increasing temperature is responsible for this baseline slope. However, the observed sample is also intrinsically degenerate in the sense that equilibrium temperature correlates with planetary mass. We compile the distribution of absorption in scale heights, and we find that this distribution is closer to log-normal than Gaussian. However, we also find that the distribution of equilibrium temperatures for the observed planets is similarly log-normal. This indicates that the absorption values are affected by observational bias, whereby observers have not yet targeted a sufficient sample of the hottest planets.
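
    A minimal sketch of the scale-height normalization described in these records; the planet parameters are hypothetical, and the conversion from transit depth to apparent radius assumes a small perturbation:

    ```python
    k_B = 1.380649e-23        # J/K
    m_H = 1.6735575e-27       # kg (atomic mass unit ~ hydrogen mass)

    def scale_height(T_eq, g, mu=2.3):
        """Scale height [m] for equilibrium temperature T_eq [K], gravity
        g [m/s^2], and mean molecular weight mu [amu]."""
        return k_B * T_eq / (mu * m_H * g)

    def absorption_in_scale_heights(delta_depth, depth, R_p, T_eq, g, mu=2.3):
        """Convert a change in transit depth into the equivalent number of
        scale heights of extra apparent planetary radius."""
        delta_R = R_p * delta_depth / (2 * depth)   # d(depth)/depth = 2 dR/R
        return delta_R / scale_height(T_eq, g, mu)

    # Hypothetical hot Jupiter: 200 ppm water feature on a 1.5% transit depth
    print(absorption_in_scale_heights(delta_depth=2e-4, depth=1.5e-2,
                                      R_p=1.4 * 6.9911e7, T_eq=1400, g=10.0))
    ```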

  15. Discrete Element Modeling of the Mobilization of Coarse Gravel Beds by Finer Gravel Particles

    NASA Astrophysics Data System (ADS)

    Hill, K. M.; Tan, D.

    2012-12-01

    Recent research has shown that the addition of fine gravel particles to a coarse bed will mobilize the coarser bed, and that the effect is sufficiently strong that a pulse of fine gravel particles can mobilize an impacted coarser bed. Recent flume experiments have demonstrated that the degree of bed mobilization by finer particles is primarily dependent on the particle size ratio of the coarse and fine particles, rather than the absolute size of either particle, provided both particles are sufficiently large. However, the mechanism behind the mobilization is not understood. It has previously been proposed that the mechanism is driven by a combination of geometric effects and hydraulic effects. For example, it has been argued that smaller particles fill in gaps along the bed, resulting in a smoother bed over which the larger particles are less likely to be disentrained, as well as a reduced near-bed flow velocity and subsequently increased drag on protruding particles. Altered near-bed turbulence has also been cited as playing an important role. We use the discrete element method with one-way fluid-solid coupling to conduct simulations of the mobilization of a gravel bed by fine gravel particles. By independently and artificially controlling average and fluctuating velocity profiles, we systematically investigate the relative roles played by particle-particle interactions, average near-bed velocity profiles, and near-bed turbulence statistics. The simulations indicate that the relative importance of these mechanisms changes with the degree of mobilization of the bed. For higher bed mobility similar to bed sheets, particle-particle interactions play a significant role in an apparent rheology in the bed sheets, not unlike that observed in a dense granular flow of particles of different sizes. For conditions closer to a critical shear stress for bedload transport, the near-bed velocity profiles and turbulence statistics become increasingly important.

  16. Monitoring Single-Molecule Protein Dynamics with a Carbon Nanotube Transistor

    NASA Astrophysics Data System (ADS)

    Collins, Philip G.

    2014-03-01

    Nanoscale electronic devices like field-effect transistors have long promised to provide sensitive, label-free detection of biomolecules. Single-walled carbon nanotubes press this concept further by not just detecting molecules but also monitoring their dynamics in real time. Recent measurements have demonstrated this premise by monitoring the single-molecule processivity of three different enzymes: lysozyme, protein kinase A, and the Klenow fragment of DNA polymerase I. With all three enzymes, single molecules tethered to nanotube transistors were electronically monitored for 10 or more minutes, allowing us to directly observe a range of activity including rare transitions to chemically inactive and hyperactive conformations. The high bandwidth of the nanotube transistors further allows every individual chemical event to be clearly resolved, providing excellent statistics from tens of thousands of turnovers by a single enzyme. Initial success with three different enzymes indicates the generality and attractiveness of the nanotube devices as a new tool to complement other single-molecule techniques. Research on transduction mechanisms provides the design rules necessary to further generalize this architecture and apply it to other proteins. The purposeful incorporation of just one amino acid is sufficient to fabricate effective, single-molecule sensors from a wide range of enzymes or proteins.

  17. Effectiveness of cognitive rehabilitation following acquired brain injury: a meta-analytic re-examination of Cicerone et al.'s (2000, 2005) systematic reviews.

    PubMed

    Rohling, Martin L; Faust, Mark E; Beverly, Brenda; Demakis, George

    2009-01-01

    The present study provides a meta-analysis of cognitive rehabilitation literature (K = 115, N = 2,014) that was originally reviewed by K. D. Cicerone et al. (2000, 2005) for the purpose of providing evidence-based practice guidelines for persons with acquired brain injury. The analysis yielded a small treatment effect size (ES = .30, d(+) statistic) directly attributable to cognitive rehabilitation. A larger treatment effect (ES = .71) was found for single-group pretest to posttest outcomes; however, modest improvement was observed for nontreatment control groups as well (ES = .41). Correction for this effect, which was not attributable to cognitive treatments, resulted in the small, but significant, overall estimate. Treatment effects were moderated by cognitive domain treated, time postinjury, type of brain injury, and age. The meta-analysis revealed sufficient evidence for the effectiveness of attention training after traumatic brain injury and of language and visuospatial training for aphasia and neglect syndromes after stroke. Results provide important quantitative documentation of effective treatments, complementing recent systematic reviews. Findings also highlight gaps in the scientific evidence supporting cognitive rehabilitation, thereby indicating future research directions. (c) 2009 APA, all rights reserved.
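
    A minimal sketch of the control-group correction described above, using made-up summary statistics chosen only to mirror the reported effect sizes:

    ```python
    # Subtract the standardized pre-post change seen in untreated controls
    # from the standardized pre-post change seen in treated groups.
    def standardized_gain(mean_post, mean_pre, sd_pre):
        return (mean_post - mean_pre) / sd_pre

    es_treated = standardized_gain(mean_post=105.0, mean_pre=98.0, sd_pre=10.0)  # 0.70
    es_control = standardized_gain(mean_post=102.0, mean_pre=98.0, sd_pre=10.0)  # 0.40
    es_attributable = es_treated - es_control                                    # 0.30
    print(round(es_attributable, 2))
    ```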

  18. Classicality condition on a system observable in a quantum measurement and a relative-entropy conservation law

    NASA Astrophysics Data System (ADS)

    Kuramochi, Yui; Ueda, Masahito

    2015-03-01

    We consider the information flow on a system observable X corresponding to a positive-operator-valued measure under a quantum measurement process Y described by a completely positive instrument from the viewpoint of the relative entropy. We establish a sufficient condition for the relative-entropy conservation law which states that the average decrease in the relative entropy of the system observable X equals the relative entropy of the measurement outcome of Y, i.e., the information gain due to measurement. This sufficient condition is interpreted as an assumption of classicality in the sense that there exists a sufficient statistic in a joint successive measurement of Y followed by X such that the probability distribution of the statistic coincides with that of a single measurement of X for the premeasurement state. We show that in the case when X is a discrete projection-valued measure and Y is discrete, the classicality condition is equivalent to the relative-entropy conservation for arbitrary states. The general theory on the relative-entropy conservation is applied to typical quantum measurement models, namely, quantum nondemolition measurement, destructive sharp measurements on two-level systems, photon counting, quantum counting, and homodyne and heterodyne measurements. These examples, except for the nondemolition and photon-counting measurements, do not satisfy the known Shannon-entropy conservation law proposed by Ban [M. Ban, J. Phys. A: Math. Gen. 32, 1643 (1999), 10.1088/0305-4470/32/9/012], implying that our approach based on the relative entropy is applicable to a wider class of quantum measurements.

  19. ZERODUR: deterministic approach for strength design

    NASA Astrophysics Data System (ADS)

    Hartmann, Peter

    2012-12-01

    There is an increasing demand for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems: the data sets were too small to obtain distribution parameters with sufficient accuracy, and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher load applications seemed not to be feasible. New data have been collected with numbers per set large enough to enable tests on the applicability of the three-parameter Weibull distribution. This distribution proved to provide a much better fit to the data. Moreover, it delivers a lower threshold value, i.e., a minimum value for breakage stress, which allows statistical uncertainty to be removed by introducing a deterministic method to calculate design strength. Considerations from the theory of fracture mechanics, which have proven reliable in proof-test qualifications of delicate structures made from brittle materials, enable fatigue due to stress corrosion to be included in a straightforward way. With the formulae derived, either lifetime can be calculated from given stress or allowable stress from minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull distribution approach and no longer subject to statistical uncertainty.
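
    A minimal sketch of the three-parameter Weibull failure probability that underlies the deterministic design argument; the threshold, scale, and modulus values are illustrative and are not ZERODUR data:

    ```python
    import math

    def weibull3_failure_probability(stress, threshold, scale, modulus):
        """Three-parameter Weibull: below the threshold stress the failure
        probability is exactly zero, which is what permits a deterministic
        minimum design strength."""
        if stress <= threshold:
            return 0.0
        return 1.0 - math.exp(-(((stress - threshold) / scale) ** modulus))

    # Hypothetical parameters for one surface condition (illustrative only)
    for s in (40, 55, 80, 120):   # MPa
        print(s, weibull3_failure_probability(s, threshold=50, scale=60, modulus=5))
    ```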

  20. Benchmarking the mesoscale variability in global ocean eddy-permitting numerical systems

    NASA Astrophysics Data System (ADS)

    Cipollone, Andrea; Masina, Simona; Storto, Andrea; Iovino, Doroteaciro

    2017-10-01

    The role of data assimilation procedures in representing ocean mesoscale variability is assessed by applying eddy statistics to a state-of-the-art global ocean reanalysis (C-GLORS), a free global ocean simulation (performed with the NEMO system), and an observation-based dataset (ARMOR3D) used as an independent benchmark. Numerical results are computed on a 1/4° horizontal grid (ORCA025) and share the same resolution with the ARMOR3D dataset. This "eddy-permitting" resolution is sufficient to allow ocean eddies to form. Further to assessing the eddy statistics from three different datasets, a global three-dimensional eddy detection system is implemented in order to bypass the need for region-dependent threshold definitions, typical of commonly adopted eddy detection algorithms. It thus provides full three-dimensional eddy statistics by segmenting vertical profiles based on local rotational velocities. This criterion is crucial for discerning real eddies from transient surface noise that inevitably affects any two-dimensional algorithm. Data assimilation enhances and corrects mesoscale variability on a wide range of features that cannot be well reproduced otherwise. The free simulation fairly reproduces eddies emerging from western boundary currents and deep baroclinic instabilities, while underestimating shallower vortices that populate the full basin. The ocean reanalysis recovers most of the missing turbulence shown by satellite products that is not generated by the model itself, and consistently projects surface variability deep into the water column. The comparison with the statistically reconstructed vertical profiles from ARMOR3D shows that ocean data assimilation is able to embed variability into the model dynamics, constraining eddies with in situ and altimetry observations and generating them consistently with the local environment.

  1. DOA-informed source extraction in the presence of competing talkers and background noise

    NASA Astrophysics Data System (ADS)

    Taseska, Maja; Habets, Emanuël A. P.

    2017-12-01

    A desired speech signal in hands-free communication systems is often degraded by noise and interfering speech. Even though the number and locations of the interferers are often unknown in practice, it is justified to assume in certain applications that the direction-of-arrival (DOA) of the desired source is approximately known. Using the known DOA, fixed spatial filters such as the delay-and-sum beamformer can be steered to extract the desired source. However, it is well-known that fixed data-independent spatial filters do not provide sufficient reduction of directional interferers. Instead, the DOA information can be used to estimate the statistics of the desired and the undesired signals and to compute optimal data-dependent spatial filters. One way the DOA is exploited for optimal spatial filtering in the literature is by designing DOA-based narrowband detectors to determine whether a desired or an undesired signal is dominant at each time-frequency (TF) bin. Subsequently, the statistics of the desired and the undesired signals can be estimated during the TF bins where the respective signal is dominant. In a similar manner, a Gaussian signal model-based detector which does not incorporate DOA information has been used in scenarios where the undesired signal consists of stationary background noise. However, when the undesired signal is non-stationary, resulting for example from interfering speakers, such a Gaussian signal model-based detector is unable to robustly distinguish desired from undesired speech. To this end, we propose a DOA model-based detector to determine the dominant source at each TF bin and estimate the desired and undesired signal statistics. We demonstrate that data-dependent spatial filters that use the statistics estimated by the proposed framework achieve very good undesired signal reduction, even when using only three microphones.
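
    A minimal sketch of the general scheme described above: a DOA-steered, data-dependent filter whose undesired-signal statistics are updated only in time-frequency bins flagged by a detector. This is generic MVDR-style processing with placeholder data, not the authors' exact estimator:

    ```python
    import numpy as np

    def mvdr_weights(R_u, d):
        """MVDR spatial filter from an undesired-signal covariance R_u and a
        steering vector d for the known desired-source DOA."""
        Ri_d = np.linalg.solve(R_u, d)
        return Ri_d / (d.conj() @ Ri_d)

    def update_covariance(R, x, dominant, alpha=0.95):
        """Recursively update a covariance estimate only in TF bins where the
        corresponding signal was detected as dominant."""
        return alpha * R + (1 - alpha) * np.outer(x, x.conj()) if dominant else R

    M = 3                                           # microphones (hypothetical)
    d = np.exp(-1j * np.pi * np.arange(M) * np.sin(np.deg2rad(20)))  # DOA = 20 deg
    R_u = np.eye(M, dtype=complex)                  # running undesired covariance
    x = (np.random.randn(M) + 1j * np.random.randn(M)) / np.sqrt(2)  # one TF bin
    R_u = update_covariance(R_u, x, dominant=True)  # detector says "undesired"
    y = mvdr_weights(R_u, d).conj() @ x             # filter output for this bin
    ```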

  2. Does Reviewing Lead to Better Learning and Decision Making? Answers from a Randomized Stock Market Experiment

    PubMed Central

    Wessa, Patrick; Holliday, Ian E.

    2012-01-01

    Background The literature is not univocal about the effects of Peer Review (PR) within the context of constructivist learning. Due to the predominant focus on using PR as an assessment tool, rather than a constructivist learning activity, and because most studies implicitly assume that the benefits of PR are limited to the reviewee, little is known about the effects upon students who are required to review their peers. Much of the theoretical debate in the literature is focused on explaining how and why constructivist learning is beneficial. At the same time these discussions are marked by an underlying presupposition of a causal relationship between reviewing and deep learning. Objectives The purpose of the study is to investigate whether the writing of PR feedback causes students to benefit in terms of: perceived utility about statistics, actual use of statistics, better understanding of statistical concepts and associated methods, changed attitudes towards market risks, and outcomes of decisions that were made. Methods We conducted a randomized experiment, assigning students randomly to receive PR or non–PR treatments, and used two cohorts with a different time span. The paper discusses the experimental design and all the software components that we used to support the learning process: Reproducible Computing technology which allows students to reproduce or re–use statistical results from peers, Collaborative PR, and an AI–enhanced Stock Market Engine. Results The results establish that the writing of PR feedback messages causes students to experience benefits in terms of Behavior, Non–Rote Learning, and Attitudes, provided the sequence of PR activities is maintained for a period that is sufficiently long. PMID:22666385

  3. Methods for treating a metathesis feedstock with metal alkoxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, Steven A.; Anderson, Donde R.; Wang, Zhe

    Various methods are provided for treating and reacting a metathesis feedstock. In one embodiment, the method includes providing a feedstock comprising a natural oil, chemically treating the feedstock with a metal alkoxide under conditions sufficient to diminish catalyst poisons in the feedstock, and, following the treating, combining a metathesis catalyst with the feedstock under conditions sufficient to metathesize the feedstock.

  4. Annual modulation of seismicity along the San Andreas Fault near Parkfield, CA

    USGS Publications Warehouse

    Christiansen, L.B.; Hurwitz, S.; Ingebritsen, S.E.

    2007-01-01

    We analyze seismic data from the San Andreas Fault (SAF) near Parkfield, California, to test for annual modulation in seismicity rates. We use statistical analyses to show that seismicity is modulated with an annual period in the creeping section of the fault and a semiannual period in the locked section of the fault. Although the exact mechanism for seasonal triggering is undetermined, it appears that stresses associated with the hydrologic cycle are sufficient to fracture critically stressed rocks, either through pore-pressure diffusion or crustal loading/unloading. These results shed additional light on the state of stress along the SAF, indicating that hydrologically induced stress perturbations of ~2 kPa may be sufficient to trigger earthquakes.
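
    A minimal sketch of one standard test for annual periodicity in an event catalog (the Schuster test), applied to a synthetic catalog; this is not necessarily the statistical analysis used by the authors:

    ```python
    import numpy as np

    def schuster_p_value(event_times_days, period_days=365.25):
        """Probability that the observed phase clustering of event times at
        the given period would arise by chance under a uniform rate."""
        phases = 2 * np.pi * (np.asarray(event_times_days) % period_days) / period_days
        R = np.abs(np.sum(np.exp(1j * phases)))
        return np.exp(-R**2 / len(phases))

    # Hypothetical catalog: uniform background plus a seasonally clustered subset
    rng = np.random.default_rng(2)
    times = np.concatenate([rng.uniform(0, 3650, 400),
                            365.25 * rng.integers(0, 10, 100) + rng.normal(200, 20, 100)])
    print(schuster_p_value(times))   # a small p-value suggests annual modulation
    ```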

  5. Method of using deuterium-cluster foils for an intense pulsed neutron source

    DOEpatents

    Miley, George H.; Yang, Xiaoling

    2013-09-03

    A method is provided for producing neutrons, comprising: providing a converter foil comprising deuterium clusters; focusing a laser on the foil with power and energy sufficient to cause deuteron ions to separate from the foil; and striking a surface of a target with the deuteron ions from the converter foil with energy sufficient to cause neutron production by a reaction selected from the group consisting of D-D fusion, D-T fusion, D-metal nuclear spallation, and p-metal. A further method is provided for assembling a plurality of target assemblies for a target injector to be used in the previously mentioned manner. A further method is provided for producing neutrons, comprising: splitting a laser beam into a first beam and a second beam; striking a first surface of a target with the first beam, and an opposite second surface of the target with the second beam with energy sufficient to cause neutron production.

  6. Satellite-Derived Sea Surface Temperature: Workshop 3

    NASA Technical Reports Server (NTRS)

    1985-01-01

    This is the third of a series of three workshops, sponsored by the National Aeronautics and Space Administration, to investigate the state of the art in global sea surface temperature measurements from space. Three workshops were necessary to process and analyze sufficient data from which to draw conclusions on the accuracy and reliability of the satellite measurements. In this workshop, the final two (out of a total of four) months of satellite and in situ data chosen for study were processed and evaluated. Results from the AVHRR, HIRS, SMMR, and VAS sensors, in comparison with in situ data from ships, XBTs, and buoys, confirmed satellite rms accuracies in the 0.5 to 1.0 C range, but with variable biases. These accuracies may degrade under adverse conditions for specific sensors. A variety of color maps, plots, and statistical tables are provided for detailed study of the individual sensor SST measurements.

  7. Evaluation of Assimilated SMOS Soil Moisture Data for US Cropland Soil Moisture Monitoring

    NASA Technical Reports Server (NTRS)

    Yang, Zhengwei; Sherstha, Ranjay; Crow, Wade; Bolten, John; Mladenova, Iva; Yu, Genong; Di, Liping

    2016-01-01

    Remotely sensed soil moisture data can provide timely, objective and quantitative crop soil moisture information with broad geospatial coverage and sufficiently high resolution observations collected throughout the growing season. This paper evaluates the feasibility of using the assimilated ESA Soil Moisture Ocean Salinity (SMOS) Mission L-band passive microwave data for operational US cropland soil surface moisture monitoring. The assimilated SMOS soil moisture data are first categorized to match the United States Department of Agriculture (USDA) National Agricultural Statistics Service (NASS) survey-based weekly soil moisture observation data, which are ordinal. The categorized assimilated SMOS soil moisture data are compared with NASS's survey-based weekly soil moisture data for consistency and robustness using visual assessment and rank correlation. Preliminary results indicate that the assimilated SMOS soil moisture data highly co-vary with NASS field observations across a large geographic area. Therefore, SMOS data have great potential for US operational cropland soil moisture monitoring.
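
    A minimal sketch of a rank correlation between ordinal weekly categories, with made-up category codes standing in for the NASS and SMOS-derived series:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical weekly values: ordinal soil-moisture categories
    # (1 = very short ... 4 = surplus) from surveys and from categorized SMOS data.
    nass = np.array([2, 2, 3, 3, 4, 3, 2, 1, 1, 2, 3, 3])
    smos = np.array([2, 3, 3, 3, 4, 3, 2, 2, 1, 2, 2, 3])

    rho, p = stats.spearmanr(nass, smos)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
    ```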

  8. Conceptual and non-conceptual repetition priming in category exemplar generation: Evidence from bilinguals.

    PubMed

    Francis, Wendy S; Fernandez, Norma P; Bjork, Robert A

    2010-10-01

    One measure of conceptual implicit memory is repetition priming in the generation of exemplars from a semantic category, but does such priming transfer across languages? That is, do the overlapping conceptual representations for translation equivalents provide a sufficient basis for such priming? In Experiment 1 (N=96) participants carried out a deep encoding task, and priming between languages was statistically reliable, but attenuated, relative to within-language priming. Experiment 2 (N=96) replicated the findings of Experiment 1 and assessed the contributions of conceptual and non-conceptual processes using a levels-of-processing manipulation. Words that underwent shallow encoding exhibited within-language, but not between-language, priming. Priming in shallow conditions cannot therefore be explained by incidental activation of the concept. Instead, part of the within-language priming effect, even under deep-encoding conditions, is due to increased availability of language-specific lemmas or phonological word forms.

  9. Conceptual and Non-conceptual Repetition Priming in Category Exemplar Generation: Evidence from Bilinguals

    PubMed Central

    Francis, Wendy S.; Fernandez, Norma P.; Bjork, Robert A.

    2010-01-01

    One measure of conceptual implicit memory is repetition priming in the generation of exemplars from a semantic category, but does such priming transfer across languages? That is, do the overlapping conceptual representations for translation equivalents provide a sufficient basis for such priming? In Experiment 1 (N = 96), participants carried out a deep encoding task, and priming between languages was statistically reliable, but attenuated, relative to within-language priming. Experiment 2 (N = 96) replicated the findings of Experiment 1 and assessed the contributions of conceptual and non-conceptual processes using a levels-of-processing manipulation. Words that underwent shallow encoding exhibited within-language, but not between-language, priming. Priming in shallow conditions cannot, therefore, be explained by incidental activation of the concept. Instead, part of the within-language priming effect, even under deep-encoding conditions, is due to increased availability of language-specific lemmas or phonological word forms. PMID:20924951

  10. Towards real-time metabolic profiling of a biopsy specimen during a surgical operation by 1H high resolution magic angle spinning nuclear magnetic resonance: a case report.

    PubMed

    Piotto, Martial; Moussallieh, François-Marie; Neuville, Agnès; Bellocq, Jean-Pierre; Elbayed, Karim; Namer, Izzie Jacques

    2012-01-18

    Providing information on cancerous tissue samples during a surgical operation can help surgeons delineate the limits of a tumoral invasion more reliably. Here, we describe the use of metabolic profiling of a colon biopsy specimen by high resolution magic angle spinning nuclear magnetic resonance spectroscopy to evaluate tumoral invasion during a simulated surgical operation. Biopsy specimens (n = 9) originating from the excised right colon of a 66-year-old Caucasian women with an adenocarcinoma were automatically analyzed using a previously built statistical model. Metabolic profiling results were in full agreement with those of a histopathological analysis. The time-response of the technique is sufficiently fast for it to be used effectively during a real operation (17 min/sample). Metabolic profiling has the potential to become a method to rapidly characterize cancerous biopsies in the operation theater.

  11. Fast emulation of track reconstruction in the CMS simulation

    NASA Astrophysics Data System (ADS)

    Komm, Matthias; CMS Collaboration

    2017-10-01

    Simulated samples of various physics processes are a key ingredient within analyses to unlock the physics behind LHC collision data. Samples with more and more statistics are required to keep up with the increasing amounts of recorded data. During sample generation, significant computing time is spent on the reconstruction of charged particle tracks from energy deposits which additionally scales with the pileup conditions. In CMS, the FastSimulation package is developed for providing a fast alternative to the standard simulation and reconstruction workflow. It employs various techniques to emulate track reconstruction effects in particle collision events. Several analysis groups in CMS are utilizing the package, in particular those requiring many samples to scan the parameter space of physics models (e.g. SUSY) or for the purpose of estimating systematic uncertainties. The strategies for and recent developments in this emulation are presented, including a novel, flexible implementation of tracking emulation while retaining a sufficient, tuneable accuracy.

  12. A Variational Statistical-Field Theory for Polar Liquid Mixtures

    NASA Astrophysics Data System (ADS)

    Zhuang, Bilin; Wang, Zhen-Gang

    Using a variational field-theoretic approach, we derive a molecularly based theory for polar liquid mixtures. The resulting theory consists of simple algebraic expressions for the free energy of mixing and the dielectric constant as functions of mixture composition. Using only the dielectric constants and the molar volumes of the pure liquid constituents, the theory evaluates the mixture dielectric constants in good agreement with the experimental values for a wide range of liquid mixtures, without using adjustable parameters. In addition, the theory predicts that liquids with similar dielectric constants and molar volumes dissolve well in each other, while sufficient disparity in these parameters results in phase separation. The calculated miscibility map on the dielectric constant-molar volume axes agrees well with known experimental observations for a large number of liquid pairs. Thus the theory provides a quantification of the well-known empirical "like-dissolves-like" rule. BZ acknowledges the A-STAR fellowship for financial support.

  13. High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI

    NASA Astrophysics Data System (ADS)

    Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer

    2011-03-01

    Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low-dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. Common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not provide sufficient characterization of noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT Flash), keeping the dose level identical for both. The high- and low-dose scans (i.e., 10% of high dose) were acquired from each scanner and L-moments of noise patches were calculated for the comparison.
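
    A minimal sketch of sample L-moment estimation via probability-weighted moments, applied to a synthetic noise patch; the intensity values are simulated rather than CT data:

    ```python
    import numpy as np

    def l_moments(x):
        """First two L-moments plus L-skewness and L-kurtosis ratios."""
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        j = np.arange(1, n + 1)
        b0 = x.mean()
        b1 = np.sum((j - 1) / (n - 1) * x) / n
        b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
        b3 = np.sum((j - 1) * (j - 2) * (j - 3)
                    / ((n - 1) * (n - 2) * (n - 3)) * x) / n
        l1, l2 = b0, 2 * b1 - b0
        l3 = 6 * b2 - 6 * b1 + b0
        l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
        return l1, l2, l3 / l2, l4 / l2

    patch = np.random.default_rng(1).normal(40, 12, size=2000)  # synthetic intensities
    print(l_moments(patch))
    ```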

  14. A systematic review of evidence for the effectiveness of practitioner-based complementary and alternative therapies in the management of rheumatic diseases: osteoarthritis.

    PubMed

    Macfarlane, Gary J; Paudyal, Priya; Doherty, Michael; Ernst, Edzard; Lewith, George; MacPherson, Hugh; Sim, Julius; Jones, Gareth T

    2012-12-01

    To critically review the evidence on the efficacy and effectiveness of practitioner-based complementary therapies for patients with osteoarthritis. We excluded t'ai chi and acupuncture, which have been the subject of recent reviews. Randomized controlled trials, published in English up to May 2011, were identified using systematic searches of bibliographic databases and searching of reference lists. Information was extracted on outcomes, statistical significance in comparison with alternative treatments and reported side effects. The methodological quality of the identified studies was determined using the Jadad scoring system. Outcomes considered were pain and patient global assessment. In all, 16 eligible trials were identified covering 12 therapies. Overall, there was no good evidence of the effectiveness of any of the therapies in relation to pain or global health improvement/quality of life because most therapies only had a single randomized controlled trial. Where positive results were reported, they were often comparing an active intervention with no intervention. Therapies with multiple trials either provided null (biofeedback) or inconsistent results (magnet therapy), or the trials available scored poorly for quality (chiropractic). There were few adverse events reported in the trials. There is not sufficient evidence to recommend any of the practitioner-based complementary therapies considered here for the management of OA, but neither is there sufficient evidence to conclude that they are not effective or efficacious.

  15. Factors related to falls among community dwelling elderly.

    PubMed

    Kuhirunyaratn, Piyathida; Prasomrak, Prasert; Jindawong, Bangonsri

    2013-09-01

    Falls among the elderly can lead to disability, hospitalization and premature death. This study aimed to determine the factors related to falls among community dwelling elderly. This case-control study was conducted at the Samlium Primary Care Unit (SPCU), Khon Kaen, Thailand. Cases were elderly individuals who had fallen within the previous six months and controls were elderly individuals who had not fallen during that same time period. Subjects were taken from elderly persons registered at the SPCU. The sample size was calculated to be 111 cases and 222 controls. Face to face interviews were conducted with subjects between May and June, 2011. The response rate was 100%. On bivariate analysis, the statistically significant factors related to falls were: regular medication use, co-morbidities, mobility, depression, cluttered rooms, slippery floors, unsupported toilets (without a hand rail), sufficient exercise, rapid posture change and wearing slippers. When controlling for other significant factors, multiple logistic regression revealed the following significant factors: regular medication use (AOR: 2.22; 95% CI: 1.19 - 4.12), depression (AOR: 1.76; 95% CI: 1.03 - 2.99), sufficient exercise (AOR: 0.34; 95% CI: 0.19 - 0.58) and wearing slippery shoes (AOR: 2.31; 95% CI: 1.24 - 4.29). Interventions need to be considered to modify these significant factors associated with falls, and education should be provided to those at risk.
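
    To make the analysis step concrete, the sketch below shows how adjusted odds ratios (AORs) with 95% confidence intervals can be obtained from a multiple logistic regression in a case-control setting, using statsmodels. The predictor names mirror the factors above, but the data are simulated and the coefficients arbitrary; this is not the study's dataset or code.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 333                                    # 111 cases + 222 controls (same ratio as the study)
    df = pd.DataFrame({
        "regular_medication":  rng.integers(0, 2, n),
        "depression":          rng.integers(0, 2, n),
        "sufficient_exercise": rng.integers(0, 2, n),
        "slippery_footwear":   rng.integers(0, 2, n),
    })
    # Hypothetical outcome: 1 = fell in the last six months (case), 0 = control
    logit = (-1.0 + 0.8*df.regular_medication + 0.6*df.depression
             - 1.0*df.sufficient_exercise + 0.8*df.slippery_footwear)
    df["fall"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

    X = sm.add_constant(df.drop(columns="fall"))
    fit = sm.Logit(df["fall"], X).fit(disp=0)

    aor = np.exp(fit.params)                   # adjusted odds ratios
    ci = np.exp(fit.conf_int())                # 95% confidence intervals
    print(pd.concat([aor.rename("AOR"),
                     ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
    ```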

  16. Using artificial neural network and satellite data to predict rice yield in Bangladesh

    NASA Astrophysics Data System (ADS)

    Akhand, Kawsar; Nizamuddin, Mohammad; Roytman, Leonid; Kogan, Felix; Goldberg, Mitch

    2015-09-01

    Rice production in Bangladesh is a crucial part of the national economy, providing about 70 percent of an average citizen's total calorie intake. The demand for rice rises constantly as the population grows every year, while the amount of land under cultivation decreases. In addition, Bangladesh faces production constraints such as drought, flooding, salinity, lack of irrigation facilities and lack of modern technology. To maintain self-sufficiency in rice, Bangladesh will have to continue to expand rice production by increasing yield at a rate at least equal to population growth until the demand for rice has stabilized. Accurate rice yield prediction is one of the most important challenges in managing the supply and demand of rice as well as in decision making processes. An Artificial Neural Network (ANN) is used to construct a model to predict Aus rice yield in Bangladesh. Advanced Very High Resolution Radiometer (AVHRR)-based remote sensing vegetation health (VH) indices (the Vegetation Condition Index (VCI) and the Temperature Condition Index (TCI)) are used as input variables, and official statistics of Aus rice yield are used as the target variable for the ANN prediction model. The result obtained with the ANN method is encouraging, and the error of prediction is less than 10%. Therefore, such predictions can play an important role in planning and storing sufficient rice to face any future uncertainty.
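
    The sketch below illustrates the general modeling approach: a small feed-forward neural network regressing yield on VCI and TCI. It uses scikit-learn's MLPRegressor as a stand-in for the authors' ANN, and the yearly VCI/TCI/yield values are synthetic, so the reported error only shows how such a model might be evaluated (e.g. via mean absolute percentage error), not the paper's result.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(42)
    n_years = 30                                         # hypothetical yearly records
    vci = rng.uniform(20, 80, n_years)                   # Vegetation Condition Index
    tci = rng.uniform(20, 80, n_years)                   # Temperature Condition Index
    yield_t_ha = 1.2 + 0.015*vci + 0.010*tci + rng.normal(0, 0.1, n_years)

    X = np.column_stack([vci, tci])
    X_tr, X_te, y_tr, y_te = train_test_split(X, yield_t_ha, test_size=0.3, random_state=0)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
    model.fit(X_tr, y_tr)

    pred = model.predict(X_te)
    mape = np.mean(np.abs((y_te - pred) / y_te)) * 100   # percentage error of prediction
    print(f"MAPE: {mape:.1f}%")
    ```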

  17. Study of Aerosol-Cloud Interaction from ground-based long-term statistical analysis at SGP

    NASA Astrophysics Data System (ADS)

    Zhao, C.; Qiu, Y.

    2017-12-01

    Previous studies have shown various relationships between cloud droplet effective radius (re) and aerosol amount based on limited observations, indicative of the uncertainties in this relationship caused by many factors. Using 8 years of ground-based cloud and aerosol observations at the Southern Great Plains (SGP) site in Oklahoma, US, we analyze the seasonal variation of the aerosol effect on low liquid clouds. The analysis shows a positive rather than negative AOD-re relationship in all seasons except summer. The potential contribution of precipitable water vapor (PWV) to the AOD-re relationship has been analyzed. Results show that the AOD-re relationship is indeed negative in low-PWV conditions regardless of season, but it turns positive in high-PWV conditions for all seasons other than summer. The most likely explanation for the positive AOD-re relationship in high-PWV conditions in spring, fall and winter is that high PWV could promote the growth of cloud droplets by providing sufficient water vapor. The different behavior of the AOD-re relationship in summer could be related to the much heavier aerosol loading, which makes the PWV insufficient, so that the droplets compete with each other for water. By limiting the variation of other meteorological conditions such as lower-tropospheric stability and wind speed near cloud base, further analysis also indicates that higher PWV helps change the AOD-re relationship from negative to positive.
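
    A minimal sketch of the stratification idea, assuming AOD, re and PWV samples are already collocated: compute the sign of the AOD-re correlation within PWV bins. The data below are synthetic and constructed so that low-PWV bins show a negative relationship and high-PWV bins a positive one; the column names and bin edges are arbitrary choices, not those of the study.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(7)
    n = 5000
    pwv = rng.uniform(0.5, 5.0, n)                        # precipitable water vapor (cm)
    aod = rng.lognormal(mean=-1.5, sigma=0.5, size=n)     # aerosol optical depth

    # Toy model: re decreases with AOD when PWV is low, increases when PWV is high
    re = 8.0 - 2.0*np.log(aod)*(pwv < 2.0) + 1.5*np.log(aod)*(pwv >= 2.0) + rng.normal(0, 1, n)

    df = pd.DataFrame({"AOD": aod, "re": re, "PWV": pwv})
    df["pwv_bin"] = pd.cut(df["PWV"], bins=[0, 1, 2, 3, 5], labels=["0-1", "1-2", "2-3", "3-5"])

    # Sign of the AOD-re relationship within each PWV regime
    corr = df.groupby("pwv_bin", observed=True).apply(
        lambda g: np.corrcoef(np.log(g["AOD"]), g["re"])[0, 1])
    print(corr)
    ```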

  18. Accuracy assessment of maps of forest condition: Statistical design and methodological considerations [Chapter 5

    Treesearch

    Raymond L. Czaplewski

    2003-01-01

    No thematic map is perfect. Some pixels or polygons are not accurately classified, no matter how well the map is crafted. Therefore, thematic maps need metadata that sufficiently characterize the nature and degree of these imperfections. To decision-makers, an accuracy assessment helps judge the risks of using imperfect geospatial data. To analysts, an accuracy...

  19. Machine Learning in the Presence of an Adversary: Attacking and Defending the SpamBayes Spam Filter

    DTIC Science & Technology

    2008-05-20

    Machine learning techniques are often used for decision making in security critical applications such as intrusion detection and spam filtering...filter. The defenses shown in this thesis are able to work against the attacks developed against SpamBayes and are sufficiently generic to be easily extended into other statistical machine learning algorithms.

  20. Integration of Advanced Statistical Analysis Tools and Geophysical Modeling

    DTIC Science & Technology

    2012-08-01

    Carin (Duke University); Douglas Oldenburg (University of British Columbia); Stephen Billings, Leonard Pasion, Laurens Beran (Sky Research)

    ...data processing for UXO discrimination is the time (or frequency) dependent dipole model (Bell and Barrow (2001), Pasion and Oldenburg (2001), Zhang...described by a bimodal distribution (i.e. two Gaussians, see Pasion (2007)). Data features are nonetheless useful when data quality is not sufficient

  1. Technological challenges for hydrocarbon production in the Barents Sea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gudmestad, O.T.; Strass, P.

    1995-02-01

    Technological challenges for hydrocarbon production in the Barents Sea relate mainly to the climatic conditions (ice and icebergs), to the relatively deep water of the area, and to the distance to the market for transportation of gas. It is suggested that environmental conditions must be carefully mapped over a sufficiently long period to get reliable statistics for the area.

  2. Simultaneous Use of Multiple Answer Copying Indexes to Improve Detection Rates

    ERIC Educational Resources Information Center

    Wollack, James A.

    2006-01-01

    Many of the currently available statistical indexes to detect answer copying lack sufficient power at small α levels or when the amount of copying is relatively small. Furthermore, there is no one index that is uniformly best. Depending on the type or amount of copying, certain indexes are better than others. The purpose of this article was…

  3. Publication Bias in "Red, Rank, and Romance in Women Viewing Men," by Elliot et al. (2010)

    ERIC Educational Resources Information Center

    Francis, Gregory

    2013-01-01

    Elliot et al. (2010) reported multiple experimental findings that the color red modified women's ratings of attractiveness, sexual desirability, and status of a photographed man. An analysis of the reported statistics of these studies indicates that the experiments lack sufficient power to support these claims. Given the power of the experiments,…

  4. Invariant target detection by a correlation radiometer

    NASA Astrophysics Data System (ADS)

    Murza, L. P.

    1986-12-01

    The paper is concerned with the problem of the optimal detection of a heat-emitting target by a two-channel radiometer with an unstable amplification circuit. An expression is obtained for an asymptotically sufficient detection statistic which is invariant to changes in the amplification coefficients of the channels. The algorithm proposed here can be implemented numerically using a relatively simple program.

  5. Experimental research on mathematical modelling and unconventional control of clinker kiln in cement plants

    NASA Astrophysics Data System (ADS)

    Rusu-Anghel, S.

    2017-01-01

    Analytical modeling of the cement manufacturing process is difficult because of its complexity and has not resulted in sufficiently precise mathematical models. In this paper, based on a statistical model of the process and using the knowledge of human experts, a fuzzy system for automatic control of the clinkering process was designed.

  6. Method for producing strain tolerant multifilamentary oxide superconducting wire

    DOEpatents

    Finnemore, D.K.; Miller, T.A.; Ostenson, J.E.; Schwartzkopf, L.A.; Sanders, S.C.

    1994-07-19

    A strain tolerant multifilamentary wire capable of carrying superconducting currents is provided comprising a plurality of discontinuous filaments formed from a high temperature superconducting material. The discontinuous filaments have a length at least several orders of magnitude greater than the filament diameter and are sufficiently strong while in an amorphous state to withstand compaction. A normal metal is interposed between and binds the discontinuous filaments to form a normal metal matrix capable of withstanding heat treatment for converting the filaments to a superconducting state. The geometry of the filaments within the normal metal matrix provides substantial filament-to-filament overlap, and the normal metal is sufficiently thin to allow supercurrent transfer between the overlapped discontinuous filaments but is also sufficiently thick to provide strain relief to the filaments. 6 figs.

  7. Method for producing strain tolerant multifilamentary oxide superconducting wire

    DOEpatents

    Finnemore, Douglas K.; Miller, Theodore A.; Ostenson, Jerome E.; Schwartzkopf, Louis A.; Sanders, Steven C.

    1994-07-19

    A strain tolerant multifilamentary wire capable of carrying superconducting currents is provided comprising a plurality of discontinuous filaments formed from a high temperature superconducting material. The discontinuous filaments have a length at least several orders of magnitude greater than the filament diameter and are sufficiently strong while in an amorphous state to withstand compaction. A normal metal is interposed between and binds the discontinuous filaments to form a normal metal matrix capable of withstanding heat treatment for converting the filaments to a superconducting state. The geometry of the filaments within the normal metal matrix provides substantial filament-to-filament overlap, and the normal metal is sufficiently thin to allow supercurrent transfer between the overlapped discontinuous filaments but is also sufficiently thick to provide strain relief to the filaments.

  8. Statistical machine translation for biomedical text: are we there yet?

    PubMed

    Wu, Cuijun; Xia, Fei; Deleger, Louise; Solti, Imre

    2011-01-01

    In our paper we addressed the research question: "Has machine translation achieved sufficiently high quality to translate PubMed titles for patients?". We analyzed statistical machine translation output for six foreign language-English translation pairs (bidirectionally). We built a high-performing in-house system and evaluated its output for each translation pair on a large scale, both with automated BLEU scores and with human judgment. In addition to the in-house system, we also evaluated Google Translate's performance specifically within the biomedical domain. We report high performance for the German-English, French-English and Spanish-English bidirectional translation pairs for both Google Translate and our system.
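
    For readers unfamiliar with the automated part of such an evaluation, the sketch below computes corpus-level BLEU for two hypothetical systems against reference translations using NLTK. The tokenized titles are toy examples, not PubMed data, and smoothing is applied because very short segments otherwise produce zero n-gram counts.

    ```python
    from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

    # Hypothetical reference translations (tokenized) and two systems' outputs
    references = [
        [["effect", "of", "aspirin", "on", "cardiovascular", "risk"]],
        [["a", "randomized", "trial", "of", "vitamin", "d", "supplementation"]],
    ]
    system_a = [
        ["effect", "of", "aspirin", "on", "cardiovascular", "risk"],
        ["a", "randomised", "trial", "of", "vitamin", "d", "supplements"],
    ]
    system_b = [
        ["aspirin", "effect", "for", "heart", "risk", "patients"],
        ["randomized", "study", "of", "vitamin", "d"],
    ]

    smooth = SmoothingFunction().method1   # avoids zero scores on short titles
    for name, hyp in [("system A", system_a), ("system B", system_b)]:
        print(name, corpus_bleu(references, hyp, smoothing_function=smooth))
    ```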

  9. Deciphering viscous flow of frictional melts with the mini-AMS method

    NASA Astrophysics Data System (ADS)

    Ferré, Eric C.; Chou, Yu-Min; Kuo, Ruo Lin; Yeh, En-Chao; Leibovitz, Natalie R.; Meado, Andrea L.; Campbell, Lucy; Geissman, John W.

    2016-09-01

    The anisotropy of magnetic susceptibility (AMS) is widely used to analyze magmatic flow in intrusive igneous bodies including plutons, sills and dikes. This method, owing its success to the rapid nature of measurements, provides a proxy for the orientation of markers with shape anisotropy that flow and align in a viscous medium. AMS specimens typically are 25 mm diameter right cylinders or 20 mm on-a-side cubes, representing a volume deemed statistically representative. Here, we present new AMS results, based on significantly smaller cubic specimens, which are 3.5 mm on a side, hence ∼250 times volumetrically smaller than conventional specimens. We show that, in the case of frictional melts, which inherently have an extremely small grain size, this small volume is in most cases sufficient to characterize the pseudotachylyte fabric, particularly when magnetite is present. Further, we demonstrate that the mini-AMS method provides new opportunities to investigate the details of frictional melt flow in these coseismic miniature melt bodies. This new method offers significant potential to investigate frictional melt flow in pseudotachylyte veins including contributions to the lubrication of faults at shallow to moderate depths.

  10. EviNet: a web platform for network enrichment analysis with flexible definition of gene sets.

    PubMed

    Jeggari, Ashwini; Alekseenko, Zhanna; Petrov, Iurii; Dias, José M; Ericson, Johan; Alexeyenko, Andrey

    2018-06-09

    The new web resource EviNet provides an easy-to-run interface to network enrichment analysis for exploration of novel, experimentally defined gene sets. The major advantages of this analysis are (i) applicability to any genes found in the global network rather than only to those with pathway/ontology term annotations, (ii) ability to connect genes via different molecular mechanisms rather than within one high-throughput platform, and (iii) statistical power sufficient to detect enrichment of very small sets, down to individual genes. The users' gene sets are either defined prior to upload or derived interactively from an uploaded file by differential expression criteria. The pathways and networks used in the analysis can be chosen from the collection menu. The calculation is typically done within seconds or minutes, and a stable URL is provided immediately. The results are presented in both visual (network graphs) and tabular formats using jQuery libraries. Uploaded data and analysis results are kept in separate project directories not accessible by other users. EviNet is available at https://www.evinet.org/.

  11. Investigation of albumin-derived perfluorocarbon-based capsules by holographic optical trapping

    PubMed Central

    Köhler, Jannis; Ruschke, Jegor; Ferenz, Katja Bettina; Esen, Cemal; Kirsch, Michael; Ostendorf, Andreas

    2018-01-01

    Albumin-derived perfluorocarbon-based capsules are promising as artificial oxygen carriers with high solubility. However, these capsules have to be studied further before initial human clinical tests can take place. The aim of this paper is to provide and characterize a holographic optical tweezer that enables contactless trapping and moving of individual capsules in an environment that closely mimics physiological (in vivo) conditions, in order to learn more about the behavior of these artificial oxygen carriers in blood plasma without recourse to animal experiments. To this end, the motion behavior of capsules in a ring-shaped or vortex beam is analyzed and optimized by determining the optical forces in the radial and axial directions. In addition, through the customization and generation of dynamic phase holograms, the optical tweezer is used for first investigations of the aggregation behavior of the capsules, and a statistical evaluation of the bonding as a function of capsule size is performed. The results show that the optical tweezer is sufficient for studying individual perfluorocarbon-based capsules and provide information about the interaction of these capsules for future use as artificial oxygen carriers. PMID:29552409

  12. Effects of different preservation methods on inter simple sequence repeat (ISSR) and random amplified polymorphic DNA (RAPD) molecular markers in botanic samples.

    PubMed

    Wang, Xiaolong; Li, Lin; Zhao, Jiaxin; Li, Fangliang; Guo, Wei; Chen, Xia

    2017-04-01

    To evaluate the effects of different preservation methods (storage in a -20°C ice chest, preservation in liquid nitrogen and drying in silica gel) on inter simple sequence repeat (ISSR) and random amplified polymorphic DNA (RAPD) analyses in various botanical specimens (including broad-leaved plants, needle-leaved plants and succulent plants) preserved for different times (three weeks and three years), we used a statistical analysis based on the number of bands, a genetic index and cluster analysis. The results demonstrate that the methods used to preserve samples can provide sufficient amounts of genomic DNA for ISSR and RAPD analyses; however, the effects of the different preservation methods on these analyses vary significantly, while the preservation time has little effect. Our results provide a reference for researchers to select the most suitable preservation method, depending on their study subject, for the analysis of molecular markers based on genomic DNA. Copyright © 2017 Académie des sciences. Published by Elsevier Masson SAS. All rights reserved.

  13. Cupriavidus metallidurans biomineralization ability and its application as a bioconsolidation enhancer for ornamental marble stone.

    PubMed

    Daskalakis, Markos I; Magoulas, Antonis; Kotoulas, Georgios; Katsikis, Ioannis; Bakolas, Asterios; Karageorgis, Aristomenis P; Mavridou, Athena; Doulia, Danae; Rigas, Fotis

    2014-08-01

    Bacterially induced calcium carbonate precipitation by a Cupriavidus metallidurans isolate was investigated to develop an environmentally friendly method for the restoration and preservation of ornamental stones. Biomineralization experiments were carried out in a growth medium via a Design of Experiments (DoE) approach using, as design factors, the temperature, growth medium concentration, and inoculum concentration. The optimum conditions were determined with the aid of consecutive experiments based on response surface methodology (RSM) and were successfully validated thereafter. Statistical analysis can be utilized as a tool for screening bacterial bioprecipitation, as it considerably reduced the experimental time and effort needed for bacterial evaluation. Analytical methods provided insight into the biomineral characteristics, and sonication tests proved that our isolate could create a solid new layer of vaterite on the marble substrate withstanding sonication forces. C. metallidurans ACA-DC 4073 provided a compact vaterite layer on the marble substrate with morphological characteristics that assisted in its differentiation. The latter proved valuable when spraying a minimal amount of inoculated medium on the marble substrate under conditions close to an in situ application. A sufficient and clearly distinguishable layer was identified.
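
    A minimal sketch of the DoE/RSM idea, under invented numbers: fit a second-order response surface to a three-factor full-factorial design by ordinary least squares and locate the predicted optimum in coded units. The design, response values and noise level are all hypothetical and only show the shape of such an analysis, not the study's experimental data.

    ```python
    import numpy as np
    from itertools import product

    # Hypothetical coded factor levels: temperature, medium conc., inoculum conc.
    levels = [-1, 0, 1]
    X = np.array(list(product(levels, repeat=3)), dtype=float)    # 3^3 full factorial
    rng = np.random.default_rng(3)
    # Hypothetical response, e.g. precipitated CaCO3 (mg), with a curved optimum
    y = (50 + 8*X[:, 0] + 5*X[:, 1] - 6*X[:, 0]**2 - 4*X[:, 1]**2
         + 3*X[:, 0]*X[:, 1] + rng.normal(0, 1.5, len(X)))

    def quadratic_design_matrix(X):
        """Intercept, linear, squared and two-factor interaction terms."""
        cols = [np.ones(len(X))]
        cols += [X[:, j] for j in range(X.shape[1])]
        cols += [X[:, j]**2 for j in range(X.shape[1])]
        cols += [X[:, i]*X[:, j] for i in range(X.shape[1]) for j in range(i+1, X.shape[1])]
        return np.column_stack(cols)

    A = quadratic_design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Locate the stationary point of the fitted surface on a coarse grid
    grid = np.array(list(product(np.linspace(-1, 1, 21), repeat=3)))
    pred = quadratic_design_matrix(grid) @ beta
    print("predicted optimum (coded units):", grid[np.argmax(pred)])
    ```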

  14. Perception, Attitude and Instructional Preferences on Physics in High School Students: An Exploration in an International Setting

    NASA Astrophysics Data System (ADS)

    Narayanan, Mini; Gafoor, Abdul

    A questionnaire survey explored perception, attitude and instructional preferences with respect to gender and nationality in high school students of India and the USA, with a sample of 1101 Indian and 458 US students. Descriptive statistics techniques were adopted for the analysis. Male and female students in the USA were at the high and low ends of the spectrum, respectively, in perception and attitude. Preference for instructional strategies was found to be independent of nationality, the strategies students had been exposed to, whether they opted for science, class size and facilities. Responses from both countries indicate a preference for an integrated instructional strategy that has strong teacher involvement in a student-centered framework. A thoughtful and properly designed instructional strategy could provide sufficient elements for modifying students' epistemological beliefs. Understanding the nature and process of physics along with a better learning outcome is usually not possible by administering student-centered or teacher-centered strategies alone in their purest form. This study provides support for attaining two equally significant but contrasting goals in Physics Education Research, namely conceptual development and increased interest and attainment in learners, through integration.

  15. Wide dynamic range enrichment method of semiconducting single-walled carbon nanotubes with weak field centrifugation

    NASA Astrophysics Data System (ADS)

    Reis, Wieland G.; Tomović, Željko; Weitz, R. Thomas; Krupke, Ralph; Mikhael, Jules

    2017-03-01

    The potential of single-walled carbon nanotubes (SWCNTs) to outperform silicon in electronic applications was finally enabled through the selective separation of semiconducting nanotubes from the as-synthesized statistical mix with polymeric dispersants. Such separation methods typically provide high semiconducting-purity samples with a narrow diameter distribution, i.e. almost single chiralities. But for a wide range of applications, high-purity mixtures of small and large diameters are sufficient or even required. Here we prove that weak field centrifugation is a diameter-independent method for the enrichment of semiconducting nanotubes. We show that the non-selective and strong adsorption of polyarylether dispersants on nanostructured carbon surfaces enables simple separation of diverse raw materials with different SWCNT diameters. In addition, and for the first time, we demonstrate that increased temperature enables higher-purity separation. Furthermore, we show that the mode of action behind this electronic enrichment is strongly connected to both colloidal stability and protonation. By giving simple access to electronically sorted SWCNTs of any diameter, the wide dynamic range of weak field centrifugation can make SWCNTs economically relevant.

  16. Review of human hair optical properties in possible relation to melanoma development.

    PubMed

    Huang, Xiyong; Protheroe, Michael D; Al-Jumaily, Ahmed M; Paul, Sharad P; Chalmers, Andrew N

    2018-05-01

    Immigration and epidemiological studies provide evidence indicating a correlation between high ultraviolet exposure during childhood and increased risk of melanoma in later life. While the explanation of this phenomenon has not been found in the skin, a class of hair has been hypothesized to be involved in this process by transmitting sufficient ultraviolet rays along the hair shaft to possibly damage the stem cells in the hair follicle, ultimately resulting in melanoma in later life. First, the anatomy of hair, its possible contribution to melanoma development, and tissue optical properties are briefly introduced to provide the necessary background. The paper then focuses on a review of experimental studies of the optical properties of human hair, covering sample preparation, measurement techniques, results, and statistical analysis. The Monte Carlo photon simulation of human hair is next outlined. Finally, current knowledge of the optical studies of hair is discussed in the light of their possible contribution to melanoma development, and the future work needed to support this hypothesis is suggested. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
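
    To illustrate what a Monte Carlo photon simulation of this kind involves, the sketch below estimates the fraction of photons that travel a given axial distance through a turbid medium before being absorbed, using isotropic scattering and exponentially distributed free paths. The absorption and scattering coefficients are placeholders, not measured hair properties, and a realistic simulation would include anisotropic (e.g. Henyey-Greenstein) scattering and the hair's layered geometry.

    ```python
    import numpy as np

    def transmitted_fraction(mu_a, mu_s, length_mm, n_photons=100_000, seed=0):
        """Crude Monte Carlo: isotropic scattering with exponential free paths;
        returns the fraction of photons whose cumulative axial travel exceeds
        `length_mm` before being absorbed."""
        rng = np.random.default_rng(seed)
        mu_t = mu_a + mu_s                         # total interaction coefficient (1/mm)
        albedo = mu_s / mu_t                       # survival probability per interaction
        alive = np.ones(n_photons, dtype=bool)
        z = np.zeros(n_photons)                    # axial position along the shaft (mm)
        cos_theta = np.ones(n_photons)             # start travelling along +z
        transmitted = np.zeros(n_photons, dtype=bool)
        while alive.any():
            step = rng.exponential(1.0 / mu_t, n_photons)
            z[alive] += step[alive] * cos_theta[alive]
            transmitted |= alive & (z >= length_mm)
            alive &= ~transmitted
            # absorption check, then isotropic re-scatter of the survivors
            alive &= rng.random(n_photons) < albedo
            cos_theta = rng.uniform(-1.0, 1.0, n_photons)
        return transmitted.mean()

    # Placeholder coefficients (per mm); real values would come from the reviewed studies
    print(transmitted_fraction(mu_a=0.5, mu_s=2.0, length_mm=3.0))
    ```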

  17. Impact of Market Behavior, Fleet Composition, and Ancillary Services on Revenue Sufficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany

    This presentation provides an overview of new and ongoing NREL research that aims to improve our understanding of reliability and revenue sufficiency challenges through modeling tools within a markets framework.

  18. Method of dehalogenation using diamonds

    DOEpatents

    Farcasiu, Malvina; Kaufman, Phillip B.; Ladner, Edward P.; Anderson, Richard R.

    2000-01-01

    A method for preparing olefins and halogenated olefins is provided comprising contacting halogenated compounds with diamonds for a sufficient time and at a sufficient temperature to convert the halogenated compounds to olefins and halogenated olefins via elimination reactions.

  19. Effectiveness of knowledge translation interventions to improve cancer pain management.

    PubMed

    Cummings, Greta G; Olivo, Susan Armijo; Biondo, Patricia D; Stiles, Carla R; Yurtseven, Ozden; Fainsinger, Robin L; Hagen, Neil A

    2011-05-01

    Cancer pain is prevalent, yet patients do not receive best care despite widely available evidence. Although national cancer control policies call for education, effectiveness of such programs is unclear and best practices are not well defined. To examine existing evidence on whether knowledge translation (KT) interventions targeting health care providers, patients, and caregivers improve cancer pain outcomes. A systematic review and meta-analysis were undertaken to evaluate primary studies that examined effects of KT interventions on providers and patients. Twenty-six studies met the inclusion criteria. Five studies reported interventions targeting health care providers, four focused on patients or their families, one study examined patients and their significant others, and 16 studies examined patients only. Seven quantitative comparisons measured the statistical effects of interventions. A significant difference favoring the treatment group in least pain intensity (95% confidence interval [CI]: 0.44, 1.42) and in usual pain/average pain (95% CI: 0.13, 0.74) was observed. No other statistical differences were observed. However, most studies were assessed as having high risk of bias and failed to report sufficient information about the intervention dose, quality of educational material, fidelity, and other key factors required to evaluate effectiveness of intervention design. Trials that used a higher dose of KT intervention (characterized by extensive follow-up, comprehensive educational program, and higher resource allocation) were significantly more likely to have positive results than trials that did not use this approach. Further attention to methodological issues to improve educational interventions and research to clarify factors that lead to better pain control are urgently needed. Crown Copyright © 2011. Published by Elsevier Inc. All rights reserved.

  20. DNA Commission of the International Society for Forensic Genetics: Recommendations on the validation of software programs performing biostatistical calculations for forensic genetics applications.

    PubMed

    Coble, M D; Buckleton, J; Butler, J M; Egeland, T; Fimmers, R; Gill, P; Gusmão, L; Guttman, B; Krawczak, M; Morling, N; Parson, W; Pinto, N; Schneider, P M; Sherry, S T; Willuweit, S; Prinz, M

    2016-11-01

    The use of biostatistical software programs to assist in data interpretation and calculate likelihood ratios is essential to forensic geneticists and part of the daily case work flow for both kinship and DNA identification laboratories. Previous recommendations issued by the DNA Commission of the International Society for Forensic Genetics (ISFG) covered the application of biostatistical evaluations for STR typing results in identification and kinship cases, and this is now being expanded to provide best practices regarding validation and verification of the software required for these calculations. With larger multiplexes, more complex mixtures, and increasing requests for extended family testing, laboratories are relying more than ever on specific software solutions, and sufficient validation, training and extensive documentation are of the utmost importance. Here, we present recommendations for the minimum requirements to validate biostatistical software to be used in forensic genetics. We distinguish between developmental validation and the responsibilities of the software developer or provider, and the internal validation studies to be performed by the end user. Recommendations for the software provider address, for example, the documentation of the underlying models used by the software, validation data expectations, version control, implementation and training support, as well as continuity and user notifications. For the internal validations the recommendations include: creating a validation plan, requirements for the range of samples to be tested, Standard Operating Procedure development, and internal laboratory training and education. To ensure that all laboratories have access to a wide range of samples for validation and training purposes the ISFG DNA commission encourages collaborative studies and public repositories of STR typing results. Published by Elsevier Ireland Ltd.

  1. The Scientific Method, Diagnostic Bayes, and How to Detect Epistemic Errors

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2015-12-01

    In the past decades, Bayesian methods have found widespread application and use in environmental systems modeling. Bayes theorem states that the posterior probability P(H|D̂) of a hypothesis H is proportional to the product of the prior probability P(H) of this hypothesis and the likelihood L(H|D̂) of the same hypothesis given the new/incoming observations D̂. In science and engineering, H often constitutes some numerical simulation model D = F(x,·), which summarizes, using algebraic, empirical, and differential equations, state variables and fluxes, all our theoretical and/or practical knowledge of the system of interest; x are the d unknown parameters which are subject to inference using some data D̂ of the observed system response. The Bayesian approach is intimately related to the scientific method and uses an iterative cycle of hypothesis formulation (model), experimentation and data collection, and theory/hypothesis refinement to elucidate the rules that govern the natural world. Unfortunately, model refinement has proven to be very difficult, in large part because of the poor diagnostic power of residual-based likelihood functions (Gupta et al., 2008). This has inspired Vrugt and Sadegh (2013) to advocate the use of 'likelihood-free' inference using approximate Bayesian computation (ABC). This approach uses one or more summary statistics S(D̂) of the original data D̂, designed ideally to be sensitive only to one particular process in the model. Any mismatch between the observed and simulated summary metrics is then easily linked to a specific model component. A recurrent issue with the application of ABC is self-sufficiency of the summary statistics. In theory, S(·) should contain as much information as the original data itself, yet complex systems rarely admit sufficient statistics. In this article, we propose to combine the ideas of ABC and regular Bayesian inference to guarantee that no information is lost in diagnostic model evaluation. This hybrid approach, coined diagnostic Bayes, uses the summary metrics as prior distribution and the original data in the likelihood function, i.e. P(x|D̂) ∝ P(x|S(D̂)) L(x|D̂). A case study illustrates the ability of the proposed methodology to diagnose epistemic errors and provide guidance on model refinement.
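
    As a toy illustration of the two ingredients being combined, the sketch below runs ABC rejection sampling with a single summary statistic (the median) for a one-parameter Gaussian model, and then re-weights the accepted draws with the full-data likelihood, mimicking the P(x|D̂) ∝ P(x|S(D̂)) L(x|D̂) construction. The model, prior, tolerance and summary choice are all invented for illustration and are not the paper's case study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # "Observed" data from an unknown model parameter x_true
    x_true = 2.5
    d_obs = rng.normal(x_true, 1.0, size=200)
    S_obs = np.median(d_obs)                    # summary statistic S(D)

    def simulate(x, n=200):
        return rng.normal(x, 1.0, size=n)

    # Stage 1: ABC rejection using only the summary statistic
    prior = rng.uniform(-10, 10, size=50_000)   # broad prior over x
    keep = []
    for x in prior:
        if abs(np.median(simulate(x)) - S_obs) < 0.1:   # tolerance epsilon
            keep.append(x)
    abc_posterior = np.array(keep)              # plays the role of P(x | S(D))

    # Stage 2: re-weight the ABC draws with the full-data likelihood L(x | D)
    loglik = np.array([np.sum(-0.5 * ((d_obs - x) / 1.0) ** 2) for x in abc_posterior])
    w = np.exp(loglik - loglik.max())
    w /= w.sum()
    posterior_mean = np.sum(w * abc_posterior)
    print(len(abc_posterior), abc_posterior.mean(), posterior_mean)
    ```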

  2. Evaluation of errors in quantitative determination of asbestos in rock

    NASA Astrophysics Data System (ADS)

    Baietto, Oliviero; Marini, Paola; Vitaliti, Martina

    2016-04-01

    The quantitative determination of the asbestos content of rock matrices is a complex operation which is susceptible to large errors. The principal methodologies for the analysis are Scanning Electron Microscopy (SEM) and Phase Contrast Optical Microscopy (PCOM). Although the resolution of PCOM is inferior to that of SEM, PCOM analysis has several advantages, including greater representativity of the analyzed sample, more effective recognition of chrysotile and a lower cost. The DIATI LAA internal methodology for PCOM analysis is based on a mild grinding of a rock sample, its subdivision into 5-6 grain size classes smaller than 2 mm and a subsequent microscopic analysis of a portion of each class. PCOM is based on the optical properties of asbestos and of the liquids of known refractive index in which the particles under analysis are immersed. The error evaluation in the analysis of rock samples, contrary to the analysis of airborne filters, cannot be based on a statistical distribution. For airborne filters a binomial (Poisson) distribution, which theoretically defines the variation in the count of fibers resulting from the observation of analysis fields chosen randomly on the filter, can be applied. The analysis of rock matrices instead cannot lean on any statistical distribution, because the most important object of the analysis is the size of the asbestiform fibers and bundles of fibers observed and the resulting ratio between the weight of the fibrous component and that of the granular one. The error estimates generally provided by public and private institutions vary between 50 and 150 percent; there are, however, no specific studies that discuss the origin of the error or that link it to the asbestos content. Our work aims to provide a reliable estimation of the error in relation to the applied methodologies and to the total content of asbestos, especially for values close to the legal limits. The error assessment must be made through the repetition of the same analysis on the same sample, in order to estimate both the error related to the representativeness of the sample and the error related to the sensitivity of the operator, and thus to provide a sufficiently reliable uncertainty for the method. We used about 30 natural rock samples with different asbestos contents, performing 3 analyses on each sample to obtain a trend sufficiently representative of the percentage. Furthermore, on one chosen sample we performed 10 repetitions of the analysis to try to define the error of the methodology more specifically.

  3. Reliability of the ECHOWS Tool for Assessment of Patient Interviewing Skills.

    PubMed

    Boissonnault, Jill S; Evans, Kerrie; Tuttle, Neil; Hetzel, Scott J; Boissonnault, William G

    2016-04-01

    History taking is an important component of patient/client management. Assessment of student history-taking competency can be achieved via a standardized tool. The ECHOWS tool has been shown to be valid, with modest intrarater reliability, in a previous study, but that study did not demonstrate sufficient power to definitively prove its stability. The purposes of this study were: (1) to assess the reliability of the ECHOWS tool for student assessment of patient interviewing skills and (2) to determine whether the tool discerns between novice and experienced skill levels. A reliability and construct validity assessment was conducted. Three faculty members from the United States and Australia scored videotaped histories from standardized patients taken by students and experienced clinicians from each of these countries. The tapes were scored twice, 3 to 6 weeks apart. Reliability was assessed using intraclass correlation coefficients (ICCs), and repeated-measures analysis of variance models assessed the ability of the tool to discern between novice and experienced skill levels. The ECHOWS tool showed excellent intrarater reliability (ICC [3,1]=.74-.89) and good interrater reliability (ICC [2,1]=.55) as a whole. The summary of performance (S) section showed poor interrater reliability (ICC [2,1]=.27). There was no statistical difference in performance on the tool between novice and experienced clinicians. A possible ceiling effect may occur when standardized patients are not coached to provide complex and obtuse responses to interviewer questions. Variation in familiarity with the ECHOWS tool and in use of the online training may have influenced scoring of the S section. The ECHOWS tool demonstrates excellent intrarater reliability and moderate interrater reliability. Sufficient training with the tool prior to student assessment is recommended. The S section must evolve in order to provide a more discerning measure of interviewing skills. © 2016 American Physical Therapy Association.

  4. Death and rebirth of neural activity in sparse inhibitory networks

    NASA Astrophysics Data System (ADS)

    Angulo-Garcia, David; Luccioli, Stefano; Olmi, Simona; Torcini, Alessandro

    2017-05-01

    Inhibition is a key aspect of neural dynamics playing a fundamental role for the emergence of neural rhythms and the implementation of various information coding strategies. Inhibitory populations are present in several brain structures, and the comprehension of their dynamics is strategical for the understanding of neural processing. In this paper, we clarify the mechanisms underlying a general phenomenon present in pulse-coupled heterogeneous inhibitory networks: inhibition can induce not only suppression of neural activity, as expected, but can also promote neural re-activation. In particular, for globally coupled systems, the number of firing neurons monotonically reduces upon increasing the strength of inhibition (neuronal death). However, the random pruning of connections is able to reverse the action of inhibition, i.e. in a random sparse network a sufficiently strong synaptic strength can surprisingly promote, rather than depress, the activity of neurons (neuronal rebirth). Thus, the number of firing neurons reaches a minimum value at some intermediate synaptic strength. We show that this minimum signals a transition from a regime dominated by neurons with a higher firing activity to a phase where all neurons are effectively sub-threshold and their irregular firing is driven by current fluctuations. We explain the origin of the transition by deriving a mean field formulation of the problem able to provide the fraction of active neurons as well as the first two moments of their firing statistics. The introduction of a synaptic time scale does not modify the main aspects of the reported phenomenon. However, for sufficiently slow synapses the transition becomes dramatic, and the system passes from a perfectly regular evolution to irregular bursting dynamics. In this latter regime the model provides predictions consistent with experimental findings for a specific class of neurons, namely the medium spiny neurons in the striatum.

  5. Educational quality and the crisis of educational research

    NASA Astrophysics Data System (ADS)

    Heyneman, Stephen

    1993-11-01

    This paper was designed not as a research product but as a speech to comparative education colleagues. It argues that there is a crisis of educational quality in many parts of the world, and that there is a parallel crisis in the quality of educational research and statistics. Compared to other major public responsibilities in health, agriculture, population and family planning, educational statistics are poor and often getting worse. Our international and national statistical institutions are impoverished, and we as a profession have been part of the problem. We have been so busy arguing over differing research paradigms that we have not paid sufficient attention to our common professional responsibilities and common professional goals. The paper suggests that we, as professionals interested in comparative education issues, begin to act together more on these common and important issues.

  6. Discovering human germ cell mutagens with whole genome sequencing: Insights from power calculations reveal the importance of controlling for between-family variability.

    PubMed

    Webster, R J; Williams, A; Marchetti, F; Yauk, C L

    2018-07-01

    Mutations in germ cells pose potential genetic risks to offspring. However, de novo mutations are rare events that are spread across the genome and are difficult to detect. Thus, studies in this area have generally been under-powered, and no human germ cell mutagen has been identified. Whole Genome Sequencing (WGS) of human pedigrees has been proposed as an approach to overcome these technical and statistical challenges. WGS enables analysis of a much wider breadth of the genome than traditional approaches. Here, we performed power analyses to determine the feasibility of using WGS in human families to identify germ cell mutagens. Different statistical models were compared in the power analyses (ANOVA and multiple regression for one-child families, and mixed effect model sampling between two to four siblings per family). Assumptions were made based on parameters from the existing literature, such as the mutation-by-paternal age effect. We explored two scenarios: a constant effect due to an exposure that occurred in the past, and an accumulating effect where the exposure is continuing. Our analysis revealed the importance of modeling inter-family variability of the mutation-by-paternal age effect. Statistical power was improved by models accounting for the family-to-family variability. Our power analyses suggest that sufficient statistical power can be attained with 4-28 four-sibling families per treatment group, when the increase in mutations ranges from 40 to 10% respectively. Modeling family variability using mixed effect models provided a reduction in sample size compared to a multiple regression approach. Much larger sample sizes were required to detect an interaction effect between environmental exposures and paternal age. These findings inform study design and statistical modeling approaches to improve power and reduce sequencing costs for future studies in this area. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
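
    The sketch below gives a flavor of such a power calculation, under entirely illustrative assumptions: Poisson-distributed de novo mutation counts, a family-specific paternal-age slope to model between-family variability, four siblings per family, and a fixed percentage increase in the exposed group. For simplicity the sibling counts are collapsed to family means and compared with a one-sided t-test rather than fitting a full mixed-effects model, so the numbers should not be read as reproducing the paper's 4-28 family estimates. The `alternative` argument of `ttest_ind` requires a reasonably recent SciPy.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def simulate_power(n_families=20, n_sibs=4, effect=0.40, n_rep=500, alpha=0.05):
        """Fraction of simulated studies in which exposed families show a
        significantly higher family-mean de novo mutation count (one-sided t-test)."""
        base_rate = 60.0                  # mean de novo mutations per child (illustrative)
        slope_mu, slope_sd = 1.5, 0.5     # mutations per year of paternal age, varies by family
        hits = 0
        for _ in range(n_rep):
            means = []
            for group, mult in (("control", 1.0), ("exposed", 1.0 + effect)):
                fam_means = []
                for _ in range(n_families):
                    slope = rng.normal(slope_mu, slope_sd)      # family-specific age effect
                    ages = rng.uniform(25, 45, n_sibs)          # paternal age at conception
                    lam = mult * (base_rate + slope * (ages - 30))
                    counts = rng.poisson(np.clip(lam, 1, None))
                    fam_means.append(counts.mean())             # collapse siblings per family
                means.append(np.array(fam_means))
            t, p = stats.ttest_ind(means[1], means[0], alternative="greater")
            hits += p < alpha
        return hits / n_rep

    print("power (40% increase, 20 families/group):", simulate_power())
    ```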

  7. Does size matter? Statistical limits of paleomagnetic field reconstruction from small rock specimens

    NASA Astrophysics Data System (ADS)

    Berndt, Thomas; Muxworthy, Adrian R.; Fabian, Karl

    2016-01-01

    As samples of ever decreasing sizes are being studied paleomagnetically, care has to be taken that the underlying assumptions of statistical thermodynamics (Maxwell-Boltzmann statistics) are being met. Here we determine how many grains and how large a magnetic moment a sample needs to have to be able to accurately record an ambient field. It is found that for samples with a thermoremanent magnetic moment larger than 10⁻¹¹ Am² the assumption of a sufficiently large number of grains is usually satisfied. Standard 25 mm diameter paleomagnetic samples usually contain enough magnetic grains that statistical errors are negligible, but "single silicate crystal" works on, for example, zircon, plagioclase, and olivine crystals are approaching the limits of what is physically possible, leading to statistical errors in both the angular deviation and paleointensity that are comparable to other sources of error. The reliability of nanopaleomagnetic imaging techniques capable of resolving individual grains (used, for example, to study the cloudy zone in meteorites), however, is questionable due to the limited area of the material covered.

  8. Lagrangian statistics of turbulent dispersion from 8192³ direct numerical simulation of isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Buaria, Dhawal; Yeung, P. K.; Sawford, B. L.

    2016-11-01

    An efficient massively parallel algorithm has allowed us to obtain the trajectories of 300 million fluid particles in an 8192³ simulation of isotropic turbulence at Taylor-scale Reynolds number 1300. Conditional single-particle statistics are used to investigate the effect of extreme events in dissipation and enstrophy on turbulent dispersion. The statistics of pairs and tetrads, both forward and backward in time, are obtained via post-processing of single-particle trajectories. For tetrads, since memory of shape is known to be short, we focus, for convenience, on samples which are initially regular, with all sides of comparable length. The statistics of tetrad size show similar behavior as the two-particle relative dispersion, i.e., stronger backward dispersion at intermediate times with larger backward Richardson constant. In contrast, the statistics of tetrad shape show more robust inertial range scaling, in both forward and backward frames. However, the distortion of shape is stronger for backward dispersion. Our results suggest that the Reynolds number reached in this work is sufficient to settle some long-standing questions concerning Lagrangian scale similarity. Supported by NSF Grants CBET-1235906 and ACI-1036170.

  9. Conformity and statistical tolerancing

    NASA Astrophysics Data System (ADS)

    Leblond, Laurent; Pillet, Maurice

    2018-02-01

    Statistical tolerancing was first proposed by Shewhart (Economic Control of Quality of Manufactured Product, 1931, reprinted 1980 by ASQC). In spite of this long history, its use remains moderate. One of the probable reasons for this low utilization is undoubtedly the difficulty for designers to anticipate the risks of this approach. Arithmetic (worst-case) tolerancing allows a simple interpretation: conformity is defined by the presence of the characteristic in an interval. Statistical tolerancing is more complex in its definition; an interval is not sufficient to define conformance. To justify the statistical tolerancing formula used by designers, a tolerance interval should be interpreted as the interval where most of the parts produced should probably be located. This tolerance is justified by considering a conformity criterion of the parts guaranteeing low offsets on the latter characteristics. Unlike traditional arithmetic tolerancing, statistical tolerancing requires a sustained exchange of information between design and manufacture to be used safely. This paper proposes a formal definition of conformity, which we apply successively to quadratic and arithmetic tolerancing. We introduce a concept of concavity, which helps us to demonstrate the link between the tolerancing approach and conformity. We use this concept to demonstrate the various acceptable propositions of statistical tolerancing (in the decentring/dispersion space).
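
    A minimal numerical sketch of the contrast discussed above, with invented component tolerances: the arithmetic (worst-case) stack adds tolerances, the statistical (quadratic) stack adds them in root-sum-square, and a quick Monte Carlo run shows how rarely a centred process with sigma = tolerance/3 exceeds the quadratic limit. This is a generic illustration, not the paper's formalism.

    ```python
    import numpy as np

    tol = np.array([0.10, 0.05, 0.08, 0.12])         # individual tolerances (± mm), illustrative

    t_arithmetic = tol.sum()                          # worst-case stack
    t_statistical = np.sqrt(np.sum(tol**2))           # quadratic (RSS) stack
    print(f"worst case: ±{t_arithmetic:.3f} mm, statistical: ±{t_statistical:.3f} mm")

    # Monte Carlo check: if each characteristic is centred with sigma = tol/3,
    # how often does the assembly exceed the statistical stack limit?
    rng = np.random.default_rng(0)
    parts = rng.normal(0.0, tol / 3.0, size=(1_000_000, tol.size))
    assembly = parts.sum(axis=1)
    print("fraction outside RSS limit:", np.mean(np.abs(assembly) > t_statistical))
    ```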

  10. The large sample size fallacy.

    PubMed

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
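
    The point is easy to demonstrate numerically. In the hedged sketch below, two simulated groups differ by a practically negligible amount (a true Cohen's d of roughly 0.015), yet with half a million observations per group the t-test almost always returns an extremely small p-value; the numbers are arbitrary and chosen only to make the contrast obvious.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 500_000                                    # very large sample per group
    a = rng.normal(50.00, 10.0, n)
    b = rng.normal(50.15, 10.0, n)                 # trivially higher mean (true d ≈ 0.015)

    t, p = stats.ttest_ind(a, b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    cohens_d = (b.mean() - a.mean()) / pooled_sd

    print(f"p-value: {p:.2e}  (extremely 'significant' in the statistical sense)")
    print(f"Cohen's d: {cohens_d:.4f}  (practically negligible effect size)")
    ```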

  11. Statistical Issues for Uncontrolled Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2008-01-01

    A number of statistical tools have been developed over the years for assessing the risk posed by reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris fragments might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper looks at a number of these theoretical assumptions, examining the mathematical basis for the hazard calculations and outlining the conditions under which the simplifying assumptions hold. This paper also outlines some new tools for assessing ground hazard risk in useful ways. In addition, this study is able to make use of a database of known uncontrolled reentry locations measured by the United States Department of Defense. By using data from objects that were in orbit more than 30 days before reentry, sufficient time is allowed for the orbital parameters to become randomized in the way the models assume. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and change the ground footprints. The measured latitude and longitude distributions of these objects provide data that can be directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.
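
    One simple version of such a comparison can be sketched as follows, assuming (as the simple Kepler picture does) that reentry is equally likely at any point along a circular inclined orbit with a randomized node: the latitude density is then proportional to cos(lat)/sqrt(sin²i − sin²lat), and a histogram of sampled reentry latitudes can be checked against it. The inclination, sample size and bin choices below are arbitrary, and a real test would use the DoD reentry database rather than synthetic samples.

    ```python
    import numpy as np

    inc = np.radians(65.0)                        # orbit inclination (example value)
    rng = np.random.default_rng(0)

    # Synthetic "reentry latitudes": random position on a circular inclined orbit
    u = rng.uniform(0, 2*np.pi, 200_000)          # argument of latitude, randomized
    lat = np.arcsin(np.sin(inc) * np.sin(u))

    # Analytic latitude density for a simple Kepler orbit (valid for |lat| < i)
    grid = np.linspace(-inc + 1e-4, inc - 1e-4, 400)
    pdf = np.cos(grid) / (np.pi * np.sqrt(np.sin(inc)**2 - np.sin(grid)**2))

    hist, edges = np.histogram(lat, bins=40, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    analytic = np.interp(centers, grid, pdf)
    mask = np.abs(centers) < 0.9 * inc            # avoid the integrable edge singularities
    print("mean |empirical - analytic| density (interior bins):",
          np.mean(np.abs(hist[mask] - analytic[mask])))
    ```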

  12. Statistical analysis of NaOH pretreatment effects on sweet sorghum bagasse characteristics

    NASA Astrophysics Data System (ADS)

    Putri, Ary Mauliva Hada; Wahyuni, Eka Tri; Sudiyani, Yanni

    2017-01-01

    We statistically analyze the characteristics of sweet sorghum bagasse before and after NaOH pretreatment. These characteristics include the percentages of the lignocellulosic components and the degree of crystallinity. We use the chi-square method to obtain the values of the fitted parameters, and then apply Student's t-test to check whether they are significantly different from zero at the 99.73% confidence level (C.L.). We find, in the cases of hemicellulose and lignin, that their percentages decrease significantly after pretreatment. Crystallinity, on the other hand, does not show similar behavior, as the data indicate that all fitted parameters in this case are consistent with zero. Our statistical result is then cross-examined against the observations from X-ray diffraction (XRD) and Fourier Transform Infrared (FTIR) spectroscopy, showing good agreement. This result may indicate that the 10% NaOH pretreatment might not be sufficient to change the crystallinity index of the sweet sorghum bagasse.
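
    A generic sketch of the procedure described, with made-up numbers: fit a straight line to measured percentages by chi-square minimization (weighted least squares via curve_fit) and test whether the fitted slope differs from zero using a t-statistic. The concentrations, lignin fractions and uncertainties below are invented for illustration and are not the study's measurements.

    ```python
    import numpy as np
    from scipy import stats, optimize

    # Hypothetical measurements: lignin percentage at several NaOH concentrations,
    # each with a measurement uncertainty sigma
    conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])         # % NaOH
    lignin = np.array([21.0, 19.5, 18.1, 16.8, 15.2, 14.1])  # % of dry mass
    sigma = np.full_like(lignin, 0.6)

    def model(x, a, b):
        return a + b * x

    # curve_fit with sigma minimizes the chi-square of the weighted residuals
    popt, pcov = optimize.curve_fit(model, conc, lignin, sigma=sigma, absolute_sigma=True)
    perr = np.sqrt(np.diag(pcov))

    # t-test: is the slope b significantly different from zero?
    dof = len(conc) - len(popt)
    t_stat = popt[1] / perr[1]
    p_val = 2 * stats.t.sf(abs(t_stat), dof)
    print(f"slope = {popt[1]:.3f} +/- {perr[1]:.3f}, t = {t_stat:.1f}, p = {p_val:.2e}")
    ```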

  13. Noise-gating to Clean Astrophysical Image Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeForest, C. E.

    I present a family of algorithms to reduce noise in astrophysical images and image sequences, preserving more information from the original data than is retained by conventional techniques. The family uses locally adaptive filters (“noise gates”) in the Fourier domain to separate coherent image structure from background noise based on the statistics of local neighborhoods in the image. Processing of solar data limited by simple shot noise or by additive noise reveals image structure not easily visible in the originals, preserves photometry of observable features, and reduces shot noise by a factor of 10 or more with little to no apparent loss of resolution. This reveals faint features that were either not directly discernible or not sufficiently strongly detected for quantitative analysis. The method works best on image sequences containing related subjects, for example movies of solar evolution, but is also applicable to single images provided that there are enough pixels. The adaptive filter uses the statistical properties of noise and of local neighborhoods in the data to discriminate between coherent features and incoherent noise without reference to the specific shape or evolution of those features. The technique can potentially be modified in a straightforward way to exploit additional a priori knowledge about the functional form of the noise.
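
    A greatly simplified sketch of the noise-gating idea: transform local image blocks to the Fourier domain, estimate a per-block noise floor from the median Fourier amplitude, zero out components below a multiple of that floor, and transform back. The real algorithm uses overlapping, apodized neighborhoods and proper noise models (shot or additive), so the block-wise version below, tested on a synthetic Poisson-noise image, is only schematic.

    ```python
    import numpy as np

    def noise_gate(image, block=32, gate_factor=3.0):
        """Very simplified block-wise Fourier noise gate (no overlap/apodization)."""
        out = np.zeros_like(image, dtype=float)
        ny, nx = image.shape
        for y0 in range(0, ny - block + 1, block):
            for x0 in range(0, nx - block + 1, block):
                tile = image[y0:y0+block, x0:x0+block].astype(float)
                F = np.fft.fft2(tile)
                amp = np.abs(F)
                floor = np.median(amp)                  # crude local noise-floor estimate
                mask = amp > gate_factor * floor        # keep only "coherent" components
                mask[0, 0] = True                       # always keep the mean level
                out[y0:y0+block, x0:x0+block] = np.real(np.fft.ifft2(F * mask))
        return out

    # Synthetic test: smooth structure plus shot-like (Poisson) noise
    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[0:256, 0:256]
    truth = 100 + 40*np.sin(xx/15.0) * np.cos(yy/23.0)
    noisy = rng.poisson(truth).astype(float)
    cleaned = noise_gate(noisy)
    print("RMS error noisy:", np.sqrt(np.mean((noisy - truth)**2)),
          "cleaned:", np.sqrt(np.mean((cleaned - truth)**2)))
    ```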

  14. High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices.

    PubMed

    Harrar, Solomon W; Kong, Xiaoli

    2015-03-01

    In this paper, test statistics for repeated measures designs are introduced for the case where the dimension is large. By large dimension is meant that the number of repeated measures and the total sample size grow together, though either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases, in the balanced as well as unbalanced cases. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem, with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations have shown that the new methods have power comparable with that of a popular method known to work well in low-dimensional situations, but the new methods show an enormous advantage when the dimension is large. Data from an electroencephalograph (EEG) experiment are analyzed to illustrate the application of the results.

  15. Electronic medical record system at an opioid agonist treatment programme: study design, pre-implementation results and post-implementation trends.

    PubMed

    Kritz, Steven; Brown, Lawrence S; Chu, Melissa; John-Hull, Carlota; Madray, Charles; Zavala, Roberto; Louie, Ben

    2012-08-01

    Electronic medical record (EMR) systems are commonly included in health care reform discussions. However, their embrace by the health care community has been slow. At Addiction Research and Treatment Corporation, an outpatient opioid agonist treatment programme that also provides primary medical care, HIV medical care and case management, substance abuse counselling and vocational services, we studied the implementation of an EMR in the domains of quality, productivity, satisfaction, risk management and financial performance utilizing a prospective pre- and post-implementation study design. This report details the research approach, pre-implementation findings for all five domains, analysis of the pre-implementation findings and some preliminary post-implementation results in the domains of quality and risk management. For quality, there was a highly statistically significant improvement in timely performance of annual medical assessments (P < 0.001) and annual multidiscipline assessments (P < 0.0001). For risk management, the number of events was not sufficient to perform valid statistical analysis. The preliminary findings in the domain of quality are very promising. Should the findings in the other domains prove to be positive, then the impetus to implement EMR in similar health care facilities will be advanced. © 2011 Blackwell Publishing Ltd.

  16. High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices

    PubMed Central

    Harrar, Solomon W.; Kong, Xiaoli

    2015-01-01

    In this paper, test statistics for repeated measures designs are introduced when the dimension is large. By large dimension is meant that the number of repeated measures and the total sample size grow together, but either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases, in the balanced as well as unbalanced cases. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem, with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations have shown that the new methods have comparable power with a popular method known to work well in low-dimensional situations, but the new methods show an enormous advantage when the dimension is large. Data from an electroencephalograph (EEG) experiment are analyzed to illustrate the application of the results. PMID:26778861

  17. Metrology Optical Power Budgeting in SIM Using Statistical Analysis Techniques

    NASA Technical Reports Server (NTRS)

    Kuan, Gary M

    2008-01-01

    The Space Interferometry Mission (SIM) is a space-based stellar interferometry instrument, consisting of up to three interferometers, which will be capable of micro-arcsecond resolution. Alignment knowledge of the three interferometer baselines requires a three-dimensional, 14-leg truss, with each leg monitored by an external metrology gauge. In addition, each of the three interferometers requires an internal metrology gauge to monitor the optical path length differences between the two sides. Both external and internal metrology gauges are interferometry based, operating at a wavelength of 1319 nanometers. Each gauge has fiber inputs delivering measurement and local oscillator (LO) power, split into probe-LO and reference-LO beam pairs. These beams experience power loss due to a variety of mechanisms including, but not restricted to, design efficiency, material attenuation, element misalignment, diffraction, and coupling efficiency. Since the attenuation from these sources may worsen over time, an accounting of the range of expected attenuation is needed so that an optical power margin can be bookkept. A method of statistical optical power analysis and budgeting, based on a technique developed for deep-space RF telecommunications, is described in this paper; it provides a numerical confidence level for having sufficient optical power relative to mission metrology performance requirements.
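
    A hedged Monte Carlo sketch of the budgeting idea described above: each loss mechanism is treated as a random attenuation in dB, the losses are summed over many trials, and the fraction of trials that still meet a required delivered power gives a numerical confidence level. All loss terms, distributions and the requirement are illustrative assumptions, not SIM values.

      import numpy as np

      rng = np.random.default_rng(0)
      n_trials = 100_000

      source_dbm = 0.0            # assumed launched power, dBm (placeholder)
      required_dbm = -20.0        # assumed minimum power needed by the metrology gauge

      # Each loss mechanism modeled as a dB attenuation with its own spread (all values illustrative).
      losses_db = (
          rng.normal(6.0, 0.5, n_trials)      # design / splitting efficiency
          + rng.normal(2.0, 0.3, n_trials)    # material attenuation
          + rng.uniform(0.0, 3.0, n_trials)   # element misalignment (may degrade over time)
          + rng.normal(1.5, 0.4, n_trials)    # diffraction
          + rng.normal(3.0, 1.0, n_trials)    # fiber coupling efficiency
      )

      delivered_dbm = source_dbm - losses_db
      confidence = np.mean(delivered_dbm >= required_dbm)
      margin_db = np.percentile(delivered_dbm, 1) - required_dbm   # margin at the 1st-percentile worst case
      print(f"P(power requirement met) = {confidence:.3f}, worst-case (1%) margin = {margin_db:.1f} dB")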

  18. Modality-Driven Classification and Visualization of Ensemble Variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.

  19. Statistical physics and physiology: monofractal and multifractal approaches

    NASA Technical Reports Server (NTRS)

    Stanley, H. E.; Amaral, L. A.; Goldberger, A. L.; Havlin, S.; Peng, C. K.

    1999-01-01

    Even under healthy, basal conditions, physiologic systems show erratic fluctuations resembling those found in dynamical systems driven away from a single equilibrium state. Do such "nonequilibrium" fluctuations simply reflect the fact that physiologic systems are being constantly perturbed by external and intrinsic noise? Or do these fluctuations actually contain useful, "hidden" information about the underlying nonequilibrium control mechanisms? We report some recent attempts to understand the dynamics of complex physiologic fluctuations by adapting and extending concepts and methods developed very recently in statistical physics. Specifically, we focus on interbeat interval variability as an important quantity to help elucidate possibly non-homeostatic physiologic variability because (i) the heart rate is under direct neuroautonomic control, (ii) interbeat interval variability is readily measured by noninvasive means, and (iii) analysis of these heart rate dynamics may provide important practical diagnostic and prognostic information not obtainable with current approaches. The analytic tools we discuss may be used on a wider range of physiologic signals. We first review recent progress using two analysis methods--detrended fluctuation analysis and wavelets--sufficient for quantifying monofractal structures. We then describe recent work that quantifies multifractal features of interbeat interval series, and the discovery that the multifractal structure of healthy subjects differs from that of diseased subjects.
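
    As a concrete companion to the methods mentioned above, the following Python sketch implements a basic detrended fluctuation analysis (DFA) and estimates the scaling exponent; the "interbeat interval" series is synthetic white noise used only to demonstrate the mechanics.

      import numpy as np

      def dfa(x, scales):
          """Return the fluctuation function F(n) for each window size n in `scales`."""
          y = np.cumsum(x - np.mean(x))              # integrated (profile) series
          F = []
          for n in scales:
              n_windows = len(y) // n
              rms = []
              for i in range(n_windows):
                  seg = y[i * n:(i + 1) * n]
                  t = np.arange(n)
                  coeffs = np.polyfit(t, seg, 1)     # local linear trend
                  rms.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
              F.append(np.sqrt(np.mean(rms)))
          return np.array(F)

      rng = np.random.default_rng(1)
      rr = rng.normal(0.8, 0.05, 4096)               # synthetic "interbeat intervals" (seconds)
      scales = np.array([8, 16, 32, 64, 128, 256])
      F = dfa(rr, scales)
      alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # scaling exponent
      print(f"DFA scaling exponent alpha ~ {alpha:.2f} (uncorrelated noise gives ~0.5)")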

  20. Ever Enrolled Medicare Population Estimates from the MCBS Access to Care Files

    PubMed Central

    Petroski, Jason; Ferraro, David; Chu, Adam

    2014-01-01

    Objective The Medicare Current Beneficiary Survey’s (MCBS) Access to Care (ATC) file is designed to provide timely access to information on the Medicare population, yet because of the survey’s complex sampling design and expedited processing it is difficult to use the file to make both “always-enrolled” and “ever-enrolled” estimates on the Medicare population. In this study, we describe the ATC file and sample design, and we evaluate and review various alternatives for producing “ever-enrolled” estimates. Methods We created “ever enrolled” estimates for key variables in the MCBS using three separate approaches. We tested differences between the alternative approaches for statistical significance and show the relative magnitude of difference between approaches. Results Even when estimates derived from the different approaches were statistically different, the magnitude of the difference was often sufficiently small so as to result in little practical difference among the alternate approaches. However, when considering more than just the estimation method, there are advantages to using certain approaches over others. Conclusion There are several plausible approaches to achieving “ever-enrolled” estimates in the MCBS ATC file; however, the most straightforward approach appears to be implementation and usage of a new set of “ever-enrolled” weights for this file. PMID:24991484

  1. Noise-gating to Clean Astrophysical Image Data

    NASA Astrophysics Data System (ADS)

    DeForest, C. E.

    2017-04-01

    I present a family of algorithms to reduce noise in astrophysical images and image sequences, preserving more information from the original data than is retained by conventional techniques. The family uses locally adaptive filters (“noise gates”) in the Fourier domain to separate coherent image structure from background noise based on the statistics of local neighborhoods in the image. Processing of solar data limited by simple shot noise or by additive noise reveals image structure not easily visible in the originals, preserves photometry of observable features, and reduces shot noise by a factor of 10 or more with little to no apparent loss of resolution. This reveals faint features that were either not directly discernible or not sufficiently strongly detected for quantitative analysis. The method works best on image sequences containing related subjects, for example movies of solar evolution, but is also applicable to single images provided that there are enough pixels. The adaptive filter uses the statistical properties of noise and of local neighborhoods in the data to discriminate between coherent features and incoherent noise without reference to the specific shape or evolution of those features. The technique can potentially be modified in a straightforward way to exploit additional a priori knowledge about the functional form of the noise.

  2. Integrating Formal Methods and Testing 2002

    NASA Technical Reports Server (NTRS)

    Cukic, Bojan

    2002-01-01

    Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10^-4 or higher). The coming years will address methodologies to realistically estimate the impact of various V&V techniques on system reliability and will include the impact of operational risk in reliability assessment. The objectives are to: A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; B) quantify the impact of these methods on software reliability; C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a certain confidence level; and D) quantify and justify the reliability estimate for systems developed using various methods.

  3. Late paleozoic fusulinoidean gigantism driven by atmospheric hyperoxia.

    PubMed

    Payne, Jonathan L; Groves, John R; Jost, Adam B; Nguyen, Thienan; Moffitt, Sarah E; Hill, Tessa M; Skotheim, Jan M

    2012-09-01

    Atmospheric hyperoxia, with pO(2) in excess of 30%, has long been hypothesized to account for late Paleozoic (360-250 million years ago) gigantism in numerous higher taxa. However, this hypothesis has not been evaluated statistically because comprehensive size data have not been compiled previously at sufficient temporal resolution to permit quantitative analysis. In this study, we test the hyperoxia-gigantism hypothesis by examining the fossil record of fusulinoidean foraminifers, a dramatic example of protistan gigantism with some individuals exceeding 10 cm in length and exceeding their relatives by six orders of magnitude in biovolume. We assembled and examined comprehensive regional and global, species-level datasets containing 270 and 1823 species, respectively. A statistical model of size evolution forced by atmospheric pO(2) is conclusively favored over alternative models based on random walks or a constant tendency toward size increase. Moreover, the ratios of volume to surface area in the largest fusulinoideans are consistent in magnitude and trend with a mathematical model based on oxygen transport limitation. We further validate the hyperoxia-gigantism model through an examination of modern foraminiferal species living along a measured gradient in oxygen concentration. These findings provide the first quantitative confirmation of a direct connection between Paleozoic gigantism and atmospheric hyperoxia. © 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution.

  4. Evaluation of the attractive force of different types of new-generation magnetic attachment systems.

    PubMed

    Akin, Hakan; Coskun, M Emre; Akin, E Gulsah; Ozdemir, A Kemal

    2011-03-01

    Rare earth magnets have been used in prosthodontics, but their tendency for corrosion in the oral cavity and insufficient attractive forces limit long-term clinical application. The purpose of this study was to evaluate the attractive force of different types of new-generation magnetic attachment systems. The attractive force of the neodymium-iron-boron (Nd-Fe-B) and samarium-cobalt (Sm-Co) magnetic attachment systems, including closed-field (Hilop and Hicorex) and open-field (Dyna and Steco) systems, was measured in a universal testing machine (n=5). The data were statistically evaluated with 1-way ANOVA and post hoc Tukey-Kramer multiple comparison test (α=.05). The closed-field systems exhibited greater (P<.001) attractive force than the open-field systems. Moreover, there was a statistically significant difference in attractive force between Nd-Fe-B and Sm-Co magnets (P<.001). The strongest attractive force was found with the Hilop system (9.2 N), and the lowest force was found with the Steco system (2.3 N). The new generation of Nd-Fe-B closed-field magnets, along with improved technology, provides sufficient denture retention for clinical application. Copyright © 2011 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  5. The bag-of-frames approach to audio pattern recognition: a sufficient model for urban soundscapes but not for polyphonic music.

    PubMed

    Aucouturier, Jean-Julien; Defreville, Boris; Pachet, François

    2007-08-01

    The "bag-of-frames" approach (BOF) to audio pattern recognition represents signals as the long-term statistical distribution of their local spectral features. This approach has proved nearly optimal for simulating the auditory perception of natural and human environments (or soundscapes), and is also the predominant paradigm for extracting high-level descriptions from music signals. However, recent studies show that, contrary to its application to soundscape signals, BOF only provides limited performance when applied to polyphonic music signals. This paper proposes to explicitly examine the difference between urban soundscapes and polyphonic music with respect to their modeling with the BOF approach. First, the application of the same measure of acoustic similarity on both soundscape and music data sets confirms that the BOF approach can model soundscapes to near-perfect precision, and exhibits none of the limitations observed in the music data set. Second, the modification of this measure by two custom homogeneity transforms reveals critical differences in the temporal and statistical structure of the typical frame distribution of each type of signal. Such differences may explain the uneven performance of BOF algorithms on soundscapes and music signals, and suggest that their human perception relies on cognitive processes of a different nature.

  6. Prevalence and determinants of sufficient fruit and vegetable consumption among primary school children in Nakhon Pathom, Thailand

    PubMed Central

    Piaseu, Noppawan

    2017-01-01

    BACKGROUND/OBJECTIVES Low consumption of fruit and vegetables is frequently viewed as an important contributor to obesity risk. With increasing childhood obesity and relatively low fruit and vegetable consumption among Thai children, there is a need to identify the determinants of the intake to promote fruit and vegetable consumption effectively. SUBJECTS/METHODS This cross-sectional study was conducted at two conveniently selected primary schools in Nakhon Pathom. A total of 609 students (grades 4-6) completed questionnaires on personal and environmental factors. Adequate fruit and vegetable intakes were defined as a minimum of three servings of fruit or vegetables daily, and adequate total intake as at least six servings of fruit and vegetables daily. Data were analyzed using descriptive statistics, the chi-square test, and multiple logistic regression. RESULTS The proportion of children with sufficient fruit and/or vegetable intakes was low. Covariates among the children's personal and environmental factors showed significant associations with sufficient intakes of fruit and/or vegetables (P < 0.05). Logistic regression analyses showed that the following factors were positively related to sufficient vegetable intake: lower grade, a positive attitude toward vegetables, and fruit availability at home. Greater maternal education, a positive attitude toward vegetables, and fruit availability at home were significantly associated with sufficient consumption of fruit and with total fruit and vegetable intake. CONCLUSIONS The present study showed that personal factors, such as attitude toward vegetables, and socio-environmental factors, such as greater availability of fruit, were significantly associated with sufficient fruit and vegetable consumption. The importance of environmental and personal factors to successful nutrition highlights the importance of involving parents and schools. PMID:28386386

  7. Prevalence and determinants of sufficient fruit and vegetable consumption among primary school children in Nakhon Pathom, Thailand.

    PubMed

    Hong, Seo Ah; Piaseu, Noppawan

    2017-04-01

    Low consumption of fruit and vegetables is frequently viewed as an important contributor to obesity risk. With increasing childhood obesity and relatively low fruit and vegetable consumption among Thai children, there is a need to identify the determinants of the intake to promote fruit and vegetable consumption effectively. This cross-sectional study was conducted at two conveniently selected primary schools in Nakhon Pathom. A total of 609 students (grades 4-6) completed questionnaires on personal and environmental factors. Adequate fruit and vegetable intakes were defined as a minimum of three servings of fruit or vegetables daily, and adequate total intake as at least six servings of fruit and vegetables daily. Data were analyzed using descriptive statistics, the chi-square test, and multiple logistic regression. The proportion of children with sufficient fruit and/or vegetable intakes was low. Covariates among the children's personal and environmental factors showed significant associations with sufficient intakes of fruit and/or vegetables (P < 0.05). Logistic regression analyses showed that the following factors were positively related to sufficient vegetable intake: lower grade, a positive attitude toward vegetables, and fruit availability at home. Greater maternal education, a positive attitude toward vegetables, and fruit availability at home were significantly associated with sufficient consumption of fruit and with total fruit and vegetable intake. The present study showed that personal factors, such as attitude toward vegetables, and socio-environmental factors, such as greater availability of fruit, were significantly associated with sufficient fruit and vegetable consumption. The importance of environmental and personal factors to successful nutrition highlights the importance of involving parents and schools.

  8. Trial Sequential Analysis in systematic reviews with meta-analysis.

    PubMed

    Wetterslev, Jørn; Jakobsen, Janus Christian; Gluud, Christian

    2017-03-06

    Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentist approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D^2) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that Trial Sequential Analysis provides better control of type I errors and of type II errors than the traditional naïve meta-analysis. Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
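
    The following small calculation (Python) illustrates one common way to obtain a diversity-adjusted required information size: compute the sample size a single adequately powered trial would need, then inflate it by 1/(1 - D^2). The event rate, relative risk reduction and D^2 values are placeholders, and the formula is the standard two-proportion approximation rather than the exact output of the TSA software.

      from scipy.stats import norm

      alpha, beta = 0.05, 0.10        # two-sided type I error, type II error (90% power)
      p_control = 0.30                # assumed control-group event proportion
      rrr = 0.20                      # assumed relative risk reduction worth detecting
      diversity = 0.25                # assumed heterogeneity D^2 of the meta-analysis

      p_exp = p_control * (1 - rrr)
      p_bar = (p_control + p_exp) / 2
      z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(1 - beta)

      # Total participants for a single two-arm trial with equal allocation.
      n_fixed = 4 * (z_a + z_b) ** 2 * p_bar * (1 - p_bar) / (p_control - p_exp) ** 2
      ris = n_fixed / (1 - diversity)   # diversity-adjusted required information size
      print(f"fixed-effect information size ~ {n_fixed:.0f}, diversity-adjusted RIS ~ {ris:.0f} participants")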

  9. Tibial Bowing and Pseudarthrosis in Neurofibromatosis Type 1

    DTIC Science & Technology

    2015-01-01

    controlling for age and sex was used. However, there were no statistically significant differences between NF1 individuals with and without tibial...Dinorah Friedmann-Morvinski (The Salk Institute) presented a different model of glioblastoma in which tumors were induced from fully differentiated...a driver of Schwann cell tumorigenesis. Induction of Wnt signaling was sufficient to induce a transformed phenotype in human Schwann cells, while

  10. The study of natural reproduction on burned forest areas

    Treesearch

    J. A. Larsen

    1928-01-01

    It is not necessary herein to quote statistics on the areas and values of timberland destroyed each year in the United States. The losses are sufficiently large to attract attention and to present problems in forest management as well as in forest research. The situation is here and every forester must meet it, be he manager or investigator. This paper is an attempt to...

  11. Microstructure-Sensitive HCF and VHCF Simulations (Preprint)

    DTIC Science & Technology

    2012-08-01

    microplasticity) on driving formation of cracks, either transgranular along slip bands or intergranular due to progressive slip impingement, as shown in Figure...cycles, such as shafts, bearings, and gears, for example, should focus on extreme value statistics of potential sites for microplastic strain...is the absence of microplasticity within grains sufficient to nucleate cracks or to drive growth of micron-scale embryonic cracks within individual

  12. Maximum likelihood estimation of signal-to-noise ratio and combiner weight

    NASA Technical Reports Server (NTRS)

    Kalson, S.; Dolinar, S. J.

    1986-01-01

    An algorithm for estimating signal to noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
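
    A hedged Python sketch of this kind of estimator: for matched-filter outputs y_i = ±m + n_i of a biphase signal in additive white Gaussian noise, the absolute-value estimator below approximates the joint maximum likelihood solution for the signal amplitude and noise power when the SNR is not too low. It is a simplified stand-in, not necessarily the exact algorithm of the report.

      import numpy as np

      def estimate_snr(y):
          """Estimate signal amplitude, noise variance and SNR from matched-filter
          outputs y_i = ±m + n_i of a biphase (BPSK) signal in white Gaussian noise.
          The absolute-value estimator approximates the joint ML solution at
          moderate-to-high SNR (it becomes biased when the SNR is very low)."""
          m_hat = np.mean(np.abs(y))                 # signal amplitude estimate
          var_hat = np.mean(y ** 2) - m_hat ** 2     # noise power estimate
          return m_hat, var_hat, m_hat ** 2 / var_hat

      # Synthetic check with an assumed true SNR of 4 (about 6 dB).
      rng = np.random.default_rng(2)
      symbols = rng.choice([-1.0, 1.0], size=10_000)
      y = 2.0 * symbols + rng.normal(0.0, 1.0, size=symbols.size)   # m = 2, sigma^2 = 1
      m_hat, var_hat, snr_hat = estimate_snr(y)
      print(f"estimated SNR = {snr_hat:.2f} (true 4.00)")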

  13. Low temperature route to uranium nitride

    DOEpatents

    Burrell, Anthony K.; Sattelberger, Alfred P.; Yeamans, Charles; Hartmann, Thomas; Silva, G. W. Chinthaka; Cerefice, Gary; Czerwinski, Kenneth R.

    2009-09-01

    A method of preparing an actinide nitride fuel for nuclear reactors is provided. The method comprises the steps of a) providing at least one actinide oxide and optionally zirconium oxide; b) mixing the oxide with a source of hydrogen fluoride for a period of time and at a temperature sufficient to convert the oxide to a fluoride salt; c) heating the fluoride salt to remove water; d) heating the fluoride salt in a nitrogen atmosphere for a period of time and at a temperature sufficient to convert the fluorides to nitrides; and e) heating the nitrides under vacuum and/or inert atmosphere for a period of time sufficient to convert the nitrides to mononitrides.

  14. Topographic relationships for design rainfalls over Australia

    NASA Astrophysics Data System (ADS)

    Johnson, F.; Hutchinson, M. F.; The, C.; Beesley, C.; Green, J.

    2016-02-01

    Design rainfall statistics are the primary inputs used to assess flood risk across river catchments. These statistics normally take the form of Intensity-Duration-Frequency (IDF) curves that are derived from extreme value probability distributions fitted to observed daily, and sub-daily, rainfall data. The design rainfall relationships are often required for catchments where there are limited rainfall records, particularly catchments in remote areas with high topographic relief and hence some form of interpolation is required to provide estimates in these areas. This paper assesses the topographic dependence of rainfall extremes by using elevation-dependent thin plate smoothing splines to interpolate the mean annual maximum rainfall, for periods from one to seven days, across Australia. The analyses confirm the important impact of topography in explaining the spatial patterns of these extreme rainfall statistics. Continent-wide residual and cross validation statistics are used to demonstrate the 100-fold impact of elevation in relation to horizontal coordinates in explaining the spatial patterns, consistent with previous rainfall scaling studies and observational evidence. The impact of the complexity of the fitted spline surfaces, as defined by the number of knots, and the impact of applying variance stabilising transformations to the data, were also assessed. It was found that a relatively large number of 3570 knots, suitably chosen from 8619 gauge locations, was required to minimise the summary error statistics. Square root and log data transformations were found to deliver marginally superior continent-wide cross validation statistics, in comparison to applying no data transformation, but detailed assessments of residuals in complex high rainfall regions with high topographic relief showed that no data transformation gave superior performance in these regions. These results are consistent with the understanding that in areas with modest topographic relief, as for most of the Australian continent, extreme rainfall is closely aligned with elevation, but in areas with high topographic relief the impacts of topography on rainfall extremes are more complex. The interpolated extreme rainfall statistics, using no data transformation, have been used by the Australian Bureau of Meteorology to produce new IDF data for the Australian continent. The comprehensive methods presented for the evaluation of gridded design rainfall statistics will be useful for similar studies, in particular the importance of balancing the need for a continentally-optimum solution that maintains sufficient definition at the local scale.
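
    As an illustration of elevation-dependent spline interpolation, the Python sketch below fits thin-plate-spline radial basis functions to synthetic gauge data, with elevation entering as a third coordinate that is exaggerated relative to the horizontal coordinates (echoing the roughly 100-fold impact of elevation noted above). The data, the scaling factor of 100 and the smoothing value are assumptions for illustration; this is not the procedure actually used to produce the Australian IDF grids.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      # Hypothetical gauge data: (x, y) in km, elevation in km, mean annual maximum 1-day rainfall (mm).
      rng = np.random.default_rng(3)
      xy = rng.uniform(0, 500, size=(200, 2))
      elev = rng.uniform(0, 1.5, size=200)
      rain = 40 + 30 * elev + 0.01 * xy[:, 0] + rng.normal(0, 3, size=200)

      # Elevation enters as a third coordinate, scaled up so that vertical position
      # carries far more weight than horizontal position (order-100 exaggeration).
      coords = np.column_stack([xy, 100.0 * elev])
      spline = RBFInterpolator(coords, rain, kernel='thin_plate_spline', smoothing=1.0)

      # Evaluate at a new location (x = 250 km, y = 250 km, elevation 1.2 km).
      query = np.array([[250.0, 250.0, 100.0 * 1.2]])
      print(f"interpolated design-rainfall statistic: {spline(query)[0]:.1f} mm")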

  15. Single-row modified mason-allen versus double-row arthroscopic rotator cuff repair: a biomechanical and surface area comparison.

    PubMed

    Nelson, Cory O; Sileo, Michael J; Grossman, Mark G; Serra-Hsu, Frederick

    2008-08-01

    The purpose of this study was to compare the time-zero biomechanical strength and the surface area of repair between a single-row modified Mason-Allen rotator cuff repair and a double-row arthroscopic repair. Six matched pairs of sheep infraspinatus tendons were repaired by both techniques. Pressure-sensitive film was used to measure the surface area of repair for each configuration. Specimens were biomechanically tested with cyclic loading from 20 N to 30 N for 20 cycles and were loaded to failure at a rate of 1 mm/s. Failure was defined at 5 mm of gap formation. Double-row suture anchor fixation restored a mean surface area of 258.23 +/- 69.7 mm² versus 148.08 +/- 75.5 mm² for single-row fixation, a 74% increase (P = .025). Both repairs had statistically similar time-zero biomechanics. There was no statistical difference in peak-to-peak displacement or elongation during cyclic loading. Single-row fixation showed a higher mean load to failure (110.26 +/- 26.4 N) than double-row fixation (108.93 +/- 21.8 N). This was not statistically significant (P = .932). All specimens failed at the suture-tendon interface. Double-row suture anchor fixation restores a greater percentage of the anatomic footprint when compared with a single-row Mason-Allen technique. The time-zero biomechanical strength was not significantly different between the 2 study groups. This study suggests that the 2 factors are independent of each other. Surface area and biomechanical strength of fixation are 2 independent factors in the outcome of rotator cuff repair. Maximizing both factors may increase the likelihood of complete tendon-bone healing and ultimately improve clinical outcomes. For smaller tears, a single-row modified Mason-Allen suture technique may provide sufficient strength, but for large amenable tears, a double row can provide both strength and increased surface area for healing.

  16. Genome-scale cluster analysis of replicated microarrays using shrinkage correlation coefficient.

    PubMed

    Yao, Jianchao; Chang, Chunqi; Salmi, Mari L; Hung, Yeung Sam; Loraine, Ann; Roux, Stanley J

    2008-06-18

    Currently, clustering with some form of correlation coefficient as the gene similarity metric has become a popular method for profiling genomic data. The Pearson correlation coefficient and the standard deviation (SD)-weighted correlation coefficient are the two most widely-used correlations as the similarity metrics in clustering microarray data. However, these two correlations are not optimal for analyzing replicated microarray data generated by most laboratories. An effective correlation coefficient is needed to provide statistically sufficient analysis of replicated microarray data. In this study, we describe a novel correlation coefficient, shrinkage correlation coefficient (SCC), that fully exploits the similarity between the replicated microarray experimental samples. The methodology considers both the number of replicates and the variance within each experimental group in clustering expression data, and provides a robust statistical estimation of the error of replicated microarray data. The value of SCC is revealed by its comparison with two other correlation coefficients that are currently the most widely-used (Pearson correlation coefficient and SD-weighted correlation coefficient) using statistical measures on both synthetic expression data as well as real gene expression data from Saccharomyces cerevisiae. Two leading clustering methods, hierarchical and k-means clustering were applied for the comparison. The comparison indicated that using SCC achieves better clustering performance. Applying SCC-based hierarchical clustering to the replicated microarray data obtained from germinating spores of the fern Ceratopteris richardii, we discovered two clusters of genes with shared expression patterns during spore germination. Functional analysis suggested that some of the genetic mechanisms that control germination in such diverse plant lineages as mosses and angiosperms are also conserved among ferns. This study shows that SCC is an alternative to the Pearson correlation coefficient and the SD-weighted correlation coefficient, and is particularly useful for clustering replicated microarray data. This computational approach should be generally useful for proteomic data or other high-throughput analysis methodology.

  17. The Influence of Roughness on Gear Surface Fatigue

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy

    2005-01-01

    Gear working surfaces are subjected to repeated rolling and sliding contacts, and often designs require loads sufficient to cause eventual fatigue of the surface. This research provides experimental data and analytical tools to further the understanding of the causal relationship of gear surface roughness to surface fatigue. The research included evaluations and developments of statistical tools for gear fatigue data, experimental evaluation of the surface fatigue lives of superfinished gears with a near-mirror quality, and evaluations of the experiments by analytical methods and surface inspections. Alternative statistical methods were evaluated using Monte Carlo studies, leading to a final recommendation to describe gear fatigue data using a Weibull distribution, maximum likelihood estimates of shape and scale parameters, and a presumed zero-valued location parameter. A new method was developed for comparing two datasets by extending the current methods of likelihood-ratio based statistics. The surface fatigue lives of superfinished gears were evaluated by carefully controlled experiments, and it is shown conclusively that superfinishing of gears can provide for significantly greater lives relative to ground gears. The measured life improvement was approximately a factor of five. To assist with application of this finding to products, the experimental condition was evaluated. The fatigue life results were expressed in terms of specific film thickness and shown to be consistent with bearing data. Elastohydrodynamic and stress analyses were completed to relate the stress condition to fatigue. Smooth-surface models do not adequately explain the improved fatigue lives. Based on analyses using a rough surface model, it is concluded that the improved fatigue lives of superfinished gears are due to a reduced rate of near-surface micropitting fatigue processes, not to any reduced rate of spalling (sub-surface) fatigue processes. To complete the evaluations, surface inspections were performed. The surface topographies of the ground gears changed substantially due to running, but the topographies of the superfinished gears were essentially unchanged with running.
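
    A brief Python sketch of the recommended statistical treatment: fit a two-parameter Weibull distribution (location parameter presumed zero) to fatigue-life data by maximum likelihood and read off a percentile life. The cycle counts below are invented for illustration.

      import numpy as np
      from scipy.stats import weibull_min

      # Hypothetical gear surface-fatigue lives (millions of stress cycles).
      lives = np.array([12.1, 15.4, 18.9, 22.3, 25.0, 28.7, 33.5, 41.2, 47.8, 60.3])

      # Maximum likelihood fit with the location parameter fixed at zero,
      # leaving the Weibull shape (slope) and scale (characteristic life).
      shape, loc, scale = weibull_min.fit(lives, floc=0)
      l10 = weibull_min.ppf(0.10, shape, loc=0, scale=scale)   # 10th-percentile life
      print(f"Weibull shape = {shape:.2f}, characteristic life = {scale:.1f}, L10 life = {l10:.1f}")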

  18. Fabrication of Hyperbranched Block-Statistical Copolymer-Based Prodrug with Dual Sensitivities for Controlled Release.

    PubMed

    Zheng, Luping; Wang, Yunfei; Zhang, Xianshuo; Ma, Liwei; Wang, Baoyan; Ji, Xiangling; Wei, Hua

    2018-01-17

    Dendrimers with hyperbranched structures and multivalent surfaces are regarded as among the most promising candidates for ideal drug delivery systems, but the clinical translation and scale-up production of dendrimers have been hampered significantly by synthetic difficulties. Therefore, there is considerable scope for the development of novel hyperbranched polymers that can not only address the drawbacks of dendrimers but also maintain their advantages. The reversible addition-fragmentation chain transfer self-condensing vinyl polymerization (RAFT-SCVP) technique has enabled facile preparation of segmented hyperbranched polymer (SHP) by using a chain transfer monomer (CTM)-based double-head agent during the past decade. Meanwhile, the design and development of block-statistical copolymers has been proven in our recent studies to be a simple yet effective way to address the dilemma of extracellular stability versus high intracellular delivery efficacy. To integrate the advantages of both hyperbranched and block-statistical structures, we herein report the fabrication of a hyperbranched block-statistical copolymer-based prodrug with pH and reduction dual sensitivities using RAFT-SCVP and post-polymerization click coupling. The external homo oligo(ethylene glycol methyl ether methacrylate) (OEGMA) block provides sufficient extracellular colloidal stability for the nanocarriers by steric hindrance, and the interior OEGMA units incorporated by the statistical copolymerization promote intracellular drug release by facilitating the permeation of GSH and H+ for the cleavage of the reduction-responsive disulfide bond and the pH-labile carbonate link, as well as weakening the hydrophobic encapsulation of drug molecules. The delivery efficacy of the target hyperbranched block-statistical copolymer-based prodrug was evaluated in terms of in vitro drug release and cytotoxicity studies, which confirm both acidic pH- and reduction-triggered drug release for inhibiting proliferation of HeLa cells. Interestingly, the simultaneous application of both acidic pH and GSH triggers promoted the cleavage and release of CPT significantly compared to the application of a single trigger. This study thus developed a facile approach toward hyperbranched polymer-based prodrugs with high therapeutic efficacy for anticancer drug delivery.

  19. A Survey of Logic Formalisms to Support Mishap Analysis

    NASA Technical Reports Server (NTRS)

    Johnson, Chris; Holloway, C. M.

    2003-01-01

    Mishap investigations provide important information about adverse events and near miss incidents. They are intended to help avoid any recurrence of previous failures. Over time, they can also yield statistical information about incident frequencies that helps to detect patterns of failure and can validate risk assessments. However, the increasing complexity of many safety critical systems is posing new challenges for mishap analysis. Similarly, the recognition that many failures have complex, systemic causes has helped to widen the scope of many mishap investigations. These two factors have combined to pose new challenges for the analysis of adverse events. A new generation of formal and semi-formal techniques has been proposed to help investigators address these problems. We introduce the term mishap logics to collectively describe these notations that might be applied to support the analysis of mishaps. The proponents of these notations have argued that they can be used to formally prove that certain events created the necessary and sufficient causes for a mishap to occur. These proofs can be used to reduce the bias that is often perceived to affect the interpretation of adverse events. Others have argued that one cannot use logic formalisms to prove causes in the same way that one might prove propositions or theorems. Such mechanisms cannot accurately capture the wealth of inductive, deductive and statistical forms of inference that investigators must use in their analysis of adverse events. This paper provides an overview of these mishap logics. It also identifies several additional classes of logic that might also be used to support mishap analysis.

  20. New curricular design in biostatistics to prepare residents for an evidence-based practice and lifelong learning education: a pilot approach.

    PubMed

    Arias, A; Peters, O A; Broyles, I L

    2017-10-01

    To develop, implement and evaluate an innovative curriculum in biostatistics in response to the need to foster critical thinking in graduate healthcare education for evidence-based practice and lifelong learning education. The curriculum was designed for first-year residents in a postgraduate endodontic programme using a six-step approach to curriculum development to provide sufficient understanding to critically evaluate biomedical publications, to design the best research strategy to address a specific problem and to analyse data by appropriate statistical test selection. Multiple learner-centred instructional methods and formative and summative assessments (written tasks, simulation exercises, portfolios and pre-post knowledge tests) were used to accomplish the learning outcomes. The analysis of the achievement of the group of students and a satisfaction survey for further feedback provided to the residents at the end of the curriculum were used for curriculum evaluation. All residents demonstrated competency at the end of the curriculum. The correct answer rate changed from 36.9% in the pre-test to 79.8% in the post-test. No common errors were detected in the rest of the assessment activities. All participants completed the questionnaire demonstrating high satisfaction for each independent category and with the overall educational programme, instruction and course in general. The curriculum was validated by the assessment of students' performance and a satisfaction survey, offering an example of a practical approach to the teaching of statistics to prepare students for a successful evidence-based endodontic practice and lifelong learning education as practicing clinicians. © 2016 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  1. On the linearity of tracer bias around voids

    NASA Astrophysics Data System (ADS)

    Pollina, Giorgia; Hamaus, Nico; Dolag, Klaus; Weller, Jochen; Baldi, Marco; Moscardini, Lauro

    2017-07-01

    The large-scale structure of the Universe can be observed only via luminous tracers of the dark matter. However, the clustering statistics of tracers are biased and depend on various properties, such as their host-halo mass and assembly history. On very large scales, this tracer bias results in a constant offset in the clustering amplitude, known as linear bias. Towards smaller non-linear scales, this is no longer the case and tracer bias becomes a complicated function of scale and time. We focus on tracer bias centred on cosmic voids, i.e. depressions of the density field that spatially dominate the Universe. We consider three types of tracers: galaxies, galaxy clusters and active galactic nuclei, extracted from the hydrodynamical simulation Magneticum Pathfinder. In contrast to common clustering statistics that focus on auto-correlations of tracers, we find that void-tracer cross-correlations are successfully described by a linear bias relation. The tracer-density profile of voids can thus be related to their matter-density profile by a single number. We show that it coincides with the linear tracer bias extracted from the large-scale auto-correlation function and expectations from theory, if sufficiently large voids are considered. For smaller voids we observe a shift towards higher values. This has important consequences for cosmological parameter inference, as the problem of unknown tracer bias is alleviated up to a constant number. The smallest scales in existing data sets become accessible to simpler models, providing numerous modes of the density field that have been disregarded so far, but may help to further reduce statistical errors in constraining cosmology.

  2. Interventions for reducing self-stigma in people with mental illnesses: a systematic review of randomized controlled trials.

    PubMed

    Büchter, Roland Brian; Messer, Melanie

    2017-01-01

    Background: Self-stigma occurs when people with mental illnesses internalize negative stereotypes and prejudices about their condition. It can reduce help-seeking behaviour and treatment adherence. The effectiveness of interventions aimed at reducing self-stigma in people with mental illness is systematically reviewed. Results are discussed in the context of a logic model of the broader social context of mental illness stigma. Methods: Medline, Embase, PsycINFO, ERIC, and CENTRAL were searched for randomized controlled trials in November 2013. Studies were assessed with the Cochrane risk of bias tool. Results: Five trials were eligible for inclusion, four of which provided data for statistical analyses. Four studies had a high risk of bias. The quality of evidence was very low for each set of interventions and outcomes. The interventions studied included various group based anti-stigma interventions and an anti-stigma booklet. The intensity and fidelity of most interventions was high. Two studies were considered to be sufficiently homogeneous to be pooled for the outcome self-stigma. The meta-analysis did not find a statistically significant effect (SMD [95% CI] at 3 months: -0.26 [-0.64, 0.12], I² = 0%, n=108). None of the individual studies found sustainable effects on other outcomes, including recovery, help-seeking behaviour and self-stigma. Conclusions: The effectiveness of interventions against self-stigma is uncertain. Previous studies lacked statistical power, used questionable outcome measures and had a high risk of bias. Future studies should be based on robust methods and consider practical implications regarding intervention development (relevance, implementability, and placement in routine services).

  3. The Impact of Subsampling on MODIS Level-3 Statistics of Cloud Optical Thickness and Effective Radius

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros

    2004-01-01

    The MODIS Level-3 optical thickness and effective radius cloud product is a gridded 1 deg. x 1 deg. dataset that is derived from aggregation and subsampling at 5 km of 1-km resolution Level-2 orbital swath data (Level-2 granules). This study examines the impact of the 5 km subsampling on the mean, standard deviation and inhomogeneity parameter statistics of optical thickness and effective radius. The methodology is simple and consists of estimating mean errors for a large collection of Terra and Aqua Level-2 granules by taking the difference of the statistics at the original and subsampled resolutions. It is shown that the Level-3 sampling does not affect the various quantities investigated to the same degree, with second-order moments suffering greater subsampling errors, as expected. Mean errors drop dramatically when averages over a sufficient number of regions (e.g., monthly and/or latitudinal averages) are taken, pointing to a dominance of errors that are of a random nature. When histograms built from subsampled data with the same binning rules as in the Level-3 dataset are used to reconstruct the quantities of interest, the mean errors do not deteriorate significantly. The results in this paper provide guidance to users of MODIS Level-3 optical thickness and effective radius cloud products on the range of errors due to subsampling they should expect, and perhaps account for, in scientific work with this dataset. In general, subsampling errors should not be a serious concern when moderate temporal and/or spatial averaging is performed.
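
    A toy Python illustration of the error-estimation approach described above: compute grid-cell statistics from all full-resolution pixels and from every fifth pixel, and take the difference. The synthetic "optical thickness" field merely stands in for a Level-2 granule.

      import numpy as np

      rng = np.random.default_rng(4)
      # Synthetic 1-km "cloud optical thickness" field for one grid cell (about 100 x 100 pixels).
      tau = rng.gamma(shape=2.0, scale=5.0, size=(100, 100))

      full_mean, full_std = tau.mean(), tau.std()
      sub = tau[::5, ::5]                       # 5-km subsampling, as in the Level-3 aggregation
      sub_mean, sub_std = sub.mean(), sub.std()

      print(f"mean subsampling error: {sub_mean - full_mean:+.3f}")
      print(f"std  subsampling error: {sub_std - full_std:+.3f}")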

  4. 77 FR 31220 - Microloan Operating Loans

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-25

    ... sufficient credit from other sources; have sufficient applicable education, on-the-job training, or farming... Administration's Lets Move initiative, offering opportunities for niche-type urban farms to market directly to... in their farming ventures. FSA has the responsibility of providing credit counseling and supervision...

  5. Assessing sufficiency of thermal riverscapes for resilient salmon and steelhead populations

    EPA Science Inventory

    Resilient salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. Efforts to protect, enhance and restore watershed thermal regimes for salmon may target specific location...

  6. [How reliable is the monitoring for doping?].

    PubMed

    Hüsler, J

    1990-12-01

    The reliability of doping control, i.e., of the chemical analysis of urine samples in the accredited laboratories and the resulting decisions, is discussed using probabilistic and statistical methods. Basically, we evaluated and estimated the positive predictive value, which is the probability that a urine sample contains prohibited doping substances given a positive test decision. Since there are no statistical data and evidence for some important quantities related to the predictive value, an exact evaluation is not possible; only conservative lower bounds can be given. We found that the predictive value is at least 90% or 95% with respect to the analysis and decision based on the A-sample only, and at least 99% with respect to both the A- and B-samples. A more realistic assessment, though without sufficient statistical confidence, indicates that the true predictive value is significantly larger than these lower estimates.
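
    A short worked example (Python) of the quantity discussed above: the positive predictive value follows from Bayes' theorem given the test's sensitivity, its false-positive rate, and the prevalence of doping among tested athletes. The numerical rates are assumptions for illustration, not values from the article.

      sensitivity = 0.95        # P(positive test | doped) -- assumed
      false_positive = 0.001    # P(positive test | clean) -- assumed
      prevalence = 0.03         # assumed fraction of tested athletes who doped

      p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
      ppv = sensitivity * prevalence / p_positive
      print(f"positive predictive value = {ppv:.3f}")
      # With these numbers the PPV is about 0.967; lowering the false-positive rate
      # (e.g. by requiring the confirmatory B-sample) pushes it toward 1.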

  7. Statistical inference involving binomial and negative binomial parameters.

    PubMed

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2009-05-01

    Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.

  8. Sampling and counting genome rearrangement scenarios

    PubMed Central

    2015-01-01

    Background Even for moderate-size inputs, there are a tremendous number of optimal rearrangement scenarios, regardless of what the model is and which specific question is to be answered. Therefore, giving one optimal solution might be misleading and cannot be used for statistical inference. Statistically well-founded methods are necessary to sample uniformly from the solution space; then a small number of samples is sufficient for statistical inference. Contribution In this paper, we give a mini-review of the state of the art of sampling and counting rearrangement scenarios, focusing on the reversal, DCJ and SCJ models. In addition, we give a Gibbs sampler for sampling most parsimonious labelings of evolutionary trees under the SCJ model. The method has been implemented and tested on real-life data. The software package together with example data can be downloaded from http://www.renyi.hu/~miklosi/SCJ-Gibbs/ PMID:26452124

  9. Application of the Socio-Ecological Model to predict physical activity behaviour among Nigerian University students.

    PubMed

    Essiet, Inimfon Aniema; Baharom, Anisah; Shahar, Hayati Kadir; Uzochukwu, Benjamin

    2017-01-01

    Physical activity among university students is a catalyst for habitual physical activity in adulthood. Physical activity has many health benefits besides the improvement in academic performance. The present study assessed the predictors of physical activity among Nigerian university students using the Social Ecological Model (SEM). This cross-sectional study recruited first-year undergraduate students at the University of Uyo, Nigeria, by multistage sampling. The International Physical Activity Questionnaire (IPAQ) short version was used to assess physical activity in the study. Factors were categorised according to the Socio-Ecological Model, which consisted of the individual, social environment, physical environment and policy levels. Data were analysed using the IBM SPSS statistical software, version 22. Simple and multiple logistic regression were used to determine the predictors of sufficient physical activity. A total of 342 respondents completed the study questionnaire. The majority of respondents (93.6%) reported sufficient physical activity at 7-day recall. Multivariate analysis revealed that respondents belonging to the Ibibio ethnic group were about four times more likely to be sufficiently active compared to those who belonged to the other ethnic groups (AOR = 3.725, 95% CI = 1.383 to 10.032). Also, participants who had a normal weight were about four times more likely to be physically active compared to those who were underweight (AOR = 4.268, 95% CI = 1.323 to 13.772). This study concluded that physical activity levels among respondents were sufficient. It is suggested that emphasis be given to implementing interventions aimed at sustaining sufficient levels of physical activity among students.
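
    A hedged sketch of the analysis approach: a multiple logistic regression returning adjusted odds ratios (AOR) with 95% confidence intervals for predictors of sufficient physical activity. The data below are synthetic, and the variable names only mimic the kinds of predictors reported in the study.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n = 342
      df = pd.DataFrame({
          "normal_weight": rng.integers(0, 2, n),      # 1 = normal BMI, 0 = underweight (synthetic)
          "ibibio": rng.integers(0, 2, n),             # 1 = Ibibio ethnic group (synthetic)
      })
      # Simulate the binary outcome so that both predictors raise the odds of sufficient activity.
      logit_p = -1.0 + 1.4 * df["normal_weight"] + 1.3 * df["ibibio"]
      df["sufficient_pa"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

      X = sm.add_constant(df[["normal_weight", "ibibio"]])
      fit = sm.Logit(df["sufficient_pa"], X).fit(disp=False)

      aor = np.exp(fit.params)                         # adjusted odds ratios
      ci = np.exp(fit.conf_int())                      # 95% confidence intervals
      print(pd.concat([aor.rename("AOR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))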

  10. Information-Based Approach to Unsupervised Machine Learning

    DTIC Science & Technology

    2013-06-19

    Leibler, R. A. (1951). On information and sufficiency. Annals of Mathematical Statistics, 22, 79–86. Minka, T. P. (2000). Old and new matrix algebra use ...and Arabie, P. Comparing partitions. Journal of Classification, 2(1):193–218, 1985. Kullback, S. and Leibler, R. A. On information and sufficiency...the test input density to a linear combination of class-wise input distributions under the Kullback-Leibler (KL) divergence (Kullback
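
    Since the references excerpted above revolve around the Kullback-Leibler divergence, here is a minimal Python reminder of that quantity for two discrete distributions; the example distributions are arbitrary.

      import numpy as np

      def kl_divergence(p, q):
          """D_KL(p || q) = sum_i p_i * log(p_i / q_i) for discrete distributions."""
          p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
          p, q = p / p.sum(), q / q.sum()
          mask = p > 0                      # 0 * log(0) is treated as 0
          return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

      p = [0.5, 0.3, 0.2]
      q = [0.4, 0.4, 0.2]
      print(f"D_KL(p || q) = {kl_divergence(p, q):.4f} nats")   # note: not symmetric in p and q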

  11. Orbit-Attitude Changes of Objects in Near Earth Space Induced by Natural Charging

    DTIC Science & Technology

    2017-05-02

    depends upon Earth’s magnetosphere. Typically, magnetosphere models can be grouped under two classes: statistical and physics-based. The Physics ...models were primarily physics-based due to unavailability of sufficient space data, but over the last three decades, with the availability of huge...Attitude Determination and Control,” Astrophysics and Space Science Library, Vol. 73, D. Reidel Publishing Company, London, 1978 [17] Fairfield

  12. Online Learning in Higher Education: Necessary and Sufficient Conditions

    ERIC Educational Resources Information Center

    Lim, Cher Ping

    2005-01-01

    The spectacular development of information and communication technologies through the Internet has provided opportunities for students to explore the virtual world of information. In this article, the author discusses the necessary and sufficient conditions for successful online learning in educational institutions. The necessary conditions…

  13. Predictive sufficiency and the use of stored internal state

    NASA Technical Reports Server (NTRS)

    Musliner, David J.; Durfee, Edmund H.; Shin, Kang G.

    1994-01-01

    In all embedded computing systems, some delay exists between sensing and acting. By choosing an action based on sensed data, a system is essentially predicting that there will be no significant changes in the world during this delay. However, the dynamic and uncertain nature of the real world can make these predictions incorrect, and thus, a system may execute inappropriate actions. Making systems more reactive by decreasing the gap between sensing and action leaves less time for predictions to err, but still provides no principled assurance that they will be correct. Using the concept of predictive sufficiency described in this paper, a system can prove that its predictions are valid, and that it will never execute inappropriate actions. In the context of our CIRCA system, we also show how predictive sufficiency allows a system to guarantee worst-case response times to changes in its environment. Using predictive sufficiency, CIRCA is able to build real-time reactive control plans which provide a sound basis for performance guarantees that are unavailable with other reactive systems.

  14. Do sufficient vitamin D levels at the end of summer in children and adolescents provide an assurance of vitamin D sufficiency at the end of winter? A cohort study.

    PubMed

    Shakeri, Habibesadat; Pournaghi, Seyed-Javad; Hashemi, Javad; Mohammad-Zadeh, Mohammad; Akaberi, Arash

    2017-10-26

    The changes in serum 25-hydroxyvitamin D (25(OH)D) in adolescents from summer to winter and optimal serum vitamin D levels in the summer to ensure adequate vitamin D levels at the end of winter are currently unknown. This study was conducted to address this knowledge gap. The study was conducted as a cohort study. Sixty-eight participants aged 7-18 years who had sufficient vitamin D levels at the end of the summer in 2011 were selected using stratified random sampling. Subsequently, the participants' vitamin D levels were measured at the end of the winter in 2012. A receiver operating characteristic (ROC) curve was used to determine optimal cutoff points for vitamin D at the end of the summer to predict sufficient vitamin D levels at the end of the winter. The results indicated that 89.7% of all the participants had a decrease in vitamin D levels from summer to winter: 14.7% of them were vitamin D-deficient, 36.8% had insufficient vitamin D concentrations and only 48.5% were able to maintain sufficient vitamin D. The optimal cutoff point to provide assurance of sufficient serum vitamin D at the end of the winter was 40 ng/mL at the end of the summer. Sex, age and vitamin D levels at the end of the summer were significant predictors of non-sufficient vitamin D at the end of the winter. In this age group, a dramatic reduction in vitamin D was observed over the follow-up period. Sufficient vitamin D at the end of the summer did not guarantee vitamin D sufficiency at the end of the winter. We found 40 ng/mL as an optimal cutoff point.

  15. A survey and evaluations of histogram-based statistics in alignment-free sequence comparison.

    PubMed

    Luczak, Brian B; James, Benjamin T; Girgis, Hani Z

    2017-12-06

    Since the dawn of the bioinformatics field, sequence alignment scores have been the main method for comparing sequences. However, alignment algorithms are quadratic, requiring long execution time. As alternatives, scientists have developed tens of alignment-free statistics for measuring the similarity between two sequences. We surveyed tens of alignment-free k-mer statistics. Additionally, we evaluated 33 statistics and multiplicative combinations between the statistics and/or their squares. These statistics are calculated on two k-mer histograms representing two sequences. Our evaluations using global alignment scores revealed that the majority of the statistics are sensitive and capable of finding similar sequences to a query sequence. Therefore, any of these statistics can filter out dissimilar sequences quickly. Further, we observed that multiplicative combinations of the statistics are highly correlated with the identity score. Furthermore, combinations involving sequence length difference or Earth Mover's distance, which takes the length difference into account, are always among the highest correlated paired statistics with identity scores. Similarly, paired statistics including length difference or Earth Mover's distance are among the best performers in finding the K-closest sequences. Interestingly, similar performance can be obtained using histograms of shorter words, resulting in reducing the memory requirement and increasing the speed remarkably. Moreover, we found that simple single statistics are sufficient for processing next-generation sequencing reads and for applications relying on local alignment. Finally, we measured the time requirement of each statistic. The survey and the evaluations will help scientists with identifying efficient alternatives to the costly alignment algorithm, saving thousands of computational hours. The source code of the benchmarking tool is available as Supplementary Materials. © The Author 2017. Published by Oxford University Press.
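
A minimal illustration of the kind of histogram-based statistic surveyed here (not the authors' benchmarking tool): build k-mer count histograms for two sequences and compare them with two simple alignment-free measures. The sequences, the choice of k, and the function names below are invented for the example.

```python
# Sketch: k-mer histograms and two simple alignment-free statistics.
from collections import Counter
from math import sqrt

def kmer_histogram(seq, k=4):
    """Count all overlapping k-mers in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def euclidean_distance(h1, h2):
    """Euclidean distance between two k-mer count vectors."""
    keys = set(h1) | set(h2)
    return sqrt(sum((h1.get(m, 0) - h2.get(m, 0)) ** 2 for m in keys))

def intersection_similarity(h1, h2):
    """Normalized histogram intersection: 1.0 means identical histograms."""
    keys = set(h1) | set(h2)
    shared = sum(min(h1.get(m, 0), h2.get(m, 0)) for m in keys)
    total = min(sum(h1.values()), sum(h2.values()))
    return shared / total if total else 0.0

if __name__ == "__main__":
    a = "ACGTACGTGGTTACGTACGA"
    b = "ACGTACGAGGTTACGTTCGA"
    ha, hb = kmer_histogram(a), kmer_histogram(b)
    print(euclidean_distance(ha, hb), intersection_similarity(ha, hb))
```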

  16. Model parameter learning using Kullback-Leibler divergence

    NASA Astrophysics Data System (ADS)

    Lin, Chungwei; Marks, Tim K.; Pajovic, Milutin; Watanabe, Shinji; Tung, Chih-kuan

    2018-02-01

    In this paper, we address the following problem: For a given set of spin configurations whose probability distribution is of the Boltzmann type, how do we determine the model coupling parameters? We demonstrate that directly minimizing the Kullback-Leibler divergence is an efficient method. We test this method against the Ising and XY models on the one-dimensional (1D) and two-dimensional (2D) lattices, and provide two estimators to quantify the model quality. We apply this method to two types of problems. First, we apply it to the real-space renormalization group (RG). We find that the obtained RG flow is sufficiently good for determining the phase boundary (within 1% of the exact result) and the critical point, but not accurate enough for critical exponents. The proposed method provides a simple way to numerically estimate amplitudes of the interactions typically truncated in the real-space RG procedure. Second, we apply this method to the dynamical system composed of self-propelled particles, where we extract the parameter of a statistical model (a generalized XY model) from a dynamical system described by the Viscek model. We are able to obtain reasonable coupling values corresponding to different noise strengths of the Viscek model. Our method is thus able to provide quantitative analysis of dynamical systems composed of self-propelled particles.
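
A hedged sketch of the general idea rather than the paper's implementation: for a toy 1-D Ising chain small enough to enumerate exactly, a coupling J is recovered by gradient descent on the KL divergence between the empirical distribution of sampled configurations and the Boltzmann distribution, using the standard identity that the gradient is the difference between the model and data expectations of the sufficient statistic. Chain length, sample size and learning rate are arbitrary choices.

```python
# Sketch: fit the coupling of a tiny 1-D Ising chain by minimizing KL divergence.
import itertools
import numpy as np

N, J_TRUE = 8, 0.7
RNG = np.random.default_rng(0)
CONFIGS = np.array(list(itertools.product([-1, 1], repeat=N)))   # all 2^N states
STAT = (CONFIGS[:, :-1] * CONFIGS[:, 1:]).sum(axis=1)            # sum_i s_i s_{i+1}

def boltzmann(J):
    """Exact Boltzmann distribution p(s) ~ exp(J * sum s_i s_{i+1})."""
    w = np.exp(J * STAT)
    return w / w.sum()

# Draw "observed" configurations from the true model.
p_true = boltzmann(J_TRUE)
sample_idx = RNG.choice(len(CONFIGS), size=5000, p=p_true)
data_mean = STAT[sample_idx].mean()            # empirical expectation of the statistic

# Gradient descent on KL(empirical || model): dKL/dJ = <stat>_model - <stat>_data.
J = 0.0
for _ in range(500):
    model_mean = (boltzmann(J) * STAT).sum()
    J -= 0.05 * (model_mean - data_mean)
print(f"estimated J = {J:.3f} (true {J_TRUE})")
```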

  17. Direct electron-pair production by high energy heavy charged particles

    NASA Technical Reports Server (NTRS)

    Takahashi, Y.; Gregory, J. C.; Hayashi, T.; Dong, B. L.

    1989-01-01

    Direct electron pair production via virtual photons by moving charged particles is a unique electromagnetic process having a substantial dependence on energy. Most electromagnetic processes, including transition radiation, cease to be sensitive to the incident energy above 10 TeV/AMU. Thus, it is expected that, upon establishment of the cross section and detection efficiency of this process, it may provide a new energy measuring technique above 10 TeV/AMU. Three accelerator exposures of emulsion chambers designed for measurements of direct electron-pairs were performed. The objectives of the investigation were to provide the fundamental cross-section data in emulsion stacks to find the best-fit theoretical model, and to provide a calibration of measurements of direct electron-pairs in emulsion chamber configurations. This paper reports the design of the emulsion chambers, accelerator experiments, microscope measurements, and related considerations for future improvements of the measurements and for possible applications to high energy cosmic ray experiments. Also discussed are the results from scanning 56 m of emulsion tracks at 1200x magnification so that scanning efficiency is optimized. Measurements of the delta-ray range spectrum were also performed for much shorter track lengths, but with sufficiently large statistics in the number of measured delta-rays.

  18. The effect of increasing the supply of skilled health providers on pregnancy and birth outcomes: evidence from the midwives service scheme in Nigeria.

    PubMed

    Okeke, Edward; Glick, Peter; Chari, Amalavoyal; Abubakar, Isa Sadeeq; Pitchforth, Emma; Exley, Josephine; Bashir, Usman; Gu, Kun; Onwujekwe, Obinna

    2016-08-23

    Limited availability of skilled health providers in developing countries is thought to be an important barrier to achieving maternal and child health-related MDG goals. Little is known, however, about the extent to which scaling-up supply of health providers will lead to improved pregnancy and birth outcomes. We study the effects of the Midwives Service Scheme (MSS), a public sector program in Nigeria that increased the supply of skilled midwives in rural communities on pregnancy and birth outcomes. We surveyed 7,104 women with a birth within the preceding five years across 12 states in Nigeria and compared changes in birth outcomes in MSS communities to changes in non-MSS communities over the same period. The main measured effect of the scheme was a 7.3-percentage point increase in antenatal care use in program clinics and a 5-percentage point increase in overall use of antenatal care, both within the first year of the program. We found no statistically significant effect of the scheme on skilled birth attendance or on maternal delivery complications. This study highlights the complexity of improving maternal and child health outcomes in developing countries, and shows that scaling up supply of midwives may not be sufficient on its own.

  19. Spectral Confusion for Cosmological Surveys of Redshifted C II Emission

    NASA Technical Reports Server (NTRS)

    Kogut, A.; Dwek, E.; Moseley, S. H.

    2015-01-01

    Far-infrared cooling lines are ubiquitous features in the spectra of star-forming galaxies. Surveys of redshifted fine-structure lines provide a promising new tool to study structure formation and galactic evolution at redshifts including the epoch of reionization as well as the peak of star formation. Unlike neutral hydrogen surveys, where the 21 cm line is the only bright line, surveys of redshifted fine-structure lines suffer from confusion generated by line broadening, spectral overlap of different lines, and the crowding of sources with redshift. We use simulations to investigate the resulting spectral confusion and derive observing parameters to minimize these effects in pencil-beam surveys of redshifted far-IR line emission. We generate simulated spectra of the 17 brightest far-IR lines in galaxies, covering the 150-1300 µm wavelength region corresponding to redshifts 0 < z < 7, and develop a simple iterative algorithm that successfully identifies the 158 µm [C II] line and other lines. Although the [C II] line is a principal coolant for the interstellar medium, the assumption that the brightest observed lines in a given line of sight are always [C II] lines is a poor approximation to the simulated spectra once other lines are included. Blind line identification requires detection of fainter companion lines from the same host galaxies, driving survey sensitivity requirements. The observations require moderate spectral resolution 700 < R < 4000 with angular resolution between 20″ and 10′, sufficiently narrow to minimize confusion yet sufficiently large to include a statistically meaningful number of sources.

  20. A Communication Framework for Collaborative Defense

    DTIC Science & Technology

    2009-02-28

    We have been able to provide sufficient automation to be able to build up the most extensive application signature database in the world with a fraction of...that are well understood in the context of databases. These techniques allow users to quickly scan for the existence of a key in a database.

  1. Measured and perceived effects of computerized scientist mentors on student learning and motivation in science

    NASA Astrophysics Data System (ADS)

    Bowman, Catherine Dodds Dunham

    Unease about declining U.S. science literacy and inquiry skills drives much innovation in science education, including the quest for authentic science experiences for students. One response is student-scientist partnerships (SSP), involving small numbers of students in scientific investigations with scientist mentors. Alternatively, science inquiry programs provide large numbers of students with opportunities to pursue their own investigations but without extensive access to experts, potentially limiting the possible cognitive and affective gains. This mixed methods study investigates whether it is possible to replicate some of SSPs' benefits on a larger scale through use of a computerized agent designed as a "virtual" scientist mentor. Middle school students (N=532) were randomly assigned to two versions of an agent (or to a control group) providing either content-only or content and interpersonal mentoring while they participated in a three-week curriculum. Results indicate that, on average, students gained in content knowledge but there was no statistically significant difference between the three conditions. In terms of motivation, students exhibited no change, on average, with no statistically significant difference between the three conditions. These data indicate that the treatment conditions neither facilitate nor inhibit student learning and motivation. Interviews with a subsample (n=70), however, suggest that students believe the agents facilitated their learning, eased the workload, provided a trusted source of information, and were enjoyable to use. Teachers reported that the agents provided alternative views of scientists and science, generated class discussion, and met the needs of high and low-achieving students. This difference between measured and perceived benefits may result from measures that were not sufficiently sensitive to capture differences. Alternatively, a more sophisticated agent might better replicate mentoring functions known to produce cognitive and affective gains. Even without established learning or motivational gains, practitioners may want to employ agents for their ability to provide reliable information, expanded perspectives on science and scientists, and a non-intimidating setting for students to ask questions. For computerized agent researchers, this study provides a first step in exploring the affordances and challenges of sustained use of agents in real school settings with the goal of improving science education.

  2. Rescaled earthquake recurrence time statistics: application to microrepeaters

    NASA Astrophysics Data System (ADS)

    Goltz, Christian; Turcotte, Donald L.; Abaimov, Sergey G.; Nadeau, Robert M.; Uchida, Naoki; Matsuzawa, Toru

    2009-01-01

    Slip on major faults primarily occurs during `characteristic' earthquakes. The recurrence statistics of characteristic earthquakes play an important role in seismic hazard assessment. A major problem in determining applicable statistics is the short sequences of characteristic earthquakes that are available worldwide. In this paper, we introduce a rescaling technique in which sequences can be superimposed to establish larger numbers of data points. We consider the Weibull and log-normal distributions; in both cases we rescale the data using means and standard deviations. We test our approach utilizing sequences of microrepeaters, micro-earthquakes which recur in the same location on a fault. It seems plausible to regard these earthquakes as a miniature version of the classic characteristic earthquakes. Microrepeaters are much more frequent than major earthquakes, leading to longer sequences for analysis. In this paper, we present results for the analysis of recurrence times for several microrepeater sequences from Parkfield, CA as well as NE Japan. We find that, once the respective sequence can be considered to be of sufficient stationarity, the statistics can be well fitted by either a Weibull or a log-normal distribution. We clearly demonstrate this fact by our technique of rescaled combination. We conclude that the recurrence statistics of the microrepeater sequences we consider are similar to the recurrence statistics of characteristic earthquakes on major faults.
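
A minimal sketch of the rescaled-combination idea using synthetic data rather than the microrepeater catalogues: each short recurrence-time sequence is rescaled by its own mean (the paper also uses standard deviations), the rescaled values are pooled, and a Weibull distribution is fitted to the pooled sample. Distribution parameters and sequence sizes below are invented for the example.

```python
# Sketch: rescale short recurrence-time sequences, pool them, fit a Weibull.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Pretend we have several short sequences with different mean recurrence times.
sequences = [stats.weibull_min.rvs(1.8, scale=s, size=12, random_state=rng)
             for s in (2.0, 5.0, 11.0, 30.0)]

pooled = np.concatenate([seq / seq.mean() for seq in sequences])  # rescale and pool

# Fit a two-parameter Weibull (location fixed at zero) to the pooled sample.
shape, loc, scale = stats.weibull_min.fit(pooled, floc=0)
ks = stats.kstest(pooled, "weibull_min", args=(shape, loc, scale))
print(f"shape={shape:.2f} scale={scale:.2f}  KS p-value={ks.pvalue:.2f}")
```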

  3. Evaluating the sufficiency of protected lands for maintaining wildlife population connectivity in the northern Rocky Mountains

    Treesearch

    Samuel A. Cushman; Erin L. Landguth; Curtis H. Flather

    2012-01-01

    Aim: The goal of this study was to evaluate the sufficiency of the network of protected lands in the U.S. northern Rocky Mountains in providing protection for habitat connectivity for 105 hypothetical organisms. A large proportion of the landscape...

  4. Automatic Classification of Medical Text: The Influence of Publication Form1

    PubMed Central

    Cole, William G.; Michael, Patricia A.; Stewart, James G.; Blois, Marsden S.

    1988-01-01

    Previous research has shown that within the domain of medical journal abstracts the statistical distribution of words is neither random nor uniform, but is highly characteristic. Many words are used mainly or solely by one medical specialty or when writing about one particular level of description. Due to this regularity of usage, automatic classification within journal abstracts has proved quite successful. The present research asks two further questions. It investigates whether this statistical regularity and automatic classification success can also be achieved in medical textbook chapters. It then goes on to see whether the statistical distribution found in textbooks is sufficiently similar to that found in abstracts to permit accurate classification of abstracts based solely on previous knowledge of textbooks. 14 textbook chapters and 45 MEDLINE abstracts were submitted to an automatic classification program that had been trained only on chapters drawn from a standard textbook series. Statistical analysis of the properties of abstracts vs. chapters revealed important differences in word use. Automatic classification performance was good for chapters, but poor for abstracts.

  5. High Precision Measurement of the Neutron Polarizabilities via Compton Scattering on Deuterium at Eγ=65 MeV

    NASA Astrophysics Data System (ADS)

    Sikora, Mark; Compton@HIGS Team

    2017-01-01

    The electric (αn) and magnetic (βn) polarizabilities of the neutron are fundamental properties arising from its internal structure which describe the nucleon's response to applied electromagnetic fields. Precise measurements of the polarizabilities provide crucial constraints on models of Quantum Chromodynamics (QCD) in the low energy regime such as Chiral Effective Field Theories as well as emerging ab initio calculations from lattice-QCD. These values also contribute the most uncertainty to theoretical determinations of the proton-neutron mass difference. Historically, the experimental challenges to measuring αn and βn have been due to the difficulty in obtaining suitable targets and sufficiently intense beams, leading to significant statistical uncertainties. To address these issues, a program of Compton scattering experiments on the deuteron is underway at the High Intensity Gamma Source (HI γS) at Duke University with the aim of providing the world's most precise measurement of αn and βn. We report measurements of the Compton scattering differential cross section obtained at an incident photon energy of 65 MeV and discuss the sensitivity of these data to the polarizabilities.

  6. High Precision Measurement of the Neutron Polarizabilities via Compton Scattering on Deuterium at HI γS

    NASA Astrophysics Data System (ADS)

    Sikora, Mark

    2016-09-01

    The electric (αn) and magnetic (βn) polarizabilities of the neutron are fundamental properties arising from its internal structure which describe the nucleon's response to applied electromagnetic fields. Precise measurements of the polarizabilities provide crucial constraints on models of Quantum Chromodynamics (QCD) in the low energy regime such as Chiral Effective Field Theories as well as emerging ab initio calculations from lattice-QCD. These values also contribute the most uncertainty to theoretical determinations of the proton-neutron mass difference. Historically, the experimental challenges to measuring αn and βn have been due to the difficulty in obtaining suitable targets and sufficiently intense beams, leading to significant statistical uncertainties. To address these issues, a program of Compton scattering experiments on the deuteron is underway at the High Intensity Gamma Source (HI γS) at Duke University with the aim of providing the world's most precise measurement of αn and βn. We report measurements of the Compton scattering differential cross section obtained at incident photon energies of 65 and 85 MeV and discuss the sensitivity of these data to the polarizabilities.

  7. PARAGON: A Systematic, Integrated Approach to Aerosol Observation and Modeling

    NASA Technical Reports Server (NTRS)

    Diner, David J.; Kahn, Ralph A.; Braverman, Amy J.; Davies, Roger; Martonchik, John V.; Menzies, Robert T.; Ackerman, Thomas P.; Seinfeld, John H.; Anderson, Theodore L.; Charlson, Robert J.; hide

    2004-01-01

    Aerosols are generated and transformed by myriad processes operating across many spatial and temporal scales. Evaluation of climate models and their sensitivity to changes, such as in greenhouse gas abundances, requires quantifying natural and anthropogenic aerosol forcings and accounting for other critical factors, such as cloud feedbacks. High accuracy is required to provide sufficient sensitivity to perturbations, separate anthropogenic from natural influences, and develop confidence in inputs used to support policy decisions. Although many relevant data sources exist, the aerosol research community does not currently have the means to combine these diverse inputs into an integrated data set for maximum scientific benefit. Bridging observational gaps, adapting to evolving measurements, and establishing rigorous protocols for evaluating models are necessary, while simultaneously maintaining consistent, well understood accuracies. The Progressive Aerosol Retrieval and Assimilation Global Observing Network (PARAGON) concept represents a systematic, integrated approach to global aerosol characterization, bringing together modern measurement and modeling techniques, geospatial statistics methodologies, and high-performance information technologies to provide the machinery necessary for achieving a comprehensive understanding of how aerosol physical, chemical, and radiative processes impact the Earth system. We outline a framework for integrating and interpreting observations and models and establishing an accurate, consistent and cohesive long-term data record.

  8. Decoding and disrupting left midfusiform gyrus activity during word reading

    PubMed Central

    Hirshorn, Elizabeth A.; Ward, Michael J.; Fiez, Julie A.; Ghuman, Avniel Singh

    2016-01-01

    The nature of the visual representation for words has been fiercely debated for over 150 y. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing, an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation. PMID:27325763

  9. Development and Piloting of a Food Safety Audit Tool for the Domestic Environment.

    PubMed

    Borrusso, Patricia; Quinlan, Jennifer J

    2013-12-04

    Survey and observation studies suggest that consumers often mishandle food in the home. There is a need for a standardized tool for researchers to objectively evaluate the prevalence and identify the nature of food safety risks in the domestic environment. An audit tool was developed to measure compliance with recommended sanitation, refrigeration and food storage conditions in the domestic kitchen. The tool was piloted by four researchers who independently completed the inspection in 22 homes. Audit tool questions were evaluated for reliability using the κ statistic. Questions that were not sufficiently reliable (κ < 0.5) or did not provide direct evidence of risk were revised or eliminated from the final tool. Piloting the audit tool found good reliability among 18 questions; 6 questions were revised and 28 eliminated, resulting in a final 24-question tool. The audit tool was able to identify potential food safety risks, including evidence of pest infestation (27%), incorrect refrigeration temperature (73%), and lack of hot water (>43 °C, 32%). The audit tool developed here provides an objective measure for researchers to observe and record the most prevalent food safety risks in consumers' kitchens and potentially compare risks among consumers of different demographics.
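
A small illustration of the reliability statistic mentioned above: Cohen's kappa for two raters answering one binary audit question across several kitchens. The study used four observers, so pairwise or Fleiss' kappa would apply in practice; the ratings below are invented for the example.

```python
# Sketch: Cohen's kappa for two raters on a binary audit question.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Example: 1 = "risk present", 0 = "risk absent" in 10 hypothetical kitchens.
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 0, 0, 1, 0, 0, 1, 1, 1, 1]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # questions with kappa < 0.5 were revised
```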

  10. Increasing topical anesthetic efficacy with microneedle application.

    PubMed

    Buhsem, Ömer; Aksoy, Alper; Kececi, Yavuz; Sir, Emin; Güngör, Melike

    2016-10-01

    Since topical anesthetics alone seldom provide adequate analgesia for laser resurfacing procedures, injectable forms of anesthesia are often required. However, their application is uncomfortable for the patient. This study investigated whether microneedle application enhances the efficacy of topical anesthetics. Forty-seven patients participated in the study. The topical anesthetic agent EMLA was applied to the whole face of the patients. Microneedle treatment was applied to one side of the face with a roller-type device. Whole-face carbon dioxide laser resurfacing therapy was then carried out. The pain that patients experienced was assessed using the visual analog scale (VAS) method. VAS scores of the two sides of the face were compared using the Wilcoxon signed-rank test. The mean VAS score of the microneedle-treated side was 2.1 ± 1.1 while that of the untreated side was 5.9 ± 0.9, and this difference was statistically significant (Wilcoxon signed-rank test, Z = -5.9683, p < 0.001). This study revealed that microneedle application, with a roller-type device, is a safe and easy procedure for providing sufficient anesthesia for facial laser resurfacing without the need for supplementary nerve blocks or injections.

  11. Decoding and disrupting left midfusiform gyrus activity during word reading.

    PubMed

    Hirshorn, Elizabeth A; Li, Yuanning; Ward, Michael J; Richardson, R Mark; Fiez, Julie A; Ghuman, Avniel Singh

    2016-07-19

    The nature of the visual representation for words has been fiercely debated for over 150 y. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing, an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation.

  12. A simplified method for identification of human cardiac myosin heavy-chain isoforms.

    PubMed

    Piao, Shengfu; Yu, Fushun; Mihm, Michael J; Reiser, Peter J; McCarthy, Patrick M; Van Wagoner, David R; Bauer, John Anthony

    2003-02-01

    Cardiac myosin is a central participant in the cross-bridge cycling that mediates myocyte contraction and consists of multiple subunits that mediate both hydrolysis of ATP and mechanical production of contractile force. Two isoforms of myosin heavy chain (MHC-α and MHC-β) are known to exist in mammalian cardiac tissue, and it is within this myosin subunit that ATPase activity resides. These isoforms differ by less than 0.2% in total molecular mass and amino acid sequence but, strikingly, influence the rate and efficiency of energy utilization for generation of contractile force. Changes in the MHC-α/MHC-β ratio have been classically viewed as an adaptation of a failing myocyte in both animal models and humans; however, their measurement has traditionally required specialized preparations and materials for sufficient resolution. Here we describe a greatly simplified method for routine assessments of myosin isoform composition in human cardiac tissues. The primary advantages of our approach include higher throughput and reduced supply costs with no apparent loss of statistical power, reproducibility or achieved results. Use of this more convenient method may provide enhanced access to an otherwise specialized technique and could provide additional opportunity for investigation of cardiac myocyte adaptive changes.

  13. Diffusion cannot govern the discharge of neurotransmitter in fast synapses.

    PubMed Central

    Khanin, R; Parnas, H; Segel, L

    1994-01-01

    In the present work we show that diffusion cannot provide the observed fast discharge of neurotransmitter from a synaptic vesicle during neurotransmitter release, mainly because it is not sufficiently rapid nor is it sufficiently temperature-dependent. Modeling the discharge from the vesicle into the cleft as a continuous point source, we have determined that discharge should occur in 50-75 microseconds, to provide the observed high concentrations of transmitter at the critical zone. PMID:7811953

  14. Using protein-protein interactions for refining gene networks estimated from microarray data by Bayesian networks.

    PubMed

    Nariai, N; Kim, S; Imoto, S; Miyano, S

    2004-01-01

    We propose a statistical method to estimate gene networks from DNA microarray data and protein-protein interactions. Because physical interactions between proteins or multiprotein complexes are likely to regulate biological processes, using only mRNA expression data is not sufficient for estimating a gene network accurately. Our method adds knowledge about protein-protein interactions to the estimation method of gene networks under a Bayesian statistical framework. In the estimated gene network, a protein complex is modeled as a virtual node based on principal component analysis. We show the effectiveness of the proposed method through the analysis of Saccharomyces cerevisiae cell cycle data. The proposed method improves the accuracy of the estimated gene networks, and successfully identifies some biological facts.

  15. An Integrative Account of Constraints on Cross-Situational Learning

    PubMed Central

    Yurovsky, Daniel; Frank, Michael C.

    2015-01-01

    Word-object co-occurrence statistics are a powerful information source for vocabulary learning, but there is considerable debate about how learners actually use them. While some theories hold that learners accumulate graded, statistical evidence about multiple referents for each word, others suggest that they track only a single candidate referent. In two large-scale experiments, we show that neither account is sufficient: Cross-situational learning involves elements of both. Further, the empirical data are captured by a computational model that formalizes how memory and attention interact with co-occurrence tracking. Together, the data and model unify opposing positions in a complex debate and underscore the value of understanding the interaction between computational and algorithmic levels of explanation. PMID:26302052

  16. Prediction of biomechanical parameters of the proximal femur using statistical appearance models and support vector regression.

    PubMed

    Fritscher, Karl; Schuler, Benedikt; Link, Thomas; Eckstein, Felix; Suhm, Norbert; Hänni, Markus; Hengg, Clemens; Schubert, Rainer

    2008-01-01

    Fractures of the proximal femur are one of the principal causes of mortality among elderly persons. Traditional methods for determining femoral fracture risk measure bone mineral density (BMD). However, BMD alone is not sufficient to predict bone failure load for an individual patient, and additional parameters have to be determined for this purpose. In this work, an approach is presented that uses statistical models of appearance to identify relevant regions and parameters for the prediction of biomechanical properties of the proximal femur. By using Support Vector Regression, the proposed model-based approach is capable of predicting two different biomechanical parameters accurately and fully automatically in two different testing scenarios.
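
A hedged sketch in the spirit of the described approach, with synthetic data standing in for appearance-model features and failure load: Support Vector Regression with a radial basis function kernel, evaluated by cross-validation. Feature construction, target, and hyperparameters are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: SVR on synthetic appearance-model features predicting "failure load".
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))          # stand-in for appearance-model mode weights
y = 3.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.3, size=120)  # synthetic target

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(2))
```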

  17. Quantum gas-liquid condensation in an attractive Bose gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koh, Shun-ichiro

    Gas-liquid condensation (GLC) in an attractive Bose gas is studied on the basis of statistical mechanics. Using some results in combinatorial mathematics, the following are derived. (1) With decreasing temperature, the Bose-statistical coherence grows in the many-body wave function, which gives rise to the divergence of the grand partition function prior to Bose-Einstein condensation. It is a quantum-mechanical analogue to the GLC in a classical gas (quantum GLC). (2) This GLC is triggered by the bosons with zero momentum. Compared with the classical GLC, an incomparably weaker attractive force creates it. For the system showing the quantum GLC, we discuss a cold helium-4 gas at sufficiently low pressure.

  18. Mathematics of Sensing, Exploitation, and Execution (MSEE) Hierarchical Representations for the Evaluation of Sensed Data

    DTIC Science & Technology

    2016-06-01

    theories of the mammalian visual system, and exploiting descriptive text that may accompany a still image for improved inference. The focus of the Brown team was on single images...test, computer vision, semantic description, street scenes, belief propagation, generative models, nonlinear filtering, sufficient statistics

  19. Alternative Matching Scores to Control Type I Error of the Mantel-Haenszel Procedure for DIF in Dichotomously Scored Items Conforming to 3PL IRT and Nonparametric 4PBCB Models

    ERIC Educational Resources Information Center

    Monahan, Patrick O.; Ankenmann, Robert D.

    2010-01-01

    When the matching score is either less than perfectly reliable or not a sufficient statistic for determining latent proficiency in data conforming to item response theory (IRT) models, Type I error (TIE) inflation may occur for the Mantel-Haenszel (MH) procedure or any differential item functioning (DIF) procedure that matches on summed-item…

  20. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…
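
A minimal sketch of iterative proportional fitting for the simplest case, a two-way table under the independence log-linear model: the fitted table is repeatedly scaled until its row and column sums match the observed marginal sums, which are the minimal sufficient statistics here. This illustrates the general algorithm, not Kelderman's IRT-specific implementation.

```python
# Sketch: iterative proportional fitting (IPF) from marginal sums.
import numpy as np

observed = np.array([[30.0, 10.0, 5.0],
                     [20.0, 40.0, 15.0]])
row_sums, col_sums = observed.sum(axis=1), observed.sum(axis=0)

fitted = np.ones_like(observed)          # start from a uniform table
for _ in range(50):
    fitted *= (row_sums / fitted.sum(axis=1))[:, None]   # match row margins
    fitted *= col_sums / fitted.sum(axis=0)              # match column margins

# For the independence model the MLE is (row total * column total) / grand total.
expected = np.outer(row_sums, col_sums) / observed.sum()
print(np.allclose(fitted, expected))     # True
```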

  1. Wind shear measuring on board an airliner

    NASA Technical Reports Server (NTRS)

    Krauspe, P.

    1984-01-01

    A measurement technique which continuously determines the wind vector on board an airliner during takeoff and landing is introduced. Its implementation is intended to deliver sufficient statistical background concerning low frequency wind changes in the atmospheric boundary layer and extended knowledge about deterministic wind shear modeling. The wind measurement scheme is described and the adaptation of apparatus onboard an A300 airbus is shown. Preliminary measurements made during level flight demonstrate the validity of the method.

  2. The criterion of subscale sufficiency and its application to the relationship between static capillary pressure, saturation and interfacial areas.

    PubMed

    Kurzeja, Patrick

    2016-05-01

    Modern imaging techniques, increased simulation capabilities and extended theoretical frameworks naturally drive the development of multiscale modelling with the question: which new information should be considered? Given the need for concise constitutive relationships and efficient data evaluation, however, one important question is often neglected: which information is sufficient? For this reason, this work introduces the formalized criterion of subscale sufficiency. This criterion states whether a chosen constitutive relationship transfers all necessary information from micro to macroscale within a multiscale framework. It further provides a scheme to improve constitutive relationships. Direct application to static capillary pressure demonstrates usefulness and conditions for subscale sufficiency of saturation and interfacial areas.

  3. Methods for the synthesis of olefins and derivatives

    DOEpatents

    Burk, Mark J; Pharkya, Priti; Van Dien, Stephen J; Burgard, Anthony P; Schilling, Christophe H

    2013-06-04

    The invention provides a method of producing acrylic acid. The method includes contacting fumaric acid with a sufficient amount of ethylene in the presence of a cross-metathesis transformation catalyst to produce about two moles of acrylic acid per mole of fumaric acid. Also provided is an acrylate ester. The method includes contacting fumarate diester with a sufficient amount of ethylene in the presence of a cross-metathesis transformation catalyst to produce about two moles of acrylate ester per mole of fumarate diester. An integrated process for producing acrylic acid or acrylate ester is provided which couples bioproduction of fumaric acid with metathesis transformation. An acrylic acid and an acrylate ester production also is provided.

  4. Methods for the synthesis of olefins and derivatives

    DOEpatents

    Burk, Mark J [San Diego, CA; Pharkya, Priti [San Diego, CA; Van Dien, Stephen J [Encinitas, CA; Burgard, Anthony P [Bellefonte, PA; Schilling, Christophe H [San Diego, CA

    2011-09-27

    The invention provides a method of producing acrylic acid. The method includes contacting fumaric acid with a sufficient amount of ethylene in the presence of a cross-metathesis transformation catalyst to produce about two moles of acrylic acid per mole of fumaric acid. Also provided is an acrylate ester. The method includes contacting fumarate diester with a sufficient amount of ethylene in the presence of a cross-metathesis transformation catalyst to produce about two moles of acrylate ester per mole of fumarate diester. An integrated process for producing acrylic acid or acrylate ester is provided which couples bioproduction of fumaric acid with metathesis transformation. An acrylic acid and an acrylate ester production also is provided.

  5. Methods for synthesis of olefins and derivatives

    DOEpatents

    Burk, Mark J.; Pharkya, Priti; Van Dien, Stephen J.; Burgard, Anthony P.; Schilling, Christophe H.

    2016-06-14

    The invention provides a method of producing acrylic acid. The method includes contacting fumaric acid with a sufficient amount of ethylene in the presence of a cross-metathesis transformation catalyst to produce about two moles of acrylic acid per mole of fumaric acid. Also provided is an acrylate ester. The method includes contacting fumarate diester with a sufficient amount of ethylene in the presence of a cross-metathesis transformation catalyst to produce about two moles of acrylate ester per mole of fumarate diester. An integrated process for producing acrylic acid or acrylate ester is provided which couples bioproduction of fumaric acid with metathesis transformation. An acrylic acid and an acrylate ester production also is provided.

  6. Combined analysis of whole human blood parameters by Raman spectroscopy and spectral-domain low-coherence interferometry

    NASA Astrophysics Data System (ADS)

    Gnyba, M.; Wróbel, M. S.; Karpienko, K.; Milewska, D.; Jedrzejewska-Szczerska, M.

    2015-07-01

    In this article, the simultaneous investigation of blood parameters by two complementary optical methods, Raman spectroscopy and spectral-domain low-coherence interferometry, is presented. The mutual relationship between chemical and physical properties may thus be investigated, because low-coherence interferometry measures the optical properties of the investigated object, while Raman spectroscopy gives information about its molecular composition. A series of in-vitro measurements was carried out to assess whether the accuracy is sufficient for monitoring blood parameters. A large number of blood samples with various hematological parameters, collected from different donors, were measured in order to achieve statistical significance of the results and validation of the methods. Preliminary results indicate the benefits of combining the presented complementary methods and form the basis for development of a multimodal system for rapid and accurate optical determination of selected parameters in whole human blood. Future development of optical systems and multivariate calibration models is planned to extend the number of detected blood parameters and provide robust quantitative multi-component analysis.

  7. Diagnosis by Volatile Organic Compounds in Exhaled Breath from Lung Cancer Patients Using Support Vector Machine Algorithm

    PubMed Central

    Sakumura, Yuichi; Koyama, Yutaro; Tokutake, Hiroaki; Hida, Toyoaki; Sato, Kazuo; Itoh, Toshio; Akamatsu, Takafumi; Shin, Woosuck

    2017-01-01

    Monitoring exhaled breath is a very attractive, noninvasive screening technique for early diagnosis of diseases, especially lung cancer. However, the technique provides insufficient accuracy because the exhaled air has many crucial volatile organic compounds (VOCs) at very low concentrations (ppb level). We analyzed the breath exhaled by lung cancer patients and healthy subjects (controls) using gas chromatography/mass spectrometry (GC/MS), and performed a subsequent statistical analysis to diagnose lung cancer based on the combination of multiple lung cancer-related VOCs. We detected 68 VOCs as marker species using GC/MS analysis. We reduced the number of VOCs and used support vector machine (SVM) algorithm to classify the samples. We observed that a combination of five VOCs (CHN, methanol, CH3CN, isoprene, 1-propanol) is sufficient for 89.0% screening accuracy, and hence, it can be used for the design and development of a desktop GC-sensor analysis system for lung cancer. PMID:28165388
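
A hedged sketch of the classification step with simulated data: an SVM on five VOC features, named after the five-compound panel reported, scored by cross-validated accuracy. The concentration distributions, class sizes and hyperparameters are invented for illustration.

```python
# Sketch: SVM classification of simulated five-VOC breath profiles.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
FEATURES = ["CHN", "methanol", "CH3CN", "isoprene", "1-propanol"]

# Simulated ppb-level concentrations for 60 controls and 60 patients.
controls = rng.lognormal(mean=0.0, sigma=0.4, size=(60, 5))
patients = rng.lognormal(mean=0.5, sigma=0.4, size=(60, 5))
X = np.vstack([controls, patients])
y = np.array([0] * 60 + [1] * 60)        # 0 = healthy, 1 = lung cancer

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
print(f"{len(FEATURES)}-VOC panel, cross-validated accuracy: {acc:.2f}")
```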

  8. Indoor radon and childhood leukaemia.

    PubMed

    Raaschou-Nielsen, Ole

    2008-01-01

    This paper summarises the epidemiological literature on domestic exposure to radon and risk for childhood leukaemia. The results of 12 ecological studies show a consistent pattern of higher incidence and mortality rates for childhood leukaemia in areas with higher average indoor radon concentrations. Although the results of such studies are useful to generate hypotheses, they must be interpreted with caution, as the data were aggregated and analysed for geographical areas and not for individuals. The seven available case-control studies of childhood leukaemia with measurement of radon concentrations in the residences of cases and controls gave mixed results, however, with some indication of a weak (relative risk < 2) association with acute lymphoblastic leukaemia. The epidemiological evidence to date suggests that an association between indoor exposure to radon and childhood leukaemia might exist, but is weak. More case-control studies are needed, with sufficient statistical power to detect weak associations and based on designs and methods that minimise misclassification of exposure and provide a high participation rate and low potential selection bias.

  9. Single-Molecule Imaging of an in Vitro-Evolved RNA Aptamer Reveals Homogeneous Ligand Binding Kinetics

    PubMed Central

    2009-01-01

    Many studies of RNA folding and catalysis have revealed conformational heterogeneity, metastable folding intermediates, and long-lived states with distinct catalytic activities. We have developed a single-molecule imaging approach for investigating the functional heterogeneity of in vitro-evolved RNA aptamers. Monitoring the association of fluorescently labeled ligands with individual RNA aptamer molecules has allowed us to record binding events over the course of multiple days, thus providing sufficient statistics to quantitatively define the kinetic properties at the single-molecule level. The ligand binding kinetics of the highly optimized RNA aptamer studied here displays a remarkable degree of uniformity and lack of memory. Such homogeneous behavior is quite different from the heterogeneity seen in previous single-molecule studies of naturally derived RNA and protein enzymes. The single-molecule methods we describe may be of use in analyzing the distribution of functional molecules in heterogeneous evolving populations or even in unselected samples of random sequences. PMID:19572753

  10. Composition and energy spectra of cosmic ray nuclei above 500 GeV/nucleon from the JACEE emulsion chambers

    NASA Technical Reports Server (NTRS)

    Burnett, T. H.; Dake, S.; Derrickson, J. H.; Fountain, W. F.; Fuki, M.; Gregory, J. C.; Hayashi, T.; Holynski, R.; Iwai, J.; Jones, W. V.

    1985-01-01

    The composition and energy spectra of the charge groups (C-O), (Ne-S), and (Z approximately 17) above 500 GeV/nucleon from the experiments of the JACEE series of balloon-borne emulsion chambers are reported. Studies of cosmic ray elemental composition at higher energies provide information on propagation through interstellar space, acceleration mechanisms, and their sources. One of the present interests is the elemental composition at energies above 100 GeV/nucleon. Statistically sufficient data in this energy region can be decisive in judging propagation models from secondary/primary ratios and source spectra (acceleration mechanism), as well as speculative contributions of different sources from primary/primary ratios. At much higher energies, i.e., around 10^15 eV, data from direct observation will give hints on the knee problem, as to whether they favor an escape effect possibly governed by magnetic rigidity above 10^16 eV.

  11. The 2-d CCD Data Reduction Cookbook

    NASA Astrophysics Data System (ADS)

    Davenhall, A. C.; Privett, G. J.; Taylor, M. B.

    This cookbook presents simple recipes and scripts for reducing direct images acquired with optical CCD detectors. Using these recipes and scripts you can correct un-processed images obtained from CCDs for various instrumental effects to retrieve an accurate picture of the field of sky observed. The recipes and scripts use standard software available at all Starlink sites. The topics covered include: creating and applying bias and flat-field corrections, registering frames and creating a stack or mosaic of registered frames. Related auxiliary tasks, such as converting between different data formats, displaying images and calculating image statistics are also presented. In addition to the recipes and scripts, sufficient background material is presented to explain the procedures and techniques used. The treatment is deliberately practical rather than theoretical, in keeping with the aim of providing advice on the actual reduction of observations. Additional material outlines some of the differences between using conventional optical CCDs and the similar arrays used to observe at infrared wavelengths.
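
A minimal NumPy sketch of the two core calibration steps the cookbook covers, bias subtraction and flat-fielding, applied to synthetic frames. Real Starlink recipes operate on NDF files with dedicated tools; the array-level arithmetic below only illustrates the idea.

```python
# Sketch: master-frame combination, bias subtraction and flat-fielding.
import numpy as np

def master_frame(frames):
    """Median-combine a stack of calibration frames."""
    return np.median(np.stack(frames), axis=0)

def calibrate(raw, master_bias, master_flat):
    """Bias-subtract and flat-field a raw science frame."""
    flat = master_flat - master_bias
    flat /= np.median(flat)              # normalize the flat to unit median
    return (raw - master_bias) / flat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bias_frames = [rng.normal(300, 5, (64, 64)) for _ in range(5)]
    flat_frames = [rng.normal(10300, 50, (64, 64)) for _ in range(5)]
    raw = rng.normal(1300, 30, (64, 64))
    science = calibrate(raw, master_frame(bias_frames), master_frame(flat_frames))
    print(science.mean(), science.std())  # simple image statistics
```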

  12. Diagnosis by Volatile Organic Compounds in Exhaled Breath from Lung Cancer Patients Using Support Vector Machine Algorithm.

    PubMed

    Sakumura, Yuichi; Koyama, Yutaro; Tokutake, Hiroaki; Hida, Toyoaki; Sato, Kazuo; Itoh, Toshio; Akamatsu, Takafumi; Shin, Woosuck

    2017-02-04

    Monitoring exhaled breath is a very attractive, noninvasive screening technique for early diagnosis of diseases, especially lung cancer. However, the technique provides insufficient accuracy because the exhaled air has many crucial volatile organic compounds (VOCs) at very low concentrations (ppb level). We analyzed the breath exhaled by lung cancer patients and healthy subjects (controls) using gas chromatography/mass spectrometry (GC/MS), and performed a subsequent statistical analysis to diagnose lung cancer based on the combination of multiple lung cancer-related VOCs. We detected 68 VOCs as marker species using GC/MS analysis. We reduced the number of VOCs and used support vector machine (SVM) algorithm to classify the samples. We observed that a combination of five VOCs (CHN, methanol, CH₃CN, isoprene, 1-propanol) is sufficient for 89.0% screening accuracy, and hence, it can be used for the design and development of a desktop GC-sensor analysis system for lung cancer.

  13. Comparison between satellite and instrumental solar irradiance data at the city of Athens, Greece

    NASA Astrophysics Data System (ADS)

    Markonis, Yannis; Dimoulas, Thanos; Atalioti, Athina; Konstantinou, Charalampos; Kontini, Anna; Pipini, Magdalini-Io; Skarlatou, Eleni; Sarantopoulos, Vasilis; Tzouka, Katerina; Papalexiou, Simon; Koutsoyiannis, Demetris

    2015-04-01

    In this study, we examine and compare the statistical properties of satellite and instrumental solar irradiance data at the capital of Greece, Athens. Our aim is to determine whether satellite data are sufficient for the requirements of solar energy modelling applications. To this end we estimate the corresponding probability density functions, the auto-correlation functions and the parameters of some fitted simple stochastic models. We also investigate the effect of sample size to the variance in the temporal interpolation of daily time series. Finally, as an alternative, we examine if temperature can be used as a better predictor for the daily irradiance non-seasonal component instead of the satellite data. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
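
A small sketch of the kind of comparison described, with synthetic daily series standing in for the satellite and ground-station data: empirical quantiles and the lag-1 autocorrelation are computed for each series. The series construction and noise levels are arbitrary assumptions.

```python
# Sketch: compare two daily irradiance series by quantiles and autocorrelation.
import numpy as np

def autocorrelation(x, max_lag=10):
    """Biased sample autocorrelation function up to max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:-k], x[k:]) / (len(x) * var) if k else 1.0
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(3)
n = 365
ground = 200 + 80 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 25, n)
satellite = ground + rng.normal(0, 15, n)     # noisier proxy of the same signal

for name, series in [("ground", ground), ("satellite", satellite)]:
    q = np.percentile(series, [5, 50, 95])
    print(name, "quantiles:", q.round(1),
          "lag-1 ACF:", autocorrelation(series)[1].round(2))
```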

  14. Towards real-time metabolic profiling of a biopsy specimen during a surgical operation by 1H high resolution magic angle spinning nuclear magnetic resonance: a case report

    PubMed Central

    2012-01-01

    Introduction Providing information on cancerous tissue samples during a surgical operation can help surgeons delineate the limits of a tumoral invasion more reliably. Here, we describe the use of metabolic profiling of a colon biopsy specimen by high resolution magic angle spinning nuclear magnetic resonance spectroscopy to evaluate tumoral invasion during a simulated surgical operation. Case presentation Biopsy specimens (n = 9) originating from the excised right colon of a 66-year-old Caucasian women with an adenocarcinoma were automatically analyzed using a previously built statistical model. Conclusions Metabolic profiling results were in full agreement with those of a histopathological analysis. The time-response of the technique is sufficiently fast for it to be used effectively during a real operation (17 min/sample). Metabolic profiling has the potential to become a method to rapidly characterize cancerous biopsies in the operation theater. PMID:22257563

  15. Measurements of the free stream fluctuations above a turbulent boundary layer

    NASA Technical Reports Server (NTRS)

    Wood, David H.; Westphal, Russell V.

    1987-01-01

    This paper investigates the velocity fluctuations in the free stream above an incompressible turbulent boundary layer developing at constant pressure. It is assumed that the fluctuations receive contributions from three statistically independent sources: (1) one-dimensional unsteadiness, (2) free stream turbulence, and (3) the potential motion induced by the turbulent boundary layer. Measurements were made in a wind tunnel with a root-mean-square level of the axial velocity fluctuations of about 0.2 percent. All three velocity components were measured using an X-wire probe. The unsteadiness was determined from the spanwise covariance of the axial velocity, measured using two single wire probes. The results show that it is possible to separate the contributions to the r.m.s. level of the velocity fluctuations, without resorting to the dubious technique of high-pass filtering. The separation could be extended to the spectral densities of the contributions, if measurements of sufficient accuracy were available. The Appendix provides a general guide for the measurement of small free stream fluctuation levels.

  16. Down the Tubes: Vetting the Apparent Water-rich Parent Body being Accreted by the White Dwarf GD 16

    NASA Astrophysics Data System (ADS)

    Melis, Carl

    2015-10-01

    How water is distributed in a planetary system critically affects the formation, evolution, and habitability of its constituent rocky bodies. White dwarf stars provide a unique method to probe the prevalence of water-rich rocky bodies outside of our Solar system and where they preferentially reside in a planetary system. However, as evidenced by the case of GD 362, some parent bodies that at first glance might appear to be water-rich can actually be quite water-scarce. At this time there are only a small number of plausibly water-rich rocky bodies that are being actively accreted by their host white dwarf star. Given such a sample size it is crucial to characterize each one in sufficient detail to remove interlopers like GD 362 that might otherwise affect future statistical analyses. In this proposal we seek to vet GD 16, a water-rich candidate yet to be observed with HST-COS that is the brightest remaining such target in the UV.

  17. Explicit formula for the Holevo bound for two-parameter qubit-state estimation problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, Jun, E-mail: junsuzuki@uec.ac.jp

    The main contribution of this paper is to derive an explicit expression for the fundamental precision bound, the Holevo bound, for estimating any two-parameter family of qubit mixed states in terms of quantum versions of Fisher information. The obtained formula depends solely on the symmetric logarithmic derivative (SLD) Fisher information, the right logarithmic derivative (RLD) Fisher information, and a given weight matrix. This result immediately provides necessary and sufficient conditions for two important classes of quantum statistical models: those in which the Holevo bound coincides with the SLD Cramér-Rao bound, and those in which it coincides with the RLD Cramér-Rao bound. One of the important results of this paper is that a general model outside these two special cases exhibits an unexpected property: the structure of the Holevo bound changes smoothly as the weight matrix varies. In particular, it always coincides with the RLD Cramér-Rao bound for a certain choice of the weight matrix. Several examples illustrate these findings.
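
    For orientation, a commonly quoted general form of the Holevo bound is sketched below in LaTeX. The notation is generic rather than the paper's, and the explicit two-parameter qubit formula derived in the paper is not reproduced here.

      \[
        c_H(W) = \min_{X}\Big\{ \operatorname{Tr}\!\big[W\,\operatorname{Re} Z_\theta[X]\big]
                 + \big\lVert \sqrt{W}\,\operatorname{Im} Z_\theta[X]\,\sqrt{W} \big\rVert_1 \Big\},
        \qquad
        Z_\theta[X]_{jk} = \operatorname{Tr}\!\big[\rho_\theta X_j X_k\big],
      \]
      where the minimum runs over locally unbiased observables X = (X_1, X_2) and \(\lVert\cdot\rVert_1\) is the trace norm. The SLD and RLD Cramér-Rao bounds against which it is compared are \(\operatorname{Tr}[W J_{\mathrm{SLD}}^{-1}]\) and \(\operatorname{Tr}[W \operatorname{Re} J_{\mathrm{RLD}}^{-1}] + \lVert \sqrt{W}\,\operatorname{Im} J_{\mathrm{RLD}}^{-1}\sqrt{W} \rVert_1\), respectively.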

  18. An Ultra-Wideband Cross-Correlation Radiometer for Mesoscopic Experiments

    NASA Astrophysics Data System (ADS)

    Toonen, Ryan; Haselby, Cyrus; Qin, Hua; Eriksson, Mark; Blick, Robert

    2007-03-01

    We have designed, built and tested a cross-correlation radiometer for detecting statistical order in the quantum fluctuations of mesoscopic experiments at sub-Kelvin temperatures. Our system utilizes a fully analog front-end--operating over the X- and Ku-bands (8 to 18 GHz)--for computing the cross-correlation function. Digital signal processing techniques are used to provide robustness against instrumentation drifts and offsets. The economized version of our instrument can measure, with sufficient correlation efficiency, noise signals having power levels as low as 10 fW. We show that, if desired, we can improve this performance by including cryogenic preamplifiers which boost the signal-to-noise ratio near the signal source. By adding a few extra components, we can measure both the real and imaginary parts of the cross-correlation function--improving the overall signal-to-noise ratio by a factor of sqrt[2]. We demonstrate the utility of our cross-correlator with noise power measurements from a quantum point contact.
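
    As a toy illustration of why cross-correlating two channels recovers a weak common signal buried in much stronger uncorrelated amplifier noise, here is a minimal numerical sketch in Python. The signal levels and sample count are invented for illustration; the real instrument operates on analog X- and Ku-band signals.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 2_000_000                              # number of samples averaged

      # Hypothetical set-up: a weak common noise source seen by two channels,
      # each dominated by its own independent amplifier noise (arbitrary units).
      common = 0.03 * rng.standard_normal(n)     # shared signal, power ~9e-4
      ch_a = common + rng.standard_normal(n)     # channel A with its own noise
      ch_b = common + rng.standard_normal(n)     # channel B with its own noise

      # Zero-lag cross-correlation: uncorrelated noise averages toward zero as
      # 1/sqrt(n), leaving an estimate of the common signal power.
      cross_power = np.mean(ch_a * ch_b)
      print(f"true common power  : {0.03**2:.2e}")
      print(f"cross-corr estimate: {cross_power:.2e}")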

  19. Lunar Meteoroid Impact Observations and the Flux of Kilogram-Size Meteoroids

    NASA Technical Reports Server (NTRS)

    Suggs, Rob; Cooke, Bill; Koehler, Heather; Moser, Danielle; Suggs, Ron; Swift, Wes

    2010-01-01

    Meteor showers dominate the environment in this size range and explain the evening/morning flux asymmetry of 1.5:1. With sufficient numbers of impacts, this technique can help determine the population index for some showers. The measured flux of meteoroids in the 100 g to kilograms range is consistent with other observations. We have a fruitful observing program underway which has significantly increased the number of lunar impacts observed: over 200 impacts have been recorded in about 4 years. This analysis reports on the 115 impacts recorded under photometric conditions during the first 3 full years of operation. We plan to continue for the foreseeable future as follows: 1) Run a detailed model to try to explain the concentration near the trailing limb; 2) Build up statistics to better understand the meteor shower environment; 3) Provide support for robotic seismometers and dust missions; and 4) Deploy near-infrared and visible cameras with a dichroic beamsplitter to the 0.5 m telescope in New Mexico.

  20. Why the Three-Point Rule Failed to Sufficiently Reduce the Number of Draws in Soccer: An Application of Prospect Theory.

    PubMed

    Riedl, Dennis; Heuer, Andreas; Strauss, Bernd

    2015-06-01

    Incentives guide human behavior by altering the level of external motivation. We apply the idea of loss aversion from prospect theory (Kahneman & Tversky, 1979) to the point reward systems in soccer and investigate the controversial impact of the three-point rule on reducing the fraction of draws in this sport. Making use of the Poisson nature of goal scoring, we compared empirical results with theoretically deduced draw ratios from 24 countries encompassing 20 seasons each (N = 118,148 matches). The rule change yielded a slight reduction in the ratio of draws, but despite adverse incentives, still 18% more matches ended drawn than expected, t(23) = 11.04, p < .001, d = 2.25, consistent with prospect theory assertions. Alternative point systems that manipulated incentives for losses yielded reductions at or below statistical expectation. This provides support for the deduced concept of how arbitrary aims, such as the reduction of draws in the world's soccer leagues, could be more effectively accomplished than currently attempted.
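
    The "Poisson nature of goal scoring" mentioned above yields a simple baseline for the expected fraction of draws: if the two teams' goal counts are modeled as independent Poisson variables, the draw probability is the sum over equal scores. A minimal sketch with illustrative scoring rates (not the paper's fitted league parameters):

      import numpy as np
      from scipy.stats import poisson

      def draw_probability(lam_home, lam_away, max_goals=25):
          """P(draw) for independent Poisson-distributed goal counts."""
          k = np.arange(max_goals + 1)
          return float(np.sum(poisson.pmf(k, lam_home) * poisson.pmf(k, lam_away)))

      # Illustrative scoring rates only (goals per match for each side).
      print(round(draw_probability(1.5, 1.1), 3))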

  1. Building a medical image processing algorithm verification database

    NASA Astrophysics Data System (ADS)

    Brown, C. Wayne

    2000-06-01

    The design of a database containing head Computed Tomography (CT) studies is presented, along with a justification for the database's composition. The database will be used to validate software algorithms that screen normal head CT studies from studies that contain pathology. The database is designed to have the following major properties: (1) a size sufficient for statistical viability, (2) inclusion of both normal (no pathology) and abnormal scans, (3) inclusion of scans affected by equipment malfunction, technologist error, and uncooperative patients, (4) inclusion of data sets from multiple scanner manufacturers, (5) inclusion of data sets from different gender and age groups, and (6) three independent diagnoses of each data set. Designed correctly, the database will provide a partial basis for FDA (United States Food and Drug Administration) approval of image processing algorithms for clinical use. Our goal for the database is to prove the viability of screening head CTs for normal anatomy using computer algorithms. To put this work into context, a classification scheme for 'computer aided diagnosis' systems is proposed.

  2. Extremal optimization for Sherrington-Kirkpatrick spin glasses

    NASA Astrophysics Data System (ADS)

    Boettcher, S.

    2005-08-01

    Extremal Optimization (EO), a new local search heuristic, is used to approximate ground states of the mean-field spin glass model introduced by Sherrington and Kirkpatrick. The implementation extends the applicability of EO to systems with highly connected variables. Approximate ground states of sufficient accuracy and with statistical significance are obtained for systems with more than N=1000 variables using ±J bonds. The data reproduces the well-known Parisi solution for the average ground state energy of the model to about 0.01%, providing a high degree of confidence in the heuristic. The results support to less than 1% accuracy rational values of ω=2/3 for the finite-size correction exponent, and of ρ=3/4 for the fluctuation exponent of the ground state energies, neither one of which has been obtained analytically yet. The probability density function for ground state energies is highly skewed and identical within numerical error to the one found for Gaussian bonds. But comparison with infinite-range models of finite connectivity shows that the skewness is connectivity-dependent.
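
    For readers unfamiliar with EO, a minimal tau-EO sketch for a single Sherrington-Kirkpatrick instance is given below. The system size, tau, step count, and the 1/sqrt(N) coupling normalization (under which the Parisi ground-state energy per spin is roughly -0.763) are illustrative assumptions, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(1)

      def tau_eo_sk(n=64, tau=1.4, steps=20_000):
          """Minimal tau-EO for one +/-J Sherrington-Kirkpatrick instance."""
          J = rng.choice([-1.0, 1.0], size=(n, n)) / np.sqrt(n)  # +/-J, 1/sqrt(N) scaling
          J = np.triu(J, 1)
          J = J + J.T                                # symmetric couplings, zero diagonal
          s = rng.choice([-1.0, 1.0], size=n)        # random initial spin configuration

          def energy(spins):
              return -0.5 * spins @ J @ spins

          best_e = energy(s)
          ranks = np.arange(1, n + 1, dtype=float)
          p = ranks ** -tau
          p /= p.sum()                               # P(pick rank k) ~ k^(-tau)

          for _ in range(steps):
              fitness = s * (J @ s)                  # E = -(1/2) * sum_i fitness_i
              worst_first = np.argsort(fitness)      # ascending: poorly fit spins first
              s[worst_first[rng.choice(n, p=p)]] *= -1.0   # flip the selected spin
              best_e = min(best_e, energy(s))
          return best_e / n

      # This tiny run only illustrates the mechanics; it will not reach the
      # accuracy reported in the paper.
      print(f"best energy per spin found: {tau_eo_sk():.4f}")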

  3. Satellite orbit and data sampling requirements

    NASA Technical Reports Server (NTRS)

    Rossow, William

    1993-01-01

    Climate forcings and feedbacks vary over a wide range of time and space scales. The operation of non-linear feedbacks can couple variations at widely separated time and space scales and cause climatological phenomena to be intermittent. Consequently, monitoring of global, decadal changes in climate requires global observations that cover the whole range of space-time scales and are continuous over several decades. The sampling of smaller space-time scales must have sufficient statistical accuracy to measure the small changes in the forcings and feedbacks anticipated in the next few decades, while continuity of measurements is crucial for unambiguous interpretation of climate change. Shorter records of monthly and regional (500-1000 km) measurements with similar accuracies can also provide valuable information about climate processes, when 'natural experiments' such as large volcanic eruptions or El Ninos occur. In this section existing satellite datasets and climate model simulations are used to test the satellite orbits and sampling required to achieve accurate measurements of changes in forcings and feedbacks at monthly frequency and 1000 km (regional) scale.

  4. What is biomedical informatics?

    PubMed Central

    Bernstam, Elmer V.; Smith, Jack W.; Johnson, Todd R.

    2009-01-01

    Biomedical informatics lacks a clear and theoretically grounded definition. Many proposed definitions focus on data, information, and knowledge, but do not provide an adequate definition of these terms. Leveraging insights from the philosophy of information, we define informatics as the science of information, where information is data plus meaning. Biomedical informatics is the science of information as applied to or studied in the context of biomedicine. Defining the object of study of informatics as data plus meaning clearly distinguishes the field from related fields, such as computer science, statistics and biomedicine, which have different objects of study. The emphasis on data plus meaning also suggests that biomedical informatics problems tend to be difficult when they deal with concepts that are hard to capture using formal, computational definitions. In other words, problems where meaning must be considered are more difficult than problems where manipulating data without regard for meaning is sufficient. Furthermore, the definition implies that informatics research, teaching, and service should focus on biomedical information as data plus meaning rather than only computer applications in biomedicine. PMID:19683067

  5. Contribution of Modis Satellite Image to Estimate the Daily Air Temperature in the Casablanca City, Morocco

    NASA Astrophysics Data System (ADS)

    Bahi, Hicham; Rhinane, Hassan; Bensalmia, Ahmed

    2016-10-01

    Air temperature is considered to be an essential variable for the study and analysis of meteorological regimes and records. However, implementing daily monitoring of this variable is very difficult: it requires a sufficient density of measurement stations, meteorological parks, and favourable logistics. The present work aims to establish a relationship between day and night land surface temperatures from MODIS data and daily measurements of air temperature acquired between 2011 and 2012 and provided by the Department of National Meteorology (DMN) of Casablanca, Morocco. The results of the statistical analysis show significant interdependence for night observations, with a correlation coefficient of R2=0.921 and root mean square error RMSE=1.503 for Tmin, while the physical magnitude estimated from daytime MODIS observations shows a relatively coarse error, with R2=0.775 and RMSE=2.037 for Tmax. A method based on Gaussian process regression was applied to compute the spatial distribution of air temperature from MODIS throughout the city of Casablanca.
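
    To illustrate the kind of Gaussian process regression step described above, here is a minimal scikit-learn sketch on made-up LST/air-temperature pairs. The kernel, noise level, and data are assumptions for illustration, not the study's configuration.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)

      # Hypothetical stand-in data: night-time MODIS LST (degC) vs. station Tmin (degC).
      lst_night = rng.uniform(5, 30, size=(120, 1))
      tmin = 0.9 * lst_night[:, 0] + 1.5 + rng.normal(0, 1.2, size=120)

      kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=1.0)
      gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
      gpr.fit(lst_night, tmin)

      # Predict air temperature (with uncertainty) on a grid of LST values,
      # as one would for every pixel of an LST map.
      grid = np.linspace(5, 30, 6).reshape(-1, 1)
      mean, std = gpr.predict(grid, return_std=True)
      print(np.round(mean, 2), np.round(std, 2))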

  6. Crossed-uncrossed difference (CUD) in a new light: anatomy of the negative CUD in Poffenberger's paradigm.

    PubMed

    Derakhshan, I

    2006-03-01

    Crossed-uncrossed differences (CUDs) have long been used as a surrogate for the interhemispheric transfer time (IHTT). Evidence is presented that macular vision is the province of the major hemisphere, wherein all commands are initiated regardless of the laterality of the effectors of those commands. Using clinical and time-resolved data, it is also shown that this arrangement (i.e., neural handedness) corresponds to the subject's avowed (self-declared) behavioral handedness only in a statistical sense, with a substantial minority of people displaying a disparity between neural and behavioral handedness. Evidence is provided that the negative CUD in previously reported studies reflected such incongruity in the subjects studied. Thus, to lateralize the command center it is sufficient to determine the reaction times of two symmetrically located effectors on the body. The side with the longer reaction time is ipsilateral to the major hemisphere, with the difference between the two sides commensurate with the transcallosal IHTT.

  7. The quantitative architecture of centromeric chromatin

    PubMed Central

    Bodor, Dani L; Mata, João F; Sergeev, Mikhail; David, Ana Filipa; Salimian, Kevan J; Panchenko, Tanya; Cleveland, Don W; Black, Ben E; Shah, Jagesh V; Jansen, Lars ET

    2014-01-01

    The centromere, responsible for chromosome segregation during mitosis, is epigenetically defined by CENP-A containing chromatin. The amount of centromeric CENP-A has direct implications for both the architecture and epigenetic inheritance of centromeres. Using complementary strategies, we determined that typical human centromeres contain ∼400 molecules of CENP-A, which is controlled by a mass-action mechanism. This number, despite representing only ∼4% of all centromeric nucleosomes, forms a ∼50-fold enrichment to the overall genome. In addition, although pre-assembled CENP-A is randomly segregated during cell division, this amount of CENP-A is sufficient to prevent stochastic loss of centromere function and identity. Finally, we produced a statistical map of CENP-A occupancy at a human neocentromere and identified nucleosome positions that feature CENP-A in a majority of cells. In summary, we present a quantitative view of the centromere that provides a mechanistic framework for both robust epigenetic inheritance of centromeres and the paucity of neocentromere formation. DOI: http://dx.doi.org/10.7554/eLife.02137.001 PMID:25027692

  8. Forensic Discrimination of Latent Fingerprints Using Laser-Induced Breakdown Spectroscopy (LIBS) and Chemometric Approaches.

    PubMed

    Yang, Jun-Ho; Yoh, Jack J

    2018-01-01

    A novel technique is reported for separating overlapping latent fingerprints using chemometric approaches that combine laser-induced breakdown spectroscopy (LIBS) and multivariate analysis. The LIBS technique provides real-time analysis and high-frequency scanning, as well as data on the chemical composition of overlapping latent fingerprints. These spectra offer valuable information for the classification and reconstruction of overlapping latent fingerprints when appropriate multivariate statistical analysis is applied. The current study employs principal component analysis and partial least squares methods for the classification of latent fingerprints from the LIBS spectra. The technique was successfully demonstrated through a classification study of four distinct latent fingerprints using classification methods such as soft independent modeling of class analogy (SIMCA) and partial least squares discriminant analysis (PLS-DA). The method yielded an accuracy of more than 85% and proved to be sufficiently robust. Furthermore, through laser scanning analysis at a spatial interval of 125 µm, the overlapping fingerprints were reconstructed as separate two-dimensional forms.
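
    A compressed sketch of the PCA/PLS-DA classification step is shown below. The "spectra" are random stand-ins, and PLS-DA is implemented in the usual way (regressing one-hot class labels with PLSRegression and assigning by argmax); none of this reproduces the study's preprocessing or data.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)

      # Stand-in "LIBS spectra": rows = shots, columns = wavelength channels;
      # each of the four fingerprint classes gets a crude class-specific band.
      X = rng.normal(size=(80, 200))
      y = np.repeat([0, 1, 2, 3], 20)
      for c in range(4):
          X[y == c, c * 40:(c + 1) * 40] += 1.5

      Xs = StandardScaler().fit_transform(X)
      pca = PCA(n_components=5).fit(Xs)                  # exploratory projection
      print("PCA explained variance:", np.round(pca.explained_variance_ratio_, 2))

      # PLS-DA: regress one-hot class membership on the spectra, classify by argmax.
      Y = np.eye(4)[y]
      pls = PLSRegression(n_components=5).fit(Xs, Y)
      pred = pls.predict(Xs).argmax(axis=1)
      print("training accuracy:", (pred == y).mean())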

  9. Student Performance on Conceptual Questions: Does Instruction Matter?

    NASA Astrophysics Data System (ADS)

    Heron, Paula

    2012-10-01

    As part of the tutorial component of introductory calculus-based physics at the University of Washington, students take weekly pretests that consist of conceptual questions. Pretests are so named because they precede each tutorial, but they are frequently administered after lecture instruction. Many variables associated with class composition and prior instruction could, in principle, affect student performance. Nonetheless, the results are often found to be ``essentially the same'' in all classes. Selected questions for which we have accumulated thousands of responses, from dozens of classes representing different conditions with respect to the textbook in use, the amount of prior instruction, etc., serve as examples. A preliminary analysis suggests that the variation in performance across all classes is essentially random. No statistically significant difference is observed between results obtained before relevant instruction begins and after it has been completed. The results provide evidence that exposure to concepts in lecture and textbook is not sufficient to ensure an improvement in performance on questions that require qualitative reasoning.

  10. Assessment Methodology for Process Validation Lifecycle Stage 3A.

    PubMed

    Sayeed-Desta, Naheed; Pazhayattil, Ajay Babu; Collins, Jordan; Chen, Shu; Ingram, Marzena; Spes, Jana

    2017-07-01

    The paper introduces evaluation methodologies and associated statistical approaches for process validation lifecycle Stage 3A. The assessment tools proposed can be applied to newly developed and launched small-molecule as well as bio-pharma products, where substantial process and product knowledge has been gathered. The following elements may be included in Stage 3A: determination of the number of Stage 3A batches; evaluation of critical material attributes, critical process parameters, and critical quality attributes; in vivo-in vitro correlation; estimation of inherent process variability (IPV) and the PaCS index; process capability and quality dashboard (PCQd); and an enhanced control strategy. The US FDA guidance on Process Validation: General Principles and Practices (January 2011) encourages applying previous credible experience with suitably similar products and processes. A complete Stage 3A evaluation is a valuable resource for product development and future risk mitigation of similar products and processes. The elements of the 3A assessment were developed to address industry and regulatory guidance requirements. The conclusions made provide sufficient information to make a scientific and risk-based decision on product robustness.

  11. Is Inferior Alveolar Nerve Block Sufficient for Routine Dental Treatment in 4- to 6-year-old Children?

    PubMed

    Pourkazemi, Maryam; Erfanparast, Leila; Sheykhgermchi, Sanaz; Ghanizadeh, Milad

    2017-01-01

    Pain control is one of the most important aspects of behavior management in children. The most common way to achieve pain control is by using local anesthetics (LA). Many studies describe that the buccal nerve innervates the buccal gingiva and mucosa of the mandible over a variable extent, from the vicinity of the lower third molar to the lower canine. Given the importance of appropriate and complete LA in child behavior management, this study examined how often the buccal gingiva of the primary mandibular molars and canine is anesthetized after an inferior alveolar nerve block injection in 4- to 6-year-old children. In this descriptive cross-sectional study, 220 children aged 4 to 6 years were randomly selected and enrolled. The inferior alveolar nerve block was injected with the same method and standards for all children, and after ensuring the success of the block injection, anesthesia of the buccal mucosa of the primary molars and canine was examined with a stick test, with the child's reaction scored using the sound, eye, motor (SEM) scale. The data were analyzed using descriptive statistics in the Statistical Package for the Social Sciences (SPSS), version 21. The area most often left unanesthetized was the distobuccal gingiva of the second primary molars; the area least often left unanesthetized was the gingiva of the primary canine. According to this study, in 15 to 30% of cases the buccal mucosa of the primary mandibular molars is not anesthetized after an inferior alveolar nerve block injection. How to cite this article: Pourkazemi M, Erfanparast L, Sheykhgermchi S, Ghanizadeh M. Is Inferior Alveolar Nerve Block Sufficient for Routine Dental Treatment in 4- to 6-year-old Children? Int J Clin Pediatr Dent 2017;10(4):369-372.

  12. Therapeutic whole-body hypothermia reduces mortality in severe traumatic brain injury if the cooling index is sufficiently high: meta-analyses of the effect of single cooling parameters and their integrated measure.

    PubMed

    Olah, Emoke; Poto, Laszlo; Hegyi, Peter; Szabo, Imre; Hartmann, Petra; Solymar, Margit; Petervari, Erika; Balasko, Marta; Habon, Tamas; Rumbus, Zoltan; Tenk, Judit; Rostas, Ildiko; Weinberg, Jordan; Romanovsky, Andrej A; Garami, Andras

    2018-04-21

    Therapeutic hypothermia has been investigated repeatedly as a tool to improve the outcome of severe traumatic brain injury (TBI), but previous clinical trials and meta-analyses found contradictory results. We aimed to determine the effectiveness of therapeutic whole-body hypothermia on the mortality of adult patients with severe TBI by using a novel approach to meta-analysis. We searched the PubMed, EMBASE, and Cochrane Library databases from inception to February 2017. The identified human studies were evaluated with regard to statistical, clinical, and methodological design to ensure inter-study homogeneity. We extracted data on TBI severity, body temperature, mortality, and cooling parameters; we then calculated the cooling index, an integrated measure of therapeutic hypothermia. A forest plot of all identified studies showed no difference in the outcome of TBI between cooled and not-cooled patients, but inter-study heterogeneity was high. By contrast, a meta-analysis of RCTs that were homogeneous with regard to statistical and clinical design and that precisely reported the cooling protocol showed a decreased odds ratio for mortality with therapeutic hypothermia compared with no cooling. As independent factors, milder and longer cooling and rewarming at < 0.25°C/h were associated with better outcome. Therapeutic hypothermia was beneficial only if the cooling index (a measure of the combination of cooling parameters) was sufficiently high. We conclude that high methodological and statistical inter-study heterogeneity could underlie the contradictory results obtained in previous studies. By analyzing methodologically homogeneous studies, we show that cooling improves the outcome of severe TBI and that this beneficial effect depends on certain cooling parameters and on their integrated measure, the cooling index.
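
    For readers unfamiliar with the pooling step behind such an odds-ratio meta-analysis, a minimal fixed-effect (inverse-variance) sketch is given below. The 2x2 counts are invented for illustration and are not the trial data analyzed in this study.

      import numpy as np

      # Hypothetical per-trial counts: (deaths_cooled, n_cooled, deaths_control, n_control).
      trials = [(20, 100, 30, 100), (15, 80, 22, 85), (9, 60, 14, 58)]

      log_or, weights = [], []
      for a, n1, c, n2 in trials:
          b, d = n1 - a, n2 - c                                 # survivors in each arm
          log_or.append(np.log((a * d) / (b * c)))              # log odds ratio
          weights.append(1 / (1 / a + 1 / b + 1 / c + 1 / d))   # inverse variance

      pooled = np.average(log_or, weights=weights)              # fixed-effect pooled log OR
      se = 1 / np.sqrt(np.sum(weights))
      print(f"pooled OR = {np.exp(pooled):.2f} "
            f"(95% CI {np.exp(pooled - 1.96 * se):.2f} to {np.exp(pooled + 1.96 * se):.2f})")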

  13. Comparative morphometric analysis of 5 interpositional arterial autograft options for adult living donor liver transplantation.

    PubMed

    Imakuma, E S; Bordini, A L; Millan, L S; Massarollo, P C B; Caldini, E T E G

    2014-01-01

    In living donor liver transplantation, the right-sided graft has thin, short vessels, which makes the anastomosis more difficult. In these cases, an interpositional arterial autograft can be used to facilitate the arterial anastomosis, making the procedure easier and avoiding surgical complications. We compared the inferior mesenteric artery (IMA), the splenic artery (SA), the inferior epigastric artery (IEA), the descending branch of the lateral circumflex femoral artery (LCFA), and the proper hepatic artery (PHA) as options for interpositional autografts in living donor liver transplantation. Segments of at least 3 cm of all 5 arteries were harvested from 16 fresh adult cadavers of both genders through standardized dissection. The analyzed measures were proximal diameter, distal diameter, and length. The proximal diameter of the RHA and the distal diameters of the SA, IMA, IEA and LCFA were compared to the distal diameter of the RHA. The proximal and distal diameters of the SA, IEA and LCFA were compared to study the caliber gain of each artery. All arteries except the IMA showed a statistically significant difference in diameter relative to the RHA. Regarding caliber gain, the arteries demonstrated statistically significant differences. All the harvested arteries except the PHA were 3 cm in length. The IMA demonstrated the best compatibility with the RHA in terms of diameter and showed sufficient length to be employed as an interpositional graft. The PHA, the SA, the IEA and the LCFA presented significantly different diameters when compared to the RHA. Among these vessels, only the PHA did not show sufficient mean length. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Statistical analyses of the relative risk.

    PubMed Central

    Gart, J J

    1979-01-01

    Let P1 be the probability of a disease in one population and P2 be the probability of a disease in a second population. The ratio of these quantities, R = P1/P2, is termed the relative risk. We consider first the analyses of the relative risk from retrospective studies. The relation between the relative risk and the odds ratio (or cross-product ratio) is developed. The odds ratio can be considered a parameter of an exponential model possessing sufficient statistics. This permits the development of exact significance tests and confidence intervals in the conditional space. Unconditional tests and intervals are also considered briefly. The consequences of misclassification errors and of ignoring matching or stratifying are also considered. The various methods are extended to the combination of results over strata. Examples of case-control studies testing the association between HL-A frequencies and cancer illustrate the techniques. The parallel analyses of prospective studies are given. If P1 and P2 are small with large sample sizes, the appropriate model is a Poisson distribution. This yields an exponential model with sufficient statistics. Exact conditional tests and confidence intervals can then be developed. Here we consider the case where two populations are compared adjusting for sex differences as well as for strata (or covariate) differences such as age. The methods are applied to two examples: (1) testing in the two sexes the ratio of relative risks of skin cancer in people living at different latitudes, and (2) testing over time the ratio of the relative risks of cancer in two cities, one of which fluoridated its drinking water and one of which did not. PMID:540589
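
    The relation between the relative risk and the odds ratio developed here can be summarized in standard notation as follows (added for orientation):

      \[
        R = \frac{P_1}{P_2}, \qquad
        \psi = \frac{P_1/(1-P_1)}{P_2/(1-P_2)} = R\,\frac{1-P_2}{1-P_1},
      \]
      so that when the disease is rare in both populations (\(P_1, P_2 \ll 1\)) the odds ratio \(\psi\) approximates the relative risk \(R\), which is what makes the odds ratio estimated from retrospective (case-control) data a useful stand-in for \(R\).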

  15. Single photon laser altimeter simulator and statistical signal processing

    NASA Astrophysics Data System (ADS)

    Vacek, Michael; Prochazka, Ivan

    2013-05-01

    Spaceborne altimeters are common instruments onboard deep space rendezvous spacecraft. They provide range and topographic measurements critical to spacecraft navigation. Simultaneously, the receiver part may be utilized for an Earth-to-satellite link, one-way time transfer, and precise optical radiometry. The main advantage of the single photon counting approach is the ability to process signals with very low signal-to-noise ratio, eliminating the need for large telescopes and a high power laser source. Extremely small, rugged and compact microchip lasers can be employed. The major limiting factor, on the other hand, is the acquisition time needed to gather a sufficient volume of data in repetitive measurements so that the data can be processed and evaluated appropriately. Statistical signal processing is adopted to detect signals with average strength much lower than one photon per measurement. A comprehensive simulator design and range signal processing algorithm are presented to identify a mission-specific altimeter configuration. Typical mission scenarios (celestial body surface landing and topographical mapping) are simulated and evaluated. The most promising single photon altimeter applications are low-orbit (˜10 km), low radial velocity (several m/s) topographical mapping (asteroids, Phobos and Deimos) and landing altimetry (˜10 km), where range evaluation repetition rates of ˜100 Hz and 0.1 m precision may be achieved. Moon landing and asteroid Itokawa topographical mapping scenario simulations are discussed in more detail.
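
    A toy sketch of the statistical-accumulation idea (detecting a return that averages far less than one photon per shot by histogramming time-of-flight bins over many shots) is given below. The bin count, photon rates, and detection threshold are illustrative assumptions, not the simulator's parameters.

      import numpy as np

      rng = np.random.default_rng(0)

      n_shots, n_bins = 20_000, 1000
      true_bin = 417                      # range bin containing the surface return
      hist = np.zeros(n_bins, dtype=int)

      for _ in range(n_shots):
          # Background: a Poisson number of photons, uniform over the range gate.
          for b in rng.integers(0, n_bins, size=rng.poisson(0.5)):
              hist[b] += 1
          # Signal: on average far less than one photon per shot.
          if rng.random() < 0.02:
              hist[true_bin] += 1

      # Detection: bins whose counts exceed the mean background by k sigma (Poisson).
      mu, k = hist.mean(), 6.0
      candidates = np.flatnonzero(hist > mu + k * np.sqrt(mu))
      print("detected bins:", candidates, "| true bin:", true_bin)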

  16. Global, regional and national levels and trends of preterm birth rates for 1990 to 2014: protocol for development of World Health Organization estimates.

    PubMed

    Vogel, Joshua P; Chawanpaiboon, Saifon; Watananirun, Kanokwaroon; Lumbiganon, Pisake; Petzold, Max; Moller, Ann-Beth; Thinkhamrop, Jadsada; Laopaiboon, Malinee; Seuc, Armando H; Hogan, Daniel; Tunçalp, Ozge; Allanson, Emma; Betrán, Ana Pilar; Bonet, Mercedes; Oladapo, Olufemi T; Gülmezoglu, A Metin

    2016-06-17

    The official WHO estimates of preterm birth are an essential global resource for assessing the burden of preterm birth and developing public health programmes and policies. This protocol describes the methods that will be used to identify, critically appraise and analyse all eligible preterm birth data, in order to develop global, regional and national level estimates of levels and trends in preterm birth rates for the period 1990-2014. We will conduct a systematic review of civil registration and vital statistics (CRVS) data on preterm birth for all WHO Member States, via national Ministries of Health and Statistics Offices. For Member States with absent, limited or lower-quality CRVS data, a systematic review of surveys and/or research studies will be conducted. Modelling will be used to develop country, regional and global rates for 2014, with time trends for Member States where sufficient data are available. Member States will be invited to review the methodology and provide additional eligible data via a country consultation before final estimates are developed and disseminated. This research will be used to generate estimates of the burden of preterm birth globally for 1990 to 2014. We invite feedback on the methodology described, and call on the public health community to submit pertinent data for consideration. Registration: PROSPERO CRD42015027439. Contact: pretermbirth@who.int.

  17. Bringing evidence to policy to achieve health-related MDGs for all: justification and design of the EPI-4 project in China, India, Indonesia, and Vietnam.

    PubMed

    Thomsen, Sarah; Ng, Nawi; Biao, Xu; Bondjers, Göran; Kusnanto, Hari; Liem, Nguyen Tanh; Mavalankar, Dileep; Målqvist, Mats; Diwan, Vinod

    2013-03-13

    The Millennium Development Goals (MDGs) are monitored using national-level statistics, which have shown substantial improvements in many countries. These statistics may be misleading, however, and may divert resources from disadvantaged populations within the same countries that are showing progress. The purpose of this article is to set out the relevance and design of the "Evidence for Policy and Implementation project (EPI-4)". EPI-4 aims to contribute to the reduction of inequities in the achievement of health-related MDGs in China, India, Indonesia and Vietnam through the promotion of research-informed policymaking. Using a framework provided by the Commission on the Social Determinants of Health (CSDH), we compare national-level MDG targets and results, as well as their social and structural determinants, in China, India, Indonesia and Vietnam. To understand country-level MDG achievements it is useful to analyze their social and structural determinants. This analysis is not sufficient, however, to understand within-country inequities. Specialized analyses are required for this purpose, as is discussion and debate of the results with policymakers, which is the aim of the EPI-4 project. Reducing health inequities requires sophisticated analyses to identify disadvantaged populations within and between countries, and to determine evidence-based solutions that will make a difference. The EPI-4 project hopes to contribute to this goal.

  18. Automated diagnosis of congestive heart failure using dual tree complex wavelet transform and statistical features extracted from 2s of ECG signals.

    PubMed

    Sudarshan, Vidya K; Acharya, U Rajendra; Oh, Shu Lih; Adam, Muhammad; Tan, Jen Hong; Chua, Chua Kuang; Chua, Kok Poo; Tan, Ru San

    2017-04-01

    Identification of alarming features in the electrocardiogram (ECG) signal is extremely significant for the prediction of congestive heart failure (CHF). ECG signal analysis carried out using computer-aided techniques can speed up the diagnosis process and aid in the proper management of CHF patients. Therefore, in this work, a dual tree complex wavelet transform (DTCWT)-based methodology is proposed for automated identification of ECG signals exhibiting CHF versus normal. In the experiment, we performed a DTCWT on ECG segments of 2 s duration up to six levels to obtain the coefficients. From these DTCWT coefficients, statistical features are extracted and ranked using Bhattacharyya, entropy, minimum redundancy maximum relevance (mRMR), receiver-operating characteristic (ROC), Wilcoxon, t-test and reliefF methods. Ranked features are subjected to k-nearest neighbor (KNN) and decision tree (DT) classifiers for automated differentiation of CHF and normal ECG signals. We achieved 99.86% accuracy, 99.78% sensitivity and 99.94% specificity in the identification of CHF-affected ECG signals using 45 features. The proposed method is able to detect CHF patients accurately using only 2 s of ECG signal and hence provides sufficient time for clinicians to further investigate the severity of CHF and treatment options. Copyright © 2017 Elsevier Ltd. All rights reserved.
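
    A compressed sketch of a wavelet-feature/KNN pipeline of this general shape is shown below. Note the substitutions: a plain discrete wavelet transform from PyWavelets stands in for the dual tree complex wavelet transform, the "ECG" segments are synthetic, and the sampling rate, wavelet, features, and k are assumptions rather than the paper's choices.

      import numpy as np
      import pywt
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)
      fs, seg_len = 250, 500                     # 2 s segments at an assumed 250 Hz

      def features(segment, wavelet="db4", level=6):
          """Simple statistics from wavelet sub-bands (DWT standing in for DTCWT)."""
          coeffs = pywt.wavedec(segment, wavelet, level=level)
          feats = []
          for c in coeffs:
              feats += [c.mean(), c.std(), np.mean(np.abs(c)), np.max(np.abs(c))]
          return np.array(feats)

      # Synthetic stand-in segments for two classes ("normal" vs. "CHF-like").
      X, y = [], []
      for label in (0, 1):
          for _ in range(60):
              t = np.arange(seg_len) / fs
              seg = np.sin(2 * np.pi * (1.0 + 4.0 * label) * t) + 0.3 * rng.standard_normal(seg_len)
              X.append(features(seg))
              y.append(label)
      X, y = np.array(X), np.array(y)

      print(cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5).mean())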

  19. Three Essays in Energy Economics and Industrial Organization, with Applications to Electricity and Distribution Networks

    NASA Astrophysics Data System (ADS)

    Dimitropoulos, Dimitrios

    Electricity industries are experiencing upward cost pressures in many parts of the world. Chapter 1 of this thesis studies the production technology of electricity distributors. Although production and cost functions are mathematical duals, practitioners typically estimate only one or the other. This chapter proposes an approach for joint estimation of production and costs. Combining such quantity and price data has the effect of adding statistical information without introducing additional parameters into the model. We define a GMM estimator that produces internally consistent parameter estimates for both the production function and the cost function. We consider a multi-output framework, and show how to account for the presence of certain types of simultaneity and measurement error. The methodology is applied to data on 73 Ontario distributors for the period 2002-2012. As expected, the joint model results in a substantial improvement in the precision of parameter estimates. Chapter 2 focuses on productivity trends in electricity distribution. We apply two methodologies for estimating productivity growth, an index-based approach and an econometric cost-based approach, to our data on the 73 Ontario distributors for the period 2002 to 2012. The resulting productivity growth estimates are approximately -1% per year, suggesting a reversal of the positive estimates that have generally been reported in previous periods. We implement flexible semi-parametric variants to assess the robustness of these conclusions and discuss the use of such statistical analyses for calibrating productivity and relative efficiencies within a price-cap framework. In chapter 3, I turn to the historically important problem of vertical contractual relations. While the existing literature has established that resale price maintenance is sufficient to coordinate the distribution network of a manufacturer, this chapter asks whether such vertical restraints are necessary. Specifically, I study the vertical contracting problem between an upstream manufacturer and its downstream distributors in a setting where spot market contracts fail, but resale price maintenance cannot be appealed to due to legal prohibition. I show that a bonus scheme based on retail revenues is sufficient to provide incentives to decentralized retailers to elicit the correct levels of both price and service.

  1. Annual Report to the Nation on the Status of Cancer, 1975–2008, Featuring Cancers Associated With Excess Weight and Lack of Sufficient Physical Activity

    PubMed Central

    Eheman, Christie; Henley, S. Jane; Ballard-Barbash, Rachel; Jacobs, Eric J.; Schymura, Maria J.; Noone, Anne-Michelle; Pan, Liping; Anderson, Robert N.; Fulton, Janet E.; Kohler, Betsy A.; Jemal, Ahmedin; Ward, Elizabeth; Plescia, Marcus; Ries, Lynn A. G.; Edwards, Brenda K.

    2015-01-01

    BACKGROUND Annual updates on cancer occurrence and trends in the United States are provided through collaboration between the American Cancer Society (ACS), the Centers for Disease Control and Prevention (CDC), the National Cancer Institute (NCI), and the North American Association of Central Cancer Registries (NAACCR). This year’s report highlights the increased cancer risk associated with excess weight (overweight or obesity) and lack of sufficient physical activity (<150 minutes of physical activity per week). METHODS Data on cancer incidence were obtained from the CDC, NCI, and NAACCR; data on cancer deaths were obtained from the CDC’s National Center for Health Statistics. Annual percent changes in incidence and death rates (age-standardized to the 2000 US population) for all cancers combined and for the leading cancers among men and among women were estimated by joinpoint analysis of long-term trends (incidence for 1992–2008 and mortality for 1975–2008) and short-term trends (1999–2008). Information was obtained from national surveys about the proportion of US children, adolescents, and adults who are overweight, obese, insufficiently physically active, or physically inactive. RESULTS Death rates from all cancers combined decreased from 1999 to 2008, continuing a decline that began in the early 1990s, among men and among women in most racial and ethnic groups. Death rates decreased from 1999 to 2008 for most cancer sites, including the 4 most common cancers (lung, colorectum, breast, and prostate). The incidence of prostate and colorectal cancers also decreased from 1999 to 2008. Lung cancer incidence declined from 1999 to 2008 among men and from 2004 to 2008 among women. Breast cancer incidence decreased from 1999 to 2004 but was stable from 2004 to 2008. Incidence increased for several cancers, including pancreas, kidney, and adenocarcinoma of the esophagus, which are associated with excess weight. CONCLUSIONS Although improvements are reported in the US cancer burden, excess weight and lack of sufficient physical activity contribute to the increased incidence of many cancers, adversely affect quality of life for cancer survivors, and may worsen prognosis for several cancers. The current report highlights the importance of efforts to promote healthy weight and sufficient physical activity in reducing the cancer burden in the United States. PMID:22460733

  2. Earthquake Predictability: Results From Aggregating Seismicity Data And Assessment Of Theoretical Individual Cases Via Synthetic Data

    NASA Astrophysics Data System (ADS)

    Adamaki, A.; Roberts, R.

    2016-12-01

    For many years an important aim in seismological studies has been forecasting the occurrence of large earthquakes. Despite some well-established statistical behavior of earthquake sequences, expressed by e.g. the Omori law for aftershock sequences and the Gutenberg-Richter distribution of event magnitudes, purely statistical approaches to short-term earthquake prediction have in general not been successful. It seems that better understanding of the processes leading to critical stress build-up prior to larger events is necessary to identify useful precursory activity, if this exists, and statistical analyses are an important tool in this context. There has been considerable debate on the usefulness or otherwise of foreshock studies for short-term earthquake prediction. We investigate generic patterns of foreshock activity using aggregated data and by studying not only strong but also moderate magnitude events. Aggregating empirical local seismicity time series prior to larger events observed in and around Greece reveals a statistically significant increasing rate of seismicity over 20 days prior to M>3.5 earthquakes. This increase cannot be explained by spatio-temporal clustering models such as ETAS, implying genuine changes in the mechanical situation just prior to larger events and thus the possible existence of useful precursory information. Because of spatio-temporal clustering, in which aftershocks of one event may act as foreshocks to another, even if such generic behavior exists it does not necessarily follow that foreshocks have the potential to provide useful precursory information for individual larger events. Using synthetic catalogs produced with different clustering models and different presumed system sensitivities, we are now investigating to what extent the apparently established generic foreshock rate acceleration may or may not imply that foreshocks have potential in the context of routine forecasting of larger events. Preliminary results suggest that this is the case, but that it is likely that physically-based models of foreshock clustering will be a necessary, but not necessarily sufficient, basis for successful forecasting.

  3. 12 CFR 715.8 - Requirements for verification of accounts and passbooks.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... selection: (ii) A sample which is representative of the population from which it was selected; (iii) An equal chance of selecting each dollar in the population; (iv) Sufficient accounts in both number and... consistent with GAAS if such methods provide for: (i) Sufficient accounts in both number and scope on which...

  4. 24 CFR 903.2 - With respect to admissions, what must a PHA do to deconcentrate poverty in its developments and...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... statute, such as mixed-income or mixed-finance developments, homeownership programs, self-sufficiency... efforts to increase self-sufficiency or current residents) may include but is not limited to providing for... the PHA's strategic objectives. (2) Determination of compliance with deconcentration requirement. HUD...

  5. From Poverty to Self-Sufficiency: The Role of Postsecondary Education in Welfare Reform.

    ERIC Educational Resources Information Center

    Center for Women Policy Studies, Washington, DC.

    This report provides policymakers with information necessary to demonstrate that postsecondary education is an effective route from poverty to true self-sufficiency and prosperity for low-income women. It discusses the impact of the 1996 Temporary Assistance for Needy Families (TANF) statute and the TANF reauthorization bill, the Personal…

  6. Solid-gel precursor solutions and methods for the fabrication of polymetallicsiloxane coating films

    DOEpatents

    Sugama, Toshifumi

    1992-01-01

    Solutions and preparation methods necessary for the fabrication of metal oxide cross-linked polysiloxane coating films are disclosed. The films are useful in providing heat resistance against oxidation, wear resistance, thermal insulation, and corrosion resistance for substrates. The sol-gel precursor solution comprises a mixture of a monomeric organoalkoxysilane, a metal alkoxide M(OR)n (wherein M is Ti, Zr, Ge or Al; R is CH3, C2H5 or C3H7; and n is 3 or 4), methanol, water, HCl and NaOH. The invention provides a sol-gel solution, and a method of use thereof, which can be applied and processed at low temperatures (i.e., <1000 °C). The substrate can be coated by immersing it in the above-mentioned solution at ambient temperature. The substrate is then withdrawn from the solution. Next, the coated substrate is heated for a time and at a temperature sufficient to yield a solid coating. The coated substrate is then heated for a time and at a temperature sufficient to produce a polymetallicsiloxane coating.

  7. Solid-gel precursor solutions and methods for the fabrication of polymetallicsiloxane coating films

    DOEpatents

    Sugama, Toshifumi

    1993-01-01

    Solutions and preparation methods necessary for the fabrication of metal oxide cross-linked polysiloxane coating films are disclosed. The films are useful in providing heat resistance against oxidation, wear resistance, thermal insulation, and corrosion resistance for substrates. The sol-gel precursor solution comprises a mixture of a monomeric organoalkoxysilane, a metal alkoxide M(OR)n (wherein M is Ti, Zr, Ge or Al; R is CH3, C2H5 or C3H7; and n is 3 or 4), methanol, water, HCl and NaOH. The invention provides a sol-gel solution, and a method of use thereof, which can be applied and processed at low temperatures (i.e., <1000 °C). The substrate can be coated by immersing it in the above-mentioned solution at ambient temperature. The substrate is then withdrawn from the solution. Next, the coated substrate is heated for a time and at a temperature sufficient to yield a solid coating. The coated substrate is then heated for a time and at a temperature sufficient to produce a polymetallicsiloxane coating.

  8. Solid-gel precursor solutions and methods for the fabrication of polymetallicsiloxane coating films

    DOEpatents

    Toshifumi Sugama.

    1993-04-06

    Solutions and preparation methods necessary for the fabrication of metal oxide cross-linked polysiloxane coating films are disclosed. The films are useful in providing heat resistance against oxidation, wear resistance, thermal insulation, and corrosion resistance for substrates. The sol-gel precursor solution comprises a mixture of a monomeric organoalkoxysilane, a metal alkoxide M(OR)n (wherein M is Ti, Zr, Ge or Al; R is CH3, C2H5 or C3H7; and n is 3 or 4), methanol, water, HCl and NaOH. The invention provides a sol-gel solution, and a method of use thereof, which can be applied and processed at low temperatures (i.e., <1000 °C). The substrate can be coated by immersing it in the above-mentioned solution at ambient temperature. The substrate is then withdrawn from the solution. Next, the coated substrate is heated for a time and at a temperature sufficient to yield a solid coating. The coated substrate is then heated for a time and at a temperature sufficient to produce a polymetallicsiloxane coating.

  9. [Effect of vitamin beverages on vitamin sufficiency of the workers of Pskov Hydroelectric Power-Plant].

    PubMed

    Spiricheva, T V; Vrezhesinskaia, O A; Beketova, N A; Pereverzeva, O G; Kosheleva, O V; Kharitonchik, L A; Kodentsova, V M; Iudina, A V; Spirichev, V B

    2010-01-01

    The effect of vitamin complexes consumed as a drink or as kissel on the vitamin status of working persons was studied. Long-term inclusion (6.5 months) of vitamin drinks containing about 80% of the recommended daily intake of vitamins in the diet was accompanied by a significant improvement in vitamin C and B6 status and prevented the seasonal deterioration of beta-carotene status. Because the participants were initially well supplied with vitamins A and E, no increase in the blood serum levels of these vitamins occurred.

  10. Interventions for reducing self-stigma in people with mental illnesses: a systematic review of randomized controlled trials

    PubMed Central

    Büchter, Roland Brian; Messer, Melanie

    2017-01-01

    Background: Self-stigma occurs when people with mental illnesses internalize negative stereotypes and prejudices about their condition. It can reduce help-seeking behaviour and treatment adherence. The effectiveness of interventions aimed at reducing self-stigma in people with mental illness is systematically reviewed. Results are discussed in the context of a logic model of the broader social context of mental illness stigma. Methods: Medline, Embase, PsycINFO, ERIC, and CENTRAL were searched for randomized controlled trials in November 2013. Studies were assessed with the Cochrane risk of bias tool. Results: Five trials were eligible for inclusion, four of which provided data for statistical analyses. Four studies had a high risk of bias. The quality of evidence was very low for each set of interventions and outcomes. The interventions studied included various group based anti-stigma interventions and an anti-stigma booklet. The intensity and fidelity of most interventions was high. Two studies were considered to be sufficiently homogeneous to be pooled for the outcome self-stigma. The meta-analysis did not find a statistically significant effect (SMD [95% CI] at 3 months: –0.26 [–0.64, 0.12], I2=0%, n=108). None of the individual studies found sustainable effects on other outcomes, including recovery, help-seeking behaviour and self-stigma. Conclusions: The effectiveness of interventions against self-stigma is uncertain. Previous studies lacked statistical power, used questionable outcome measures and had a high risk of bias. Future studies should be based on robust methods and consider practical implications regarding intervention development (relevance, implementability, and placement in routine services). PMID:28496396

  11. A framework for relating the structures and recovery statistics in pressure time-series surveys for dust devils

    NASA Astrophysics Data System (ADS)

    Jackson, Brian; Lorenz, Ralph; Davis, Karan

    2018-01-01

    Dust devils are likely the dominant source of dust for the martian atmosphere, but the amount and frequency of dust-lifting depend on the statistical distribution of dust devil parameters. Dust devils exhibit pressure perturbations and, if they pass near a barometric sensor, they may register as a discernible dip in a pressure time-series. Leveraging this fact, several surveys using barometric sensors on landed spacecraft have revealed dust devil structures and occurrence rates. However powerful they are, such surveys suffer from non-trivial biases that skew the inferred dust devil properties. For example, such surveys are most sensitive to dust devils with the widest and deepest pressure profiles, but the recovered profiles will be distorted, broader and shallower than the actual profiles. In addition, such surveys often do not provide wind speed measurements alongside the pressure time series, and so the durations of the dust devil signals in the time series cannot be directly converted to profile widths. Fortunately, simple statistical and geometric considerations can de-bias these surveys, allowing conversion of the duration of dust devil signals into physical widths, given only a distribution of likely translation velocities, and the recovery of the underlying distributions of physical parameters. In this study, we develop a scheme for de-biasing such surveys. Applying our model to an in-situ survey using data from the Phoenix lander suggests a larger dust flux and a dust devil occurrence rate about ten times larger than previously inferred. Comparing our results to dust devil track surveys suggests only about one in five low-pressure cells lifts sufficient dust to leave a visible track.
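
    The duration-to-width conversion at the heart of this de-biasing can be illustrated with a small Monte Carlo sketch. The durations, the assumed wind-speed distribution, and the neglect of miss-distance geometry (which the paper's treatment handles more completely) are all simplifications for illustration.

      import numpy as np

      rng = np.random.default_rng(0)

      # Observed dip durations in seconds (hypothetical values, not survey data).
      durations = np.array([2.1, 3.4, 1.2, 5.0, 2.8, 0.9, 4.2])

      # Assumed distribution of ambient translation speeds (m/s) carrying the
      # devils past the barometer, e.g. from a lander's meteorological record.
      speeds = rng.weibull(2.0, size=100_000) * 5.0

      # Monte Carlo conversion: each duration maps to a distribution of physical
      # diameters d = v * dt; pooling the draws approximates the width distribution.
      draws = speeds[rng.integers(0, speeds.size, size=(durations.size, 2000))]
      widths = draws * durations[:, None]
      print("median inferred width per event (m):",
            np.round(np.median(widths, axis=1), 1))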

  12. Measure Projection Analysis: A Probabilistic Approach to EEG Source Comparison and Multi-Subject Inference

    PubMed Central

    Bigdely-Shamlo, Nima; Mullen, Tim; Kreutz-Delgado, Kenneth; Makeig, Scott

    2013-01-01

    A crucial question for the analysis of multi-subject and/or multi-session electroencephalographic (EEG) data is how to combine information across multiple recordings from different subjects and/or sessions, each associated with its own set of source processes and scalp projections. Here we introduce a novel statistical method for characterizing the spatial consistency of EEG dynamics across a set of data records. Measure Projection Analysis (MPA) first finds voxels in a common template brain space at which a given dynamic measure is consistent across nearby source locations, then computes local-mean EEG measure values for this voxel subspace using a statistical model of source localization error and between-subject anatomical variation. Finally, clustering the mean measure voxel values in this locally consistent brain subspace finds brain spatial domains exhibiting distinguishable measure features and provides 3-D maps plus statistical significance estimates for each EEG measure of interest. Applied to sufficient high-quality data, the scalp projections of many maximally independent component (IC) processes contributing to recorded high-density EEG data closely match the projection of a single equivalent dipole located in or near brain cortex. We demonstrate the application of MPA to a multi-subject EEG study decomposed using independent component analysis (ICA), compare the results to k-means IC clustering in EEGLAB (sccn.ucsd.edu/eeglab), and use surrogate data to test MPA robustness. A Measure Projection Toolbox (MPT) plug-in for EEGLAB is available for download (sccn.ucsd.edu/wiki/MPT). Together, MPA and ICA allow use of EEG as a 3-D cortical imaging modality with near-cm scale spatial resolution. PMID:23370059

  13. Evaluation of person-level heterogeneity of treatment effects in published multiperson N-of-1 studies: systematic review and reanalysis.

    PubMed

    Raman, Gowri; Balk, Ethan M; Lai, Lana; Shi, Jennifer; Chan, Jeffrey; Lutz, Jennifer S; Dubois, Robert W; Kravitz, Richard L; Kent, David M

    2018-05-26

    Individual patients with the same condition may respond differently to similar treatments. Our aim is to summarise the reporting of person-level heterogeneity of treatment effects (HTE) in multiperson N-of-1 studies and to examine the evidence for person-level HTE through reanalysis. Systematic review and reanalysis of multiperson N-of-1 studies. Medline, Cochrane Controlled Trials, EMBASE, Web of Science and review of references through August 2017 for N-of-1 studies published in English. N-of-1 studies of pharmacological interventions with at least two subjects. Citation screening and data extractions were performed in duplicate. We performed statistical reanalysis testing for person-level HTE on all studies presenting person-level data. We identified 62 multiperson N-of-1 studies with at least two subjects. Statistical tests examining HTE were described in only 13 (21%), of which only two (3%) tested person-level HTE. Only 25 studies (40%) provided person-level data sufficient to reanalyse person-level HTE. Reanalysis using a fixed effect linear model identified statistically significant person-level HTE in 8 of the 13 studies (62%) reporting person-level treatment effects and in 8 of the 14 studies (57%) reporting person-level outcomes. Our analysis suggests that person-level HTE is common and often substantial. Reviewed studies had incomplete information on person-level treatment effects and their variation. Improved assessment and reporting of person-level treatment effects in multiperson N-of-1 studies are needed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
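
    As one concrete, hypothetical illustration of testing person-level HTE with a fixed-effect linear model, the sketch below compares nested OLS models with and without a person-by-treatment interaction. The data generation and the model specification are assumptions for illustration, not the reanalysis performed in the paper.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf
      from statsmodels.stats.anova import anova_lm

      rng = np.random.default_rng(0)

      # Hypothetical long-format multiperson N-of-1 data: repeated crossover
      # periods per person, with a person-specific treatment benefit.
      rows = []
      for person in range(8):
          personal_effect = rng.normal(1.0, 0.8)
          for period in range(6):
              for treated in (0, 1):
                  rows.append({"person": person, "treated": treated,
                               "y": 5 + treated * personal_effect + rng.normal(0, 1)})
      df = pd.DataFrame(rows)

      # Person-level HTE test: does a person-by-treatment interaction improve fit?
      reduced = smf.ols("y ~ C(person) + treated", data=df).fit()
      full = smf.ols("y ~ C(person) * treated", data=df).fit()
      print(anova_lm(reduced, full))              # F-test for the interaction term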

  14. Statistical framework for the utilization of simultaneous pupil plane and focal plane telemetry for exoplanet imaging. I. Accounting for aberrations in multiple planes.

    PubMed

    Frazin, Richard A

    2016-04-01

    A new generation of telescopes with mirror diameters of 20 m or more, called extremely large telescopes (ELTs), has the potential to provide unprecedented imaging and spectroscopy of exoplanetary systems, if the difficulties in achieving the extremely high dynamic range required to differentiate the planetary signal from the star can be overcome to a sufficient degree. Fully utilizing the potential of ELTs for exoplanet imaging will likely require simultaneous and self-consistent determination of both the planetary image and the unknown aberrations in multiple planes of the optical system, using statistical inference based on the wavefront sensor and science camera data streams. This approach promises to overcome the most important systematic errors inherent in the various schemes based on differential imaging, such as angular differential imaging and spectral differential imaging. This paper is the first in a series on this subject, in which a formalism is established for the exoplanet imaging problem, setting the stage for the statistical inference methods to follow in the future. Every effort has been made to be rigorous and complete, so that the validity of approximations to be made later can be assessed. Here, the polarimetric image is expressed in terms of aberrations in the various planes of a polarizing telescope with an adaptive optics system. Further, it is shown that current methods that utilize focal plane sensing to correct the speckle field, e.g., electric field conjugation, rely on the tacit assumption that aberrations on multiple optical surfaces can be represented as an aberration on a single optical surface, ultimately limiting their potential effectiveness for ground-based astronomy.

  15. Measurement invariance via multigroup SEM: Issues and solutions with chi-square-difference tests.

    PubMed

    Yuan, Ke-Hai; Chan, Wai

    2016-09-01

    Multigroup structural equation modeling (SEM) plays a key role in studying measurement invariance and in group comparison. When population covariance matrices are deemed not equal across groups, the next step to substantiate measurement invariance is to see whether the sample covariance matrices in all the groups can be adequately fitted by the same factor model, called configural invariance. After configural invariance is established, cross-group equalities of factor loadings, error variances, and factor variances-covariances are then examined in sequence. With mean structures, cross-group equalities of intercepts and factor means are also examined. The established rule is that if the statistic at the current model is not significant at the level of .05, one then moves on to testing the next more restricted model using a chi-square-difference statistic. This article argues that such an established rule is unable to control either Type I or Type II errors. Analysis, an example, and Monte Carlo results show why and how chi-square-difference tests are easily misused. The fundamental issue is that chi-square-difference tests are developed under the assumption that the base model is sufficiently close to the population, and a nonsignificant chi-square statistic tells little about how good the model is. To overcome this issue, this article further proposes that null hypothesis testing in multigroup SEM be replaced by equivalence testing, which allows researchers to effectively control the size of misspecification before moving on to testing a more restricted model. R code is also provided to facilitate the applications of equivalence testing for multigroup SEM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
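
    For context, a small sketch of the conventional chi-square-difference computation that the article critiques; the fit statistics for the two nested multigroup models are hypothetical. The equivalence-testing replacement proposed in the article (with accompanying R code) instead tests whether the misspecification is smaller than a tolerable bound, which this sketch does not implement.

        from scipy.stats import chi2

        def chisq_difference_test(chi2_base, df_base, chi2_restricted, df_restricted):
            """Conventional chi-square-difference test for nested multigroup models."""
            d_stat = chi2_restricted - chi2_base
            d_df = df_restricted - df_base
            return d_stat, d_df, chi2.sf(d_stat, d_df)   # difference, its df, p-value

        # hypothetical fit statistics: configural model vs. equal-loadings model
        print(chisq_difference_test(chi2_base=212.4, df_base=180,
                                    chi2_restricted=231.9, df_restricted=192))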

  16. Diagnostic Accuracy of Computed Tomography Angiography and Magnetic Resonance Angiography in the Stenosis Detection of Autologous Hemodialysis Access: A Meta-Analysis

    PubMed Central

    Liu, Shiyuan

    2013-01-01

    Purpose To compare the diagnostic performance of computed tomography angiography (CTA) and magnetic resonance angiography (MRA) for the detection and assessment of stenosis in patients with autologous hemodialysis access. Materials and Methods Search of the PubMed, MEDLINE, EMBASE and Cochrane Library databases from January 1984 to May 2013 for studies comparing CTA or MRA with digital subtraction angiography (DSA) or surgery for autologous hemodialysis access. Eligible studies were published in English, aimed to detect more than 50% stenosis or occlusion of autologous vascular access in hemodialysis patients with CTA or MRA, and provided sufficient data on diagnostic performance. Methodological quality was assessed with the Quality Assessment of Diagnostic Studies (QUADAS) instrument. Sensitivity (SEN), specificity (SPE), positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR) and area under the receiver operator characteristic curve (AUC) were pooled statistically. Potential threshold effects, heterogeneity and publication bias were evaluated. The clinical utility of CTA and MRA in the detection of stenosis was also investigated. Results Sixteen eligible studies were included, with a total of 500 patients. Both CTA and MRA were accurate modalities for hemodialysis vascular access (sensitivity, 96.2% and 95.4%, respectively; specificity, 97.1% and 96.1%, respectively; DOR, 393.69 and 211.47, respectively). No significant difference was detected between the diagnostic performance of CTA (AUC, 0.988) and MRA (AUC, 0.982). Meta-regression analyses and subgroup analyses revealed no statistical difference. Deeks' funnel plots suggested publication bias. Conclusion The diagnostic performance of CTA and MRA for detecting stenosis of hemodialysis vascular access did not differ statistically. Both techniques may function as an alternative or an important complement to conventional DSA and may help guide medical management. PMID:24194928
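
    As a simplified illustration of one pooled accuracy summary used in such meta-analyses, the sketch below computes the diagnostic odds ratio from hypothetical per-study 2x2 counts and combines the log-DOR values with fixed-effect inverse-variance weights; the published analysis used dedicated bivariate/SROC pooling that this does not reproduce.

        import numpy as np

        def log_dor(tp, fp, fn, tn):
            """Log diagnostic odds ratio and its variance from a 2x2 table."""
            dor = (tp * tn) / (fp * fn)
            var = 1.0 / tp + 1.0 / fp + 1.0 / fn + 1.0 / tn
            return np.log(dor), var

        def pooled_dor(tables):
            """Fixed-effect inverse-variance pooling of log-DOR across studies."""
            logs, variances = zip(*(log_dor(*t) for t in tables))
            w = 1.0 / np.asarray(variances)
            mean = np.sum(w * np.asarray(logs)) / np.sum(w)
            se = np.sqrt(1.0 / np.sum(w))
            return np.exp(mean), np.exp(mean - 1.96 * se), np.exp(mean + 1.96 * se)

        # hypothetical per-study 2x2 counts (TP, FP, FN, TN)
        print(pooled_dor([(40, 2, 3, 55), (28, 1, 2, 30), (61, 4, 2, 70)]))   # DOR and 95% CI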

  17. An application of statistical mechanics for representing equilibrium perimeter distributions of tropical convective clouds

    NASA Astrophysics Data System (ADS)

    Garrett, T. J.; Alva, S.; Glenn, I. B.; Krueger, S. K.

    2015-12-01

    There are two possible approaches for parameterizing sub-grid cloud dynamics in a coarser-grid model. The most common is to use a fine-scale model to explicitly resolve the mechanistic details of clouds to the best extent possible, and then to parameterize the resulting cloud-state behaviors for the coarser grid. A second is to invoke physical intuition and some very general theoretical principles from equilibrium statistical mechanics. This approach avoids any requirement to resolve time-dependent processes in order to arrive at a suitable solution. The second approach is widely used elsewhere in the atmospheric sciences: for example, the Planck function for blackbody radiation is derived this way, where no mention is made of the complexities of modeling a large ensemble of time-dependent radiation-dipole interactions in order to obtain the "grid-scale" spectrum of thermal emission by the blackbody as a whole. We find that this statistical approach may be equally suitable for modeling convective clouds. Specifically, we make the physical argument that the dissipation of buoyant energy in convective clouds is accomplished through mixing across a cloud perimeter. From thermodynamic reasoning, one might then anticipate that vertically stacked isentropic surfaces are characterized by a power law d ln N / d ln P = -1, where N(P) is the number of clouds of perimeter P. In a Giga-LES simulation of convective clouds within a 100 km square domain, we find that such a power law does appear to characterize simulated cloud perimeters along isentropes, provided a sufficiently large cloud sample. The suggestion is that it may be possible to parameterize certain important aspects of cloud state without appealing to computationally expensive dynamic simulations.
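
    A small sketch of how such a perimeter power law can be checked: bin cloud perimeters logarithmically, form the count density N(P), and fit the log-log slope by least squares. The perimeters below are synthetic (drawn so that N(P) is proportional to 1/P); in practice they would come from cloud objects identified on isentropic surfaces of the LES output.

        import numpy as np

        def perimeter_power_law_slope(perimeters, n_bins=20):
            """Estimate d ln N / d ln P by log-log regression on a binned
            perimeter distribution N(P) (clouds per unit perimeter)."""
            p = np.asarray(perimeters, dtype=float)
            edges = np.logspace(np.log10(p.min()), np.log10(p.max()), n_bins + 1)
            counts, _ = np.histogram(p, bins=edges)
            centers = np.sqrt(edges[:-1] * edges[1:])          # geometric bin centers
            density = counts / np.diff(edges)                  # N(P) per unit perimeter
            keep = density > 0
            slope, _ = np.polyfit(np.log(centers[keep]), np.log(density[keep]), 1)
            return slope

        # synthetic check: sample with N(P) proportional to 1/P (log-uniform perimeters)
        rng = np.random.default_rng(2)
        sample = 1000.0 ** rng.uniform(size=20000)             # perimeters spanning 1 to 1000
        print(perimeter_power_law_slope(sample))               # should be close to -1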

  18. Kinetic Properties of Solar Wind Silicon and Iron Ions

    NASA Astrophysics Data System (ADS)

    Janitzek, N. P.; Berger, L.; Drews, C.; Wimmer-Schweingruber, R. F.

    2017-12-01

    Heavy ions with atomic numbers Z>2 account for less than one percent of the solar wind ions. However, serving as test particles with differing mass and charge, they provide a unique experimental approach to major questions of solar and fundamental plasma physics such as coronal heating, the origin and acceleration of the solar wind and wave-particle interaction in magnetized plasma. Yet the low relative abundances of the heavy ions pose substantial challenges to the instrumentation measuring these species with reliable statistics and sufficient time resolution. As a consequence, the number of independent measurements and studies is small. The Charge Time-Of-Flight (CTOF) mass spectrometer, part of the Charge, ELement and Isotope Analysis System (CELIAS) onboard the SOlar and Heliospheric Observatory (SOHO), is a linear time-of-flight mass spectrometer that was operated at Lagrangian point L1 in 1996 for only a few months before it suffered an instrument failure. Despite its short operation time, the CTOF sensor measured solar wind heavy ions with excellent charge state separation, an unprecedented cadence of 5 minutes and very high counting statistics, exceeding similar state-of-the-art instruments by a factor of ten. In contrast to earlier CTOF studies, which were based on reduced, onboard post-processed data, our current studies use raw Pulse Height Analysis (PHA) data, providing significantly increased mass, mass-per-charge and velocity resolution. Focussing on silicon and iron ion measurements, we present an overview of our findings on (1) the short-time behavior of heavy ion 1D radial velocity distribution functions, (2) differential streaming between heavy ions and solar wind bulk protons, and (3) kinetic temperatures of heavy ions. Finally, we compare the CTOF results with measurements of the Solar Wind Ion Composition Spectrometer (SWICS) instrument onboard the Advanced Composition Explorer (ACE).
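
    As a minimal illustration of two of the listed quantities, the sketch below computes differential streaming (the ion bulk speed minus the proton bulk speed) and the 1D kinetic temperature T = m <(v - <v>)^2> / k_B from a hypothetical binned radial velocity distribution; the arrays and parameter values are illustrative stand-ins, not CTOF data.

        import numpy as np

        K_B = 1.380649e-23        # Boltzmann constant (J/K)
        AMU = 1.66053907e-27      # atomic mass unit (kg)

        def drift_and_kinetic_temperature(v, counts, ion_mass_amu, v_proton_bulk):
            """Bulk drift relative to protons and 1D kinetic temperature,
            T = m <(v - <v>)^2> / k_B, from a binned 1D velocity distribution."""
            w = counts / counts.sum()
            v_bulk = np.sum(w * v)                              # ion bulk speed (m/s)
            var = np.sum(w * (v - v_bulk) ** 2)                 # velocity variance (m^2/s^2)
            return v_bulk - v_proton_bulk, ion_mass_amu * AMU * var / K_B

        # toy usage: a Gaussian-like Fe distribution drifting ahead of the protons
        v = np.linspace(350e3, 450e3, 101)                      # m/s
        counts = np.exp(-0.5 * ((v - 415e3) / 20e3) ** 2) * 1e4
        print(drift_and_kinetic_temperature(v, counts, ion_mass_amu=56.0, v_proton_bulk=400e3))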

  19. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
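
    The basic exponential series idea can be sketched in one dimension: approximate the log density by a finite set of polynomial sufficient statistics and fit the coefficients by maximum likelihood, with the normalizer evaluated on a grid. This is a hypothetical illustration under those simplifications, not the thesis' regularized pairwise or score-matching estimators.

        import numpy as np
        from scipy.optimize import minimize

        def fit_exponential_series(x, n_basis=6, grid_size=2000):
            """Fit log p(x) = sum_j theta_j x^j - A(theta) on [0, 1] by maximum
            likelihood, with the number of sufficient statistics set by n_basis."""
            grid = np.linspace(0.0, 1.0, grid_size)
            dx = grid[1] - grid[0]
            phi = lambda t: np.vstack([t ** j for j in range(1, n_basis + 1)]).T
            phi_x, phi_g = phi(np.asarray(x, dtype=float)), phi(grid)
            emp_mean = phi_x.mean(axis=0)                  # empirical sufficient statistics

            def neg_loglik(theta):
                un = phi_g @ theta                         # unnormalized log density on grid
                m = un.max()
                log_a = m + np.log(np.sum(np.exp(un - m)) * dx)   # log normalizer A(theta)
                p_grid = np.exp(un - log_a)
                model_mean = (p_grid[:, None] * phi_g).sum(axis=0) * dx
                return log_a - emp_mean @ theta, model_mean - emp_mean   # value, gradient

            theta = minimize(neg_loglik, np.zeros(n_basis), jac=True, method="L-BFGS-B").x
            un = phi_g @ theta
            log_a = un.max() + np.log(np.sum(np.exp(un - un.max())) * dx)
            return theta, grid, np.exp(un - log_a)         # coefficients and fitted density

        # toy usage: estimate a smooth density on [0, 1] from a clipped-normal sample
        rng = np.random.default_rng(3)
        sample = np.clip(rng.normal(0.4, 0.15, size=2000), 0.0, 1.0)
        theta, grid, density = fit_exponential_series(sample, n_basis=6)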

  20. Estimation of thyroid radiation doses for the Hanford Thyroid Disease Study: results and implications for statistical power of the epidemiological analyses.

    PubMed

    Kopecky, Kenneth J; Davis, Scott; Hamilton, Thomas E; Saporito, Mark S; Onstad, Lynn E

    2004-07-01

    Residents of eastern Washington, northeastern Oregon, and western Idaho were exposed to 131I released into the atmosphere from operations at the Hanford Nuclear Site from 1944 through 1972, especially in the late 1940s and early 1950s. This paper describes the estimated doses to the thyroid glands of the 3,440 evaluable participants in the Hanford Thyroid Disease Study, which investigated whether thyroid morbidity was increased in people exposed to radioactive iodine from Hanford during 1944-1957. The participants were born during 1940-1946 to mothers living in Benton, Franklin, Walla Walla, Adams, Okanogan, Ferry, or Stevens Counties in Washington State. Whenever possible, someone with direct knowledge of the participant's early life (preferably the participant's mother) was interviewed about the participant's individual dose-determining characteristics (residence history, sources and quantities of food, milk, and milk products consumed, production and processing techniques for home-grown food and milk products). Default information was used if no interview respondent was available. Thyroid doses were estimated using the computer program Calculation of Individual Doses from Environmental Radionuclides (CIDER) developed by the Hanford Environmental Dose Reconstruction Project. CIDER provided 100 sets of doses to represent the uncertainty of the estimates. These sets were not generated independently for each participant, but reflected the effects of uncertainties in characteristics shared by participants. Estimated doses (medians of each participant's 100 realizations) ranged from 0.0029 mGy to 2823 mGy, with mean and median of 174 and 97 mGy, respectively. The distribution of estimated doses provided the Hanford Thyroid Disease Study with sufficient statistical power to test for dose-response relationships between thyroid outcomes and exposure to Hanford's 131I.
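
    A tiny sketch of the dose-summary convention described above, with a hypothetical array standing in for the CIDER output: realizations share common uncertainty factors across participants, and each participant's point estimate is the median of their 100 realizations.

        import numpy as np

        # hypothetical stand-in for CIDER output: rows = participants,
        # columns = 100 dose realizations (mGy) sharing common uncertainty factors
        rng = np.random.default_rng(4)
        shared_factor = rng.lognormal(mean=0.0, sigma=0.4, size=100)      # shared across participants
        individual = rng.lognormal(mean=np.log(100.0), sigma=1.2, size=(3440, 1))
        dose_realizations = individual * shared_factor                    # shape (3440, 100)

        # each participant's point estimate is the median over the 100 realizations
        point_estimates = np.median(dose_realizations, axis=1)
        print(point_estimates.min(), np.median(point_estimates), point_estimates.mean(), point_estimates.max())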
