Sample records for class-conditional probability density

  1. Probabilistic cluster labeling of imagery data

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1980-01-01

    The problem of obtaining the probabilities of class labels for the clusters using spectral and spatial information from a given set of labeled patterns and their neighbors is considered. A relationship is developed between class- and cluster-conditional densities in terms of the probabilities of class labels for the clusters. Expressions are presented for updating the a posteriori probabilities of the classes of a pixel using information from its local neighborhood. Fixed-point iteration schemes are developed for obtaining the optimal probabilities of class labels for the clusters. These schemes utilize spatial information and also the probabilities of label imperfections. Experimental results from the processing of remotely sensed multispectral scanner imagery data are presented.
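
    One plausible form of the relationship referred to above, assuming the class label of a pixel depends on the measurement only through its cluster (the notation ω_i for classes, c_k for clusters, and x for the spectral vector is ours, not the report's):

    \[ p(x \mid \omega_i) = \frac{\sum_k P(\omega_i \mid c_k)\, P(c_k)\, p(x \mid c_k)}{\sum_k P(\omega_i \mid c_k)\, P(c_k)} , \]

    i.e. the class-conditional density is a mixture of cluster-conditional densities weighted by the probabilities of class labels for the clusters.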

  2. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
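
    A minimal numerical sketch of the idea for two classes with known means, covariances, and priors: project onto a vector b, form the transformed one-dimensional normal densities, and minimize the resulting Bayes error over b. The variable names and the use of scipy.optimize are illustrative assumptions, not the LFSPMC implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def misclassification_prob(b, mu, Sigma, prior):
        """One-dimensional Bayes error after projecting x -> y = b'x (Gaussian classes)."""
        b = b / np.linalg.norm(b)                                # scale of b is irrelevant
        m = [float(b @ mu_i) for mu_i in mu]                     # projected means
        s = [float(np.sqrt(b @ S_i @ b)) for S_i in Sigma]       # projected std deviations
        lo = min(mi - 6 * si for mi, si in zip(m, s))
        hi = max(mi + 6 * si for mi, si in zip(m, s))
        y = np.linspace(lo, hi, 4000)
        dens = [p * np.exp(-(y - mi) ** 2 / (2 * si ** 2)) / (si * np.sqrt(2 * np.pi))
                for p, mi, si in zip(prior, m, s)]
        # Bayes error = integral of everything except the winning class density
        return np.trapz(np.sum(dens, axis=0) - np.max(dens, axis=0), y)

    # Two hypothetical 3-dimensional classes
    mu = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 0.5])]
    Sigma = [np.eye(3), np.diag([2.0, 0.5, 1.0])]
    prior = [0.6, 0.4]

    res = minimize(misclassification_prob, x0=np.ones(3), args=(mu, Sigma, prior))
    print("optimal projection:", res.x / np.linalg.norm(res.x), "P(error):", res.fun)
    ```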

  3. Pattern recognition for passive polarimetric data using nonparametric classifiers

    NASA Astrophysics Data System (ADS)

    Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.

    2005-08-01

    Passive polarization-based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization-based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of any given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of class-conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), which is a nonparametric method that can approximate Bayes-optimal decision boundaries, and a k-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
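
    A minimal sketch of the PNN ingredient described above: Parzen (Gaussian-kernel) estimates of each class-conditional density combined with class priors under the MAP rule. The smoothing parameter and the toy data are illustrative assumptions.

    ```python
    import numpy as np

    def parzen_density(x, train, sigma):
        """Gaussian-kernel (Parzen) estimate of the class-conditional density p(x | class)."""
        d = train.shape[1]
        k = np.exp(-np.sum((train - x) ** 2, axis=1) / (2 * sigma ** 2))
        return k.mean() / ((2 * np.pi * sigma ** 2) ** (d / 2))

    def pnn_classify(x, class_samples, priors, sigma=0.5):
        """MAP rule: pick the class maximizing prior * estimated likelihood."""
        scores = [p * parzen_density(x, s, sigma) for s, p in zip(class_samples, priors)]
        return int(np.argmax(scores))

    rng = np.random.default_rng(0)
    class_samples = [rng.normal([0.0, 0.0], 0.7, size=(200, 2)),   # hypothetical "background"
                     rng.normal([2.0, 1.0], 0.7, size=(200, 2))]   # hypothetical "target"
    priors = [0.7, 0.3]
    print(pnn_classify(np.array([1.8, 0.9]), class_samples, priors))   # -> 1 (target)
    ```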

  4. Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data

    NASA Astrophysics Data System (ADS)

    Li, Lan; Chen, Erxue; Li, Zengyuan

    2013-01-01

    This paper presents an unsupervised clustering algorithm based on the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the mixture components. The mixture model makes it possible to represent heterogeneous thematic classes that cannot be fitted well by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to form the initial partition. We then use the Wishart probability density function of the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used as the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
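
    For reference, the multilook complex Wishart PDF commonly used for the q×q polarimetric sample covariance matrix C with L looks has the standard form (our notation; the paper's exact parametrization may differ):

    \[ p(C \mid \Sigma) = \frac{L^{qL}\,|C|^{\,L-q}\,\exp\!\bigl(-L\,\mathrm{tr}(\Sigma^{-1} C)\bigr)}{K(L,q)\,|\Sigma|^{\,L}}, \qquad K(L,q) = \pi^{q(q-1)/2}\prod_{i=0}^{q-1}\Gamma(L-i), \]

    where Σ is the covariance matrix of the mixture component (class) under consideration.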

  5. Estimation of proportions in mixed pixels through their region characterization

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    A region of mixed pixels can be characterized through the probability density function of proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.

  6. A comparative study of nonparametric methods for pattern recognition

    NASA Technical Reports Server (NTRS)

    Hahn, S. F.; Nelson, G. D.

    1972-01-01

    The applied research discussed in this report determines and compares the correct classification percentage of the nonparametric sign test, Wilcoxon's signed rank test, and K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier, which assumes the data to be Gaussian even when they are not. The K-class classifier has the advantage over the Bayes classifier in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment were always unimodal.

  7. Nonlinear GARCH model and 1/f noise

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Ruseckas, J.

    2015-06-01

    Auto-regressive conditionally heteroskedastic (ARCH) family models are still used by practitioners in business and economic policy making as conditional volatility forecasting models. Furthermore, ARCH models continue to attract the interest of researchers. In this contribution we consider the well-known GARCH(1,1) process and its nonlinear modifications, reminiscent of the NGARCH model. We investigate the possibility of reproducing power-law statistics, i.e. the probability density function and the power spectral density, using ARCH family models. For this purpose we derive stochastic differential equations from the GARCH processes under consideration. We find the obtained equations to be similar to a general class of stochastic differential equations known to reproduce power-law statistics. We show that the linear GARCH(1,1) process has a power-law distribution, but its power spectral density is Brownian-noise-like. The nonlinear modifications, however, exhibit both a power-law distribution and a power spectral density of the 1/f^β form, including 1/f noise.
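
    A minimal simulation sketch of the standard GARCH(1,1) recursion referenced above; the parameter values are illustrative only.

    ```python
    import numpy as np

    def simulate_garch11(n, omega=1e-5, alpha=0.1, beta=0.88, seed=0):
        """Simulate z_t = sigma_t * eps_t with sigma_t^2 = omega + alpha*z_{t-1}^2 + beta*sigma_{t-1}^2."""
        rng = np.random.default_rng(seed)
        eps = rng.standard_normal(n)
        z = np.empty(n)
        var = omega / (1.0 - alpha - beta)        # start at the unconditional variance
        for t in range(n):
            z[t] = np.sqrt(var) * eps[t]
            var = omega + alpha * z[t] ** 2 + beta * var
        return z

    returns = simulate_garch11(100_000)
    # Heavy (power-law-like) tails show up as excess kurtosis relative to a Gaussian
    print("kurtosis:", ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2)
    ```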

  8. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions

    PubMed Central

    Storkel, Holly L.; Lee, Jaehoon; Cox, Casey

    2016-01-01

    Purpose Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Method Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. Results The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. Conclusions As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise. PMID:27788276

  9. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions.

    PubMed

    Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey

    2016-11-01

    Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.

  10. Propensity, Probability, and Quantum Theory

    NASA Astrophysics Data System (ADS)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  11. Analysing designed experiments in distance sampling

    Treesearch

    Stephen T. Buckland; Robin E. Russell; Brett G. Dickson; Victoria A. Saab; Donal N. Gorman; William M. Block

    2009-01-01

    Distance sampling is a survey technique for estimating the abundance or density of wild animal populations. Detection probabilities of animals inherently differ by species, age class, habitats, or sex. By incorporating the change in an observer's ability to detect a particular class of animals as a function of distance, distance sampling leads to density estimates...

  12. A comparison of selected parametric and imputation methods for estimating snag density and snag quality attributes

    USGS Publications Warehouse

    Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam

    2012-01-01

    Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogenous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. The logistic regression model achieved more accurate presence/absence classification of large snags than the RF imputation approach. Adjusting the decision threshold to account for unequal size for presence and absence classes is more straightforward for the logistic regression than for the RF imputation approach. Overall, model accuracies were poor in this study, which can be attributed to the poor predictive quality of the explanatory variables and the large range of forest types and geographic conditions observed in the data.

  13. Prevalence and co-occurrence of addictive behaviors among former alternative high school youth: A longitudinal follow-up study.

    PubMed

    Sussman, Steve; Pokhrel, Pallav; Sun, Ping; Rohrbach, Louise A; Spruijt-Metz, Donna

    2015-09-01

    Recent work has studied addictions using a matrix measure, which taps multiple addictions through single responses for each type. This is the first longitudinal study using a matrix measure. We investigated the use of this approach among former alternative high school youth (average age = 19.8 years at baseline; longitudinal n = 538) at risk for addictions. Lifetime and last 30-day prevalence of one or more of 11 addictions reviewed in other work was the primary focus (i.e., cigarettes, alcohol, hard drugs, shopping, gambling, Internet, love, sex, eating, work, and exercise). These were examined at two time-points one year apart. Latent class and latent transition analyses (LCA and LTA) were conducted in Mplus. Prevalence rates were stable across the two time-points. As in the cross-sectional baseline analysis, the 2-class model (addiction class, non-addiction class) fit the data better at follow-up than models with more classes. Item-response or conditional probabilities for each addiction type did not differ between time-points. As a result, the LTA model that was fitted constrained the conditional probabilities to be equal across the two time-points. In the addiction class, larger conditional probabilities (i.e., 0.40-0.49) were found for love, sex, exercise, and work addictions; medium conditional probabilities (i.e., 0.17-0.27) were found for cigarette, alcohol, other drugs, eating, Internet and shopping addiction; and a small conditional probability (0.06) was found for gambling. Persons in an addiction class tend to remain in this addiction class over a one-year period.

  14. A simple probabilistic model of initiation of motion of poorly-sorted granular mixtures subjected to a turbulent flow

    NASA Astrophysics Data System (ADS)

    Ferreira, Rui M. L.; Ferrer-Boix, Carles; Hassan, Marwan

    2015-04-01

    Initiation of sediment motion is a classic problem of sediment and fluid mechanics that has been studied at a wide range of scales. By analysis at the channel scale one means the investigation of a reach of a stream, sufficiently large to encompass a large number of sediment grains but sufficiently small not to experience important variations in key hydrodynamic variables. At this scale, and for poorly-sorted hydraulically rough granular beds, existing studies show a wide variation of the value of the critical Shields parameter. Such uncertainty constitutes a problem for engineering studies. To go beyond the Shields paradigm for the study of incipient motion at the channel scale, the problem can be cast in probabilistic terms. An empirical probability of entrainment, which naturally accounts for size-selective transport, can be calculated at the scale of the bed reach, using (a) the probability density functions (PDFs) of the flow velocities \(f_u(u \mid x_n)\) over the bed reach, where \(u\) is the flow velocity and \(x_n\) is the location, (b) the PDF of the variability of competent velocities for the entrainment of individual particles, \(f_{u_p}(u_p)\), where \(u_p\) is the competent velocity, and (c) the concept of joint probability of entrainment and grain size. One must first divide the mixture into several classes \(M\) and assign a corresponding frequency \(p_M\). For each class, a conditional PDF of the competent velocity \(f_{u_p}(u_p \mid M)\) is obtained from the PDFs of the parameters that intervene in the model for the entrainment of a single particle, \[ \frac{u_p}{\sqrt{g(s-1)d_i}} = \Phi_u\!\left(\{C_k\},\{\varphi_k\},\psi,\frac{u_p d_i}{\nu^{(w)}}\right), \] where \(\{C_k\}\) is a set of shape parameters that characterize the non-sphericity of the grain, \(\{\varphi_k\}\) is a set of angles that describe the orientation of the particle axes and its positioning relative to its neighbours, \(\psi\) is the skin friction angle of the particles, \(u_p d_i/\nu^{(w)}\) is a particle Reynolds number, \(d_i\) is the sieving diameter of the particle, \(g\) is the acceleration of gravity and \(\Phi_u\) is a general function. For the same class, the probability density function of the instantaneous turbulent velocities \(f_u(u \mid M)\) can be obtained from judicious laboratory or field work. From these probability densities, the empirical conditional probability of entrainment of class \(M\) is \[ P(E \mid M) = \int_{-\infty}^{+\infty} P(u > u_p \mid M)\, f_{u_p}(u_p \mid M)\, du_p, \qquad P(u > u_p \mid M) = \int_{u_p}^{+\infty} f_u(u \mid M)\, du. \] Employing a frequentist interpretation of probability, for an actual bed reach subjected to a succession of \(N\) (turbulent) flows, the above equation states that \(N\,P(E \mid M)\) is the number of flows in which the grains of class \(M\) are entrained. The joint probability of entrainment and class \(M\) is given by the product \(P(E \mid M)\,p_M\). Hence, the channel-scale empirical probability of entrainment is the marginal probability \[ P(E) = \sum_M P(E \mid M)\, p_M, \] since the classes \(M\) are mutually exclusive.
    Fractional bedload transport rates can be obtained from the probability of entrainment through \(q_{s,M} = E_M\, \ell_{s,M}\), where \(q_{s,M}\) is the bedload discharge in volume per unit width of size fraction \(M\), \(E_M\) is the entrainment rate per unit bed area of that size fraction, calculated from the probability of entrainment as \(E_M = P(E \mid M)\, p_M (1-\lambda)\, d/(2T)\), where \(d\) is a characteristic diameter of grains on the bed surface, \(\lambda\) is the bed porosity, \(T\) is the integral length scale of the longitudinal velocity at the elevation of the crests of the roughness elements, and \(\ell_{s,M}\) is the mean displacement length of class \(M\). Fractional transport rates were computed and compared with experimental data, determined from bedload samples collected in a 12 m long, 40 cm wide channel under uniform flow conditions and sediment recirculation. The median diameter of the bulk bed mixture was 3.2 mm and the geometric standard deviation was 1.7. Shields parameters ranged from 0.027 to 0.067, while the boundary Reynolds number ranged between 220 and 376. Instantaneous velocities were measured with 2-component Laser Doppler Anemometry. The results of the probabilistic model exhibit generally good agreement with the laboratory data. However, the probability of entrainment of the smallest size fractions is systematically underestimated. This may be caused by phenomena that are absent from the model, for instance the increased magnitude of hydrodynamic actions following the displacement of a larger sheltering grain, and the fact that the collective entrainment of smaller grains following one large turbulent event is not accounted for. This work was partially funded by FEDER, program COMPETE, and by national funds through the Portuguese Foundation for Science and Technology (FCT) project RECI/ECM-HID/0371/2012.
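
    A small numerical illustration of the conditional-entrainment integral above, with both within-class PDFs taken as Gaussians purely for the sake of the example (the distributions, parameter values and class frequency are assumptions, not the study's data):

    ```python
    import numpy as np
    from scipy import stats

    f_u  = stats.norm(loc=0.35, scale=0.10)    # instantaneous flow velocity u, class M   [m/s]
    f_up = stats.norm(loc=0.40, scale=0.05)    # competent velocity u_p, class M          [m/s]

    up = np.linspace(f_up.ppf(1e-6), f_up.ppf(1 - 1e-6), 2000)
    # P(E|M) = integral of P(u > u_p | M) * f_{u_p}(u_p | M) du_p, with P(u > u_p | M) = 1 - F_u(u_p)
    P_E_given_M = np.trapz(f_u.sf(up) * f_up.pdf(up), up)

    p_M = 0.25                                 # hypothetical frequency of size class M
    print("P(E|M) =", P_E_given_M, " joint probability P(E|M) p_M =", P_E_given_M * p_M)
    ```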

  15. Prevalence and co-occurrence of addictive behaviors among former alternative high school youth: A longitudinal follow-up study

    PubMed Central

    Sussman, Steve; Pokhrel, Pallav; Sun, Ping; Rohrbach, Louise A.; Spruijt-Metz, Donna

    2015-01-01

    Background and Aims Recent work has studied addictions using a matrix measure, which taps multiple addictions through single responses for each type. This is the first longitudinal study using a matrix measure. Methods We investigated the use of this approach among former alternative high school youth (average age = 19.8 years at baseline; longitudinal n = 538) at risk for addictions. Lifetime and last 30-day prevalence of one or more of 11 addictions reviewed in other work was the primary focus (i.e., cigarettes, alcohol, hard drugs, shopping, gambling, Internet, love, sex, eating, work, and exercise). These were examined at two time-points one year apart. Latent class and latent transition analyses (LCA and LTA) were conducted in Mplus. Results Prevalence rates were stable across the two time-points. As in the cross-sectional baseline analysis, the 2-class model (addiction class, non-addiction class) fit the data better at follow-up than models with more classes. Item-response or conditional probabilities for each addiction type did not differ between time-points. As a result, the LTA model that was fitted constrained the conditional probabilities to be equal across the two time-points. In the addiction class, larger conditional probabilities (i.e., 0.40−0.49) were found for love, sex, exercise, and work addictions; medium conditional probabilities (i.e., 0.17−0.27) were found for cigarette, alcohol, other drugs, eating, Internet and shopping addiction; and a small conditional probability (0.06) was found for gambling. Discussion and Conclusions Persons in an addiction class tend to remain in this addiction class over a one-year period. PMID:26551909

  16. Class dependency of fuzzy relational database using relational calculus and conditional probability

    NASA Astrophysics Data System (ADS)

    Deni Akbar, Mohammad; Mizoguchi, Yoshihiro; Adiwijaya

    2018-03-01

    In this paper, we propose a design of a fuzzy relational database to deal with a conditional probability relation using fuzzy relational calculus. Previous research has examined equivalence classes in fuzzy databases using similarity or approximate relations, and it is an interesting topic to investigate fuzzy dependency using equivalence classes. Our goal is to introduce a formulation of a fuzzy relational database model using the relational calculus on the category of fuzzy relations. We also introduce general formulas of the relational calculus for database operations such as 'projection', 'selection', 'injection' and 'natural join'. Using the fuzzy relational calculus and conditional probabilities, we introduce notions of equivalence class, redundancy, and dependency in the theory of fuzzy relational databases.

  17. Exploring the full natural variability of eruption sizes within probabilistic hazard assessment of tephra dispersal

    NASA Astrophysics Data System (ADS)

    Selva, Jacopo; Sandri, Laura; Costa, Antonio; Tonini, Roberto; Folch, Arnau; Macedonio, Giovanni

    2014-05-01

    The intrinsic uncertainty and variability associated with the size of the next eruption strongly affect short- to long-term tephra hazard assessment. Often, emergency plans are established accounting for the effects of one or a few representative scenarios (meant as a specific combination of eruptive size and vent position), selected with subjective criteria. Probabilistic hazard assessments (PHA), by contrast, consistently explore the natural variability of such scenarios. PHA for tephra dispersal needs the definition of eruptive scenarios (usually by grouping possible eruption sizes and vent positions into classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA results from combining simulations considering different volcanological and meteorological conditions through a weight given by their specific probability of occurrence. However, volcanological parameters, such as erupted mass, eruption column height and duration, bulk granulometry, and fraction of aggregates, typically encompass a wide range of values. Because of such variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. Here we propose a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with specific probability density functions, and meteorological and volcanological inputs are chosen using a stratified sampling method. This procedure avoids the bias introduced by selecting single representative scenarios and thus neglecting most of the intrinsic eruptive variability. When considering within-size-class variability, attention must be paid to appropriately weighting events falling within the same size class. While a uniform weight for all the events belonging to a size class is the most straightforward idea, it implies a strong dependence on the thresholds dividing classes: under this choice, the largest event of a size class has a much larger weight than the smallest event of the subsequent size class. To overcome this problem, in this study we propose an innovative solution able to smoothly link the weight variability within each size class to the variability among the size classes through a common power law and, simultaneously, respect the probability of the different size classes conditional on the occurrence of an eruption. Embedding this procedure into the Bayesian Event Tree scheme enables tephra fall PHA, quantified through hazard curves and maps that provide readable results applicable in planning risk mitigation actions, together with the quantification of its epistemic uncertainties. As examples, we analyze long-term tephra fall PHA at Vesuvius and Campi Flegrei. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used to explore different meteorological conditions. The results obtained clearly show that PHA accounting for the whole natural variability significantly differs from that based on representative scenarios, as in common volcanic hazard practice.

  18. Univariate Probability Distributions

    ERIC Educational Resources Information Center

    Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.

    2012-01-01

    We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…

  19. Twin density of aragonite in molluscan shells characterized using X-ray diffraction and transmission electron microscopy

    NASA Astrophysics Data System (ADS)

    Kogure, Toshihiro; Suzuki, Michio; Kim, Hyejin; Mukai, Hiroki; Checa, Antonio G.; Sasaki, Takenori; Nagasawa, Hiromichi

    2014-07-01

    {110} twin density in aragonites constituting various microstructures of molluscan shells has been characterized using X-ray diffraction (XRD) and transmission electron microscopy (TEM), to find the factors that determine the density in the shells. Several aragonite crystals of geological origin were also investigated for comparison. The twin density is strongly dependent on the microstructures and species of the shells. The nacreous structure has a very low twin density regardless of the shell classes. On the other hand, the twin density in the crossed-lamellar (CL) structure has large variation among classes or subclasses, which is mainly related to the crystallographic direction of the constituting aragonite fibers. TEM observation suggests two types of twin structures in aragonite crystals with dense {110} twins: rather regulated polysynthetic twins with parallel twin planes, and unregulated polycyclic ones with two or three directions for the twin planes. The former is probably characteristic in the CL structures of specific subclasses of Gastropoda. The latter type is probably related to the crystal boundaries dominated by (hk0) interfaces in the microstructures with preferred orientation of the c-axis, and the twin density is mainly correlated to the crystal size in the microstructures.

  20. Partial least squares density modeling (PLS-DM) - a new class-modeling strategy applied to the authentication of olives in brine by near-infrared spectroscopy.

    PubMed

    Oliveri, Paolo; López, M Isabel; Casolino, M Chiara; Ruisánchez, Itziar; Callao, M Pilar; Medini, Luca; Lanteri, Silvia

    2014-12-03

    A new class-modeling method, referred to as partial least squares density modeling (PLS-DM), is presented. The method is based on partial least squares (PLS), using a distance-based sample density measurement as the response variable. A potential-function probability density is subsequently calculated on the PLS scores and used, jointly with residual Q statistics, to develop efficient class models. The influence of adjustable model parameters on the resulting performances has been critically studied by means of cross-validation and application of the Pareto optimality criterion. The method has been applied to verify the authenticity of olives in brine from the cultivar Taggiasca, based on near-infrared (NIR) spectra recorded on homogenized solid samples. Two independent test sets were used for model validation. The final optimal model was characterized by high efficiency and an equitable balance between sensitivity and specificity values when compared with those obtained by application of well-established class-modeling methods, such as soft independent modeling of class analogy (SIMCA) and unequal dispersed classes (UNEQ). Copyright © 2014 Elsevier B.V. All rights reserved.
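
    A rough sketch of the ingredients named above (distance-based density response, PLS scores, potential-function density, Q residuals). The choice of k for the density response, the scikit-learn/scipy calls, and the thresholding rule are our assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from scipy.stats import gaussian_kde

    def density_response(X, k=5):
        """Distance-based sample density: inverse mean distance to the k nearest neighbours."""
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        np.fill_diagonal(D, np.inf)
        return 1.0 / np.sort(D, axis=1)[:, :k].mean(axis=1)

    X_target = np.random.default_rng(1).normal(size=(60, 50))   # placeholder spectra (target class)

    pls = PLSRegression(n_components=3)
    pls.fit(X_target, density_response(X_target))                # PLS on the density response
    scores = pls.transform(X_target)

    kde = gaussian_kde(scores.T)                                 # potential-function density on scores
    q_resid = np.sum((X_target - pls.inverse_transform(scores)) ** 2, axis=1)   # Q (residual) statistics

    # A new sample is accepted into the class model if its score density is high enough
    # and its Q residual is small enough (thresholds set here from training quantiles).
    dens_thr, q_thr = np.quantile(kde(scores.T), 0.05), np.quantile(q_resid, 0.95)
    ```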

  1. Self-focusing quantum states

    NASA Astrophysics Data System (ADS)

    Villanueva, Anthony Allan D.

    2018-02-01

    We discuss a class of solutions of the time-dependent Schrödinger equation such that the position uncertainty temporarily decreases. This self-focusing or contractive behavior is a consequence of the anti-correlation of the position and momentum observables. Since the associated position density satisfies a continuity equation, upon contraction the probability current at a given fixed point may flow in the opposite direction of the group velocity of the wave packet. For definiteness, we consider a free particle incident from the left of the origin, and establish a condition for the initial position-momentum correlation such that a negative probability current at the origin is possible. This implies a decrease in the particle's detection probability in the region x > 0, and we calculate how long this occurs. Analogous results are obtained for a particle subject to a uniform gravitational force if we consider the particle approaching the turning point. We show that position-momentum anti-correlation may cause a negative probability current at the turning point, leading to a temporary decrease in the particle's detection probability in the classically forbidden region.
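
    For reference, the continuity equation and probability current invoked here take the standard one-dimensional form (textbook notation, not specific to this paper):

    \[ \frac{\partial}{\partial t}\,|\psi(x,t)|^2 + \frac{\partial j(x,t)}{\partial x} = 0, \qquad j(x,t) = \frac{\hbar}{m}\,\operatorname{Im}\!\left[\psi^{*}(x,t)\,\frac{\partial \psi(x,t)}{\partial x}\right]. \]

    A negative j(0, t) for a packet whose group velocity points in the +x direction is exactly the situation described above: probability flowing opposite to the group velocity while the packet contracts.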

  2. Integrating multiple fitting regression and Bayes decision for cancer diagnosis with transcriptomic data from tumor-educated blood platelets.

    PubMed

    Huang, Guangzao; Yuan, Mingshun; Chen, Moliang; Li, Lei; You, Wenjie; Li, Hanjie; Cai, James J; Ji, Guoli

    2017-10-07

    The application of machine learning in cancer diagnostics has shown great promise and is of importance in clinical settings. Here we consider applying machine learning methods to transcriptomic data derived from tumor-educated platelets (TEPs) from individuals with different types of cancer. We aim to define a reliability measure for diagnostic purposes to increase the potential for facilitating personalized treatments. To this end, we present a novel classification method called MFRB (for Multiple Fitting Regression and Bayes decision), which integrates the process of multiple fitting regression (MFR) with Bayes decision theory. MFR is first used to map the multidimensional features of the transcriptomic data into a one-dimensional feature. The probability density function of each class in the mapped space is then fitted with a Gaussian probability density function. Finally, Bayes decision theory is used to build a probabilistic classifier with the estimated probability density functions. The output of MFRB can be used to determine which class a sample belongs to, as well as to assign a reliability measure for a given class. The classical support vector machine (SVM) and probabilistic SVM (PSVM) are used to evaluate the performance of the proposed method with simulated and real TEP datasets. Our results indicate that the proposed MFRB method achieves the best performance compared to SVM and PSVM, mainly due to its strong generalization ability for limited, imbalanced, and noisy data.
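
    A rough sketch of the three steps named above (regression to a one-dimensional feature, per-class Gaussian densities, Bayes decision with a reliability output). An ordinary least-squares regression is used here as a stand-in for the authors' MFR step, and the data are placeholders.

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.linear_model import LinearRegression

    # Placeholder data: X = transcriptomic features, y = class labels (0/1)
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, size=(80, 200)),
                   rng.normal(0.3, 1.0, size=(80, 200))])
    y = np.repeat([0, 1], 80)

    # Step 1 (stand-in for MFR): regress labels on features to map X to one dimension
    reg = LinearRegression().fit(X, y)
    z = reg.predict(X)

    # Step 2: fit a Gaussian density to the mapped feature of each class
    params = {c: (z[y == c].mean(), z[y == c].std(ddof=1)) for c in (0, 1)}
    priors = {c: float(np.mean(y == c)) for c in (0, 1)}

    # Step 3: Bayes decision, with the winning posterior used as a reliability measure
    def classify(x_new):
        z_new = reg.predict(x_new.reshape(1, -1))[0]
        post = {c: priors[c] * norm.pdf(z_new, *params[c]) for c in (0, 1)}
        label = max(post, key=post.get)
        return label, post[label] / sum(post.values())

    print(classify(X[100]))    # expected: class 1, with its posterior reliability
    ```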

  3. Time-dependent earthquake probabilities

    USGS Publications Warehouse

    Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.

    2005-01-01

    We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading, as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have recast these approaches in a framework based on a simple, generalized rate-change formulation and applied it to both to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults, as in background and aftershock seismicity, and (2) changes in estimates of the conditional probability of failure of a single fault, for which the notion of failure rate corresponds to successive failures of different members of a population of faults. The latter application requires specification of some probability distribution (density function, or PDF) that describes some population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.

  4. Generalized fish life-cycle population model and computer program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeAngelis, D. L.; Van Winkle, W.; Christensen, S. W.

    1978-03-01

    A generalized fish life-cycle population model and computer program have been prepared to evaluate the long-term effect of changes in mortality in age class 0. The general question concerns what happens to a fishery when density-independent sources of mortality are introduced that act on age class 0, particularly entrainment and impingement at power plants. This paper discusses the model formulation and computer program, including sample results. The population model consists of a system of difference equations involving age-dependent fecundity and survival. The fecundity for each age class is assumed to be a function of both the fraction of females sexually mature and the weight of females as they enter each age class. Natural mortality for age classes 1 and older is assumed to be independent of population size. Fishing mortality is assumed to vary with the number and weight of fish available to the fishery. Age class 0 is divided into six life stages. The probability of survival for age class 0 is estimated considering both density-independent mortality (natural and power plant) and density-dependent mortality for each life stage. Two types of density-dependent mortality are included: cannibalism of each life stage by older age classes and intra-life-stage competition.

  5. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    PubMed

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

  6. Evolution of column density distributions within Orion A⋆

    NASA Astrophysics Data System (ADS)

    Stutz, A. M.; Kainulainen, J.

    2015-05-01

    We compare the structure of star-forming molecular clouds in different regions of Orion A to determine how the column density probability distribution function (N-PDF) varies with environmental conditions such as the fraction of young protostars. A correlation between the N-PDF slope and Class 0 protostar fraction has been previously observed in a low-mass star-formation region (Perseus); here we test whether a similar correlation is observed in a high-mass star-forming region. We used Herschel PACS and SPIRE cold dust emission observations to derive a column density map of Orion A. We used the Herschel Orion Protostar Survey catalog to accurately identify and classify the Orion A young stellar object content, including the cold and relatively short-lived Class 0 protostars (with a lifetime of ~0.14 Myr). We divided Orion A into eight independent regions of 0.25 square degrees (13.5 pc²); in each region we fit the N-PDF distribution with a power law, and we measured the fraction of Class 0 protostars. We used a maximum-likelihood method to measure the N-PDF power-law index without binning the column density data. We find that the Class 0 fraction is higher in regions with flatter column density distributions. We tested the effects of incompleteness, extinction-driven misclassification of Class 0 sources, resolution, and adopted pixel-scales. We show that these effects cannot account for the observed trend. Our observations demonstrate an association between the slope of the power-law N-PDF and the Class 0 fractions within Orion A. Various interpretations are discussed, including timescales based on the Class 0 protostar fraction assuming a constant star-formation rate. The observed relation suggests that the N-PDF can be related to an evolutionary state of the gas. If universal, such a relation permits evaluating the evolutionary state from the N-PDF power-law index at much greater distances than those accessible with protostar counts. Appendices are available in electronic form at http://www.aanda.org. The N(H) map as a FITS file is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/577/L6
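
    A minimal sketch of the binning-free maximum-likelihood power-law fit mentioned above (the standard estimator for p(N) ∝ N^(-α) above a threshold N_min; the synthetic data and threshold are illustrative):

    ```python
    import numpy as np

    def powerlaw_index_mle(N, N_min):
        """MLE of alpha for p(N) ~ N^(-alpha), N >= N_min, without binning the data."""
        tail = N[N >= N_min]
        alpha = 1.0 + tail.size / np.sum(np.log(tail / N_min))
        err = (alpha - 1.0) / np.sqrt(tail.size)       # standard error of the estimate
        return alpha, err

    rng = np.random.default_rng(42)
    alpha_true, N_min = 2.5, 1e21
    # Inverse-CDF sampling from a pure power law with index alpha_true above N_min
    N = N_min * (1.0 - rng.random(50_000)) ** (-1.0 / (alpha_true - 1.0))
    print(powerlaw_index_mle(N, N_min))                # approximately (2.5, 0.007)
    ```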

  7. Bayes estimation on parameters of the single-class classifier. [for remotely sensed crop data

    NASA Technical Reports Server (NTRS)

    Lin, G. C.; Minter, T. C.

    1976-01-01

    Normal procedures used for designing a Bayes classifier to classify wheat as the major crop of interest require not only training samples of wheat but also those of nonwheat. Therefore, ground truth must be available for the class of interest plus all confusion classes. The single-class Bayes classifier classifies data into the class of interest or the class 'other' but requires training samples only from the class of interest. This paper will present a procedure for Bayes estimation on the mean vector, covariance matrix, and a priori probability of the single-class classifier using labeled samples from the class of interest and unlabeled samples drawn from the mixture density function.
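
    A rough sketch of the single-class decision described above: a Gaussian model and known prior for the class of interest, the mixture density estimated from unlabeled samples, and assignment to the class of interest when its posterior exceeds one half. The kernel density estimate for the mixture and the toy data are our assumptions for illustration.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal, gaussian_kde

    def single_class_posterior(x, mean, cov, prior, unlabeled):
        """P(wheat | x) = prior * p(x | wheat) / p(x), with p(x) estimated from unlabeled data."""
        p_class = multivariate_normal(mean, cov).pdf(x)
        p_mix = gaussian_kde(unlabeled.T)(np.atleast_2d(x).T)[0]
        return prior * p_class / p_mix

    rng = np.random.default_rng(3)
    wheat = rng.multivariate_normal([2, 2], np.eye(2), 300)         # labeled class of interest
    other = rng.multivariate_normal([-1, 0], 2 * np.eye(2), 700)    # drawn from "other" (labels unknown)
    unlabeled = np.vstack([wheat, other])                           # mixture observed over the scene

    post = single_class_posterior(np.array([2.1, 1.8]),
                                  wheat.mean(axis=0), np.cov(wheat.T), prior=0.3,
                                  unlabeled=unlabeled)
    print("wheat" if post > 0.5 else "other", post)
    ```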

  8. 49 CFR 173.50 - Class 1-Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... insensitive that there is very little probability of initiation or of transition from burning to detonation under normal conditions of transport. 1 The probability of transition from burning to detonation is... contain only extremely insensitive detonating substances and which demonstrate a negligible probability of...

  9. REGULATION OF GEOGRAPHIC VARIABILITY IN HAPLOID:DIPLOID RATIOS OF BIPHASIC SEAWEED LIFE CYCLES(1).

    PubMed

    da Silva Vieira, Vasco Manuel Nobre de Carvalho; Santos, Rui Orlando Pimenta

    2012-08-01

    The relative abundance of haploid and diploid individuals (H:D) in isomorphic marine algal biphasic cycles varies spatially, but only if vital rates of haploid and diploid phases vary differently with environmental conditions (i.e. conditional differentiation between phases). Vital rates of isomorphic phases in particular environments may be determined by subtle morphological or physiological differences. Herein, we test numerically how geographic variability in H:D is regulated by conditional differentiation between isomorphic life phases and the type of life strategy of populations (i.e. life cycles dominated by reproduction, survival or growth). Simulation conditions were selected using available data on H:D spatial variability in seaweeds. Conditional differentiation between ploidy phases had a small effect on the H:D variability for species with life strategies that invest either in fertility or in growth. Conversely, species with life strategies that invest mainly in survival, exhibited high variability in H:D through a conditional differentiation in stasis (the probability of staying in the same size class), breakage (the probability of changing to a smaller size class) or growth (the probability of changing to a bigger size class). These results were consistent with observed geographic variability in H:D of natural marine algae populations. © 2012 Phycological Society of America.

  10. Leukocyte Recognition Using EM-Algorithm

    NASA Astrophysics Data System (ADS)

    Colunga, Mario Chirinos; Siordia, Oscar Sánchez; Maybank, Stephen J.

    This document describes a method for classifying images of blood cells. Three different classes of cells are used: Band Neutrophils, Eosinophils and Lymphocytes. The image pattern is projected down to a lower-dimensional subspace using PCA; the probability density function for each class is modeled with a Gaussian mixture using the EM-Algorithm. A new cell image is classified using the maximum a posteriori decision rule.
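
    A compact sketch of the pipeline described above (PCA projection, one Gaussian mixture per class fitted by EM, MAP decision); scikit-learn is used as a stand-in implementation and the feature vectors are placeholders for real cell images.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    def fit_cell_classifier(X, y, n_components=10, n_mix=3):
        """PCA projection + one Gaussian mixture (fitted by EM) per cell class."""
        pca = PCA(n_components=n_components).fit(X)
        Z = pca.transform(X)
        classes = np.unique(y)
        mixtures = {c: GaussianMixture(n_components=n_mix, random_state=0).fit(Z[y == c])
                    for c in classes}
        priors = {c: np.mean(y == c) for c in classes}
        return pca, mixtures, priors, classes

    def classify(x, pca, mixtures, priors, classes):
        """Maximum a posteriori decision rule over the learned class densities."""
        z = pca.transform(x.reshape(1, -1))
        log_post = [np.log(priors[c]) + mixtures[c].score_samples(z)[0] for c in classes]
        return classes[int(np.argmax(log_post))]

    # Placeholder feature vectors standing in for the three leukocyte classes
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 64)); y = np.repeat([0, 1, 2], 100)
    X[y == 1] += 0.5; X[y == 2] -= 0.5
    model = fit_cell_classifier(X, y)
    print(classify(X[150], *model))    # expected: 1
    ```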

  11. Multiple Chronic Conditions and Hospitalizations Among Recipients of Long-Term Services and Supports

    PubMed Central

    Van Cleave, Janet H.; Egleston, Brian L.; Abbott, Katherine M.; Hirschman, Karen B.; Rao, Aditi; Naylor, Mary D.

    2016-01-01

    Background Among older adults receiving long-term services and supports (LTSS), debilitating hospitalizations are a pervasive clinical and research problem. Multiple chronic conditions (MCC) are prevalent in LTSS recipients. However, the combination of MCC and diseases associated with hospitalizations of LTSS recipients is unclear. Objective The purpose of this analysis was to determine the association between classes of MCC in newly enrolled LTSS recipients and the number of hospitalizations over a one-year period following enrollment. Methods This report is based on secondary analysis of extant data from a longitudinal cohort study of 470 new recipients of LTSS, ages 60 years and older, receiving services in assisted living facilities, nursing homes, or through home- and community-based services. Using baseline chronic conditions reported in medical records, latent class analysis (LCA) was used to identify classes of MCC and posterior probabilities of membership in each class. Poisson regressions were used to estimate the relative ratio between posterior probabilities of class membership and number of hospitalizations during the 3-month period prior to the start of LTSS (baseline) and then every three months forward through 12 months. Results Three latent MCC-based classes named Cardiopulmonary, Cerebrovascular/Paralysis, and All Other Conditions were identified. The Cardiopulmonary class was associated with elevated numbers of hospitalization compared to the All Other Conditions class (relative ratio [RR] = 1.88, 95% CI [1.33, 2.65], p < .001). Conclusion Older LTSS recipients with a combination of MCCs that includes cardiopulmonary conditions have increased risk for hospitalization. PMID:27801713

  12. Relationship between the column density distribution and evolutionary class of molecular clouds as viewed by ATLASGAL

    NASA Astrophysics Data System (ADS)

    Abreu-Vicente, J.; Kainulainen, J.; Stutz, A.; Henning, Th.; Beuther, H.

    2015-09-01

    We present the first study of the relationship between the column density distribution of molecular clouds within nearby Galactic spiral arms and their evolutionary status as measured from their stellar content. We analyze a sample of 195 molecular clouds located at distances below 5.5 kpc, identified from the ATLASGAL 870 μm data. We define three evolutionary classes within this sample: starless clumps, star-forming clouds with associated young stellar objects, and clouds associated with H ii regions. We find that the N(H2) probability density functions (N-PDFs) of these three classes of objects are clearly different: the N-PDFs of starless clumps are narrowest and close to log-normal in shape, while star-forming clouds and H ii regions exhibit a power-law shape over a wide range of column densities and log-normal-like components only at low column densities. We use the N-PDFs to estimate the evolutionary time-scales of the three classes of objects based on a simple analytic model from literature. Finally, we show that the integral of the N-PDFs, the dense gas mass fraction, depends on the total mass of the regions as measured by ATLASGAL: more massive clouds contain greater relative amounts of dense gas across all evolutionary classes. Appendices are available in electronic form at http://www.aanda.org

  13. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. In general, MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a probability density function of the form \(c\,x^{K}(1-x)^{m}\), and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  14. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
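
    A short worked illustration of why the beta/binomial pairing makes the MMSE estimate linear: with a Beta(a, b) prior on the rate x and k jumps observed in n opportunities, the MMSE estimate is the posterior mean (the symbols a, b, n, k are ours, chosen for the illustration):

    \[ x \sim \mathrm{Beta}(a,b), \quad k \mid x \sim \mathrm{Binomial}(n,x) \;\Longrightarrow\; \hat{x}_{\mathrm{MMSE}} = \mathbb{E}[x \mid k] = \frac{a+k}{a+b+n} = \frac{a}{a+b+n} + \frac{k}{a+b+n}, \]

    i.e. an affine (linear) function of the observed count k, consistent with the linearity result stated above.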

  15. Can we estimate molluscan abundance and biomass on the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.

    2017-11-01

    Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
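
    A small Monte Carlo sketch of the survey-bias question posed above: draw repeated random surveys of n samples from a patchy (negative-binomial) abundance field and count how often the survey mean falls outside 75-125% of the true mean. The field parameters and sample sizes are illustrative, not the paper's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    # Patchy abundance field: negative binomial with small dispersion k => strong clumping
    true_mean, k = 5.0, 0.15
    field = rng.negative_binomial(n=k, p=k / (k + true_mean), size=1_000_000)

    def availability_event_rate(n_samples, n_surveys=5_000):
        """Fraction of simulated surveys whose mean is <75% or >125% of the true field mean."""
        means = rng.choice(field, size=(n_surveys, n_samples)).mean(axis=1)
        low = np.mean(means < 0.75 * field.mean())
        high = np.mean(means > 1.25 * field.mean())
        return low, high

    for n in (4, 8, 15, 30):
        print(n, availability_event_rate(n))   # low-biased rate typically exceeds high-biased rate
    ```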

  16. MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.

    PubMed

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-21

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  17. MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes

    NASA Astrophysics Data System (ADS)

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-01

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  18. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral of the square-root PDF along the space direction, which yields a function of time only and can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  19. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew

    2009-03-01

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral of the square-root PDF along the space direction, which yields a function of time only and can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  20. A multi-source probabilistic hazard assessment of tephra dispersal in the Neapolitan area

    NASA Astrophysics Data System (ADS)

    Sandri, Laura; Costa, Antonio; Selva, Jacopo; Folch, Arnau; Macedonio, Giovanni; Tonini, Roberto

    2015-04-01

    In this study we present the results obtained from a long-term Probabilistic Hazard Assessment (PHA) of tephra dispersal in the Neapolitan area. Usual PHA for tephra dispersal needs the definition of eruptive scenarios (usually by grouping eruption sizes and possible vent positions into a limited number of classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA then results from combining simulations considering different volcanological and meteorological conditions through weights associated with their specific probability of occurrence. However, volcanological parameters (i.e., erupted mass, eruption column height, eruption duration, bulk granulometry, fraction of aggregates) typically encompass a wide range of values. Because of such natural variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. In the present study, we use a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with specific Probability Density Functions, and meteorological and volcanological input values are chosen by using a stratified sampling method. This procedure allows for quantifying hazard without relying on the definition of scenarios, thus avoiding potential biases introduced by selecting single representative scenarios. Embedding this procedure into the Bayesian Event Tree scheme enables quantification of the tephra fall PHA and its epistemic uncertainties. We have applied this scheme to analyze long-term tephra fall PHA from Vesuvius and Campi Flegrei, in a multi-source paradigm. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used for exploring different meteorological conditions. The results obtained show that PHA maps accounting for the whole natural variability are consistent with previous probability maps elaborated for Vesuvius and Campi Flegrei on the basis of single representative scenarios, but show significant differences. In particular, the area characterized by a 300 kg/m2-load exceedance probability larger than 5%, accounting for the whole range of variability (that is, from small violent strombolian to plinian eruptions), is similar to that displayed in the maps based on the medium magnitude reference eruption, but it is of a smaller extent. This is due to the relatively higher weight of the small magnitude eruptions considered in this study, but neglected in the reference scenario maps. On the other hand, in our new maps the area characterized by a 300 kg/m2-load exceedance probability larger than 1% is much larger than that of the medium magnitude reference eruption, due to the contribution of plinian eruptions at lower probabilities, again neglected in the reference scenario maps.

  1. Dynamical Correspondence in a Generalized Quantum Theory

    NASA Astrophysics Data System (ADS)

    Niestegge, Gerd

    2015-05-01

    In order to figure out why quantum physics needs the complex Hilbert space, many attempts have been made to distinguish the C*-algebras and von Neumann algebras in more general classes of abstractly defined Jordan algebras (JB- and JBW-algebras). One particularly important distinguishing property was identified by Alfsen and Shultz and is the existence of a dynamical correspondence. It reproduces the dual role of the selfadjoint operators as observables and generators of dynamical groups in quantum mechanics. In the paper, this concept is extended to another class of nonassociative algebras, arising from recent studies of the quantum logics with a conditional probability calculus and particularly of those that rule out third-order interference. The conditional probability calculus is a mathematical model of the Lüders-von Neumann quantum measurement process, and third-order interference is a property of the conditional probabilities which was discovered by Sorkin (Mod Phys Lett A 9:3119-3127, 1994) and which is ruled out by quantum mechanics. It is shown then that the postulates that a dynamical correspondence exists and that the square of any algebra element is positive still characterize, in the class considered, those algebras that emerge from the selfadjoint parts of C*-algebras equipped with the Jordan product. Within this class, the two postulates thus result in ordinary quantum mechanics using the complex Hilbert space or, vice versa, a genuine generalization of quantum theory must omit at least one of them.

  2. An Inverse Problem for a Class of Conditional Probability Measure-Dependent Evolution Equations

    PubMed Central

    Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.

    2016-01-01

    We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by Partial Differential Equation (PDE) models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach. PMID:28316360

  3. Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials

    NASA Astrophysics Data System (ADS)

    Malik, Pradeep; Swaminathan, A.

    2010-11-01

    In this work we consider certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have their weight function related to the probability density function of F distribution and are finite in number up to orthogonality. We generalize these polynomials for fractional order by considering the Riemann-Liouville type operator on these polynomials. Various properties like explicit representation in terms of hypergeometric functions, differential equations, recurrence relations are derived.

  4. Universality classes of fluctuation dynamics in hierarchical complex systems

    NASA Astrophysics Data System (ADS)

    Macêdo, A. M. S.; González, Iván R. Roa; Salazar, D. S. P.; Vasconcelos, G. L.

    2017-03-01

    A unified approach is proposed to describe the statistics of the short-time dynamics of multiscale complex systems. The probability density function of the relevant time series (signal) is represented as a statistical superposition of a large time-scale distribution weighted by the distribution of certain internal variables that characterize the slowly changing background. The dynamics of the background is formulated as a hierarchical stochastic model whose form is derived from simple physical constraints, which in turn restrict the dynamics to only two possible classes. The probability distributions of both the signal and the background have simple representations in terms of Meijer G functions. The two universality classes for the background dynamics manifest themselves in the signal distribution as two types of tails: power law and stretched exponential, respectively. A detailed analysis of empirical data from classical turbulence and financial markets shows excellent agreement with the theory.

  5. Linear Classifier with Reject Option for the Detection of Vocal Fold Paralysis and Vocal Fold Edema

    NASA Astrophysics Data System (ADS)

    Kotropoulos, Constantine; Arce, Gonzalo R.

    2009-12-01

    Two distinct two-class pattern recognition problems are studied, namely, the detection of male subjects who are diagnosed with vocal fold paralysis against male subjects who are diagnosed as normal and the detection of female subjects who are suffering from vocal fold edema against female subjects who do not suffer from any voice pathology. To do so, utterances of the sustained vowel "ah" are employed from the Massachusetts Eye and Ear Infirmary database of disordered speech. Linear prediction coefficients extracted from the aforementioned utterances are used as features. The receiver operating characteristic curve of the linear classifier, that stems from the Bayes classifier when Gaussian class conditional probability density functions with equal covariance matrices are assumed, is derived. The optimal operating point of the linear classifier is specified with and without reject option. First results using utterances of the "rainbow passage" are also reported for completeness. The reject option is shown to yield statistically significant improvements in the accuracy of detecting the voice pathologies under study.
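
    A minimal sketch of the kind of rule described above: a pooled-covariance (equal-covariance Gaussian) linear discriminant with a simple reject option that abstains near the decision boundary. The synthetic two-class data, the margin values, and the helper names (classify, reject_margin) are illustrative assumptions, not the study's features or operating points.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-class data sharing one covariance matrix (the assumption behind the linear rule).
n = 500
cov = np.array([[1.0, 0.3], [0.3, 1.0]])
X0 = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # stand-in for the "normal" class
X1 = rng.multivariate_normal([1.5, 1.0], cov, size=n)   # stand-in for the "pathological" class

# The Bayes rule under equal covariances and equal priors reduces to a linear discriminant.
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = 0.5 * (np.cov(X0.T) + np.cov(X1.T))                 # pooled covariance estimate
w = np.linalg.solve(Sw, mu1 - mu0)                       # discriminant direction
b = -0.5 * (mu0 + mu1) @ w                               # bias for equal priors

def classify(x, reject_margin=0.0):
    """Return 1, 0, or None (reject) from the signed discriminant score."""
    score = x @ w + b
    if abs(score) < reject_margin:
        return None                                      # too close to the boundary: abstain
    return int(score > 0)

X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]
for margin in (0.0, 0.5, 1.0):
    preds = [classify(x, margin) for x in X]
    kept = np.array([p is not None for p in preds])
    acc = np.mean(np.array(preds, dtype=object)[kept].astype(int) == y[kept])
    print(f"margin={margin:.1f}  rejected={np.mean(~kept):.2f}  accuracy on accepted={acc:.3f}")
```

    Raising the reject margin trades coverage for accuracy on the accepted cases, which is the effect a reject option is meant to capture.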

  6. Occupation times and ergodicity breaking in biased continuous time random walks

    NASA Astrophysics Data System (ADS)

    Bel, Golan; Barkai, Eli

    2005-12-01

    Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
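
    The U- and W-shaped occupation-time statistics described above can be illustrated with a toy two-site CTRW whose sojourn times are Pareto distributed with infinite mean; the sketch below histograms the fraction of the observation window a particle spends on one site. The exponent, window length, and particle count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

alpha = 0.5        # sojourn-time exponent; alpha < 1 means infinite mean waiting time
T_obs = 1e5        # observation window
n_particles = 1000

def occupation_fraction():
    """Fraction of the observation window spent on site A of a two-site walk."""
    t, t_on_A = 0.0, 0.0
    site = rng.integers(2)                 # start on A (0) or B (1) at random
    while t < T_obs:
        wait = 1.0 + rng.pareto(alpha)     # heavy-tailed sojourn time, support [1, inf)
        wait = min(wait, T_obs - t)        # truncate the final sojourn at the window end
        if site == 0:
            t_on_A += wait
        t += wait
        site = 1 - site                    # hop to the other site (symmetric walk here)
    return t_on_A / T_obs

fracs = np.array([occupation_fraction() for _ in range(n_particles)])
dens, edges = np.histogram(fracs, bins=10, range=(0.0, 1.0), density=True)
for lo, hi, d in zip(edges[:-1], edges[1:], dens):
    print(f"{lo:.1f}-{hi:.1f}  {'#' * int(10 * d)}")   # U shape: mass piles up near 0 and 1
```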

  7. Generalised Sandpile Dynamics on Artificial and Real-World Directed Networks

    PubMed Central

    Zachariou, Nicky; Expert, Paul; Takayasu, Misako; Christensen, Kim

    2015-01-01

    The main finding of this paper is a novel avalanche-size exponent τ ≈ 1.87 when the generalised sandpile dynamics evolves on the real-world Japanese inter-firm network. The topology of this network is non-layered and directed, displaying the typical bow tie structure found in real-world directed networks, with cycles and triangles. We show that one can move from a strictly layered regular lattice to a more fluid structure of the inter-firm network in a few simple steps. Relaxing the regular lattice structure by introducing an interlayer distribution for the interactions, forces the scaling exponent of the avalanche-size probability density function τ out of the two-dimensional directed sandpile universality class τ = 4/3, into the mean field universality class τ = 3/2. Numerical investigation shows that these two classes are the only that exist on the directed sandpile, regardless of the underlying topology, as long as it is strictly layered. Randomly adding a small proportion of links connecting non adjacent layers in an otherwise layered network takes the system out of the mean field regime to produce non-trivial avalanche-size probability density function. Although these do not display proper scaling, they closely reproduce the behaviour observed on the Japanese inter-firm network. PMID:26606143

  8. Generalised Sandpile Dynamics on Artificial and Real-World Directed Networks.

    PubMed

    Zachariou, Nicky; Expert, Paul; Takayasu, Misako; Christensen, Kim

    2015-01-01

    The main finding of this paper is a novel avalanche-size exponent τ ≈ 1.87 when the generalised sandpile dynamics evolves on the real-world Japanese inter-firm network. The topology of this network is non-layered and directed, displaying the typical bow tie structure found in real-world directed networks, with cycles and triangles. We show that one can move from a strictly layered regular lattice to a more fluid structure of the inter-firm network in a few simple steps. Relaxing the regular lattice structure by introducing an interlayer distribution for the interactions, forces the scaling exponent of the avalanche-size probability density function τ out of the two-dimensional directed sandpile universality class τ = 4/3, into the mean field universality class τ = 3/2. Numerical investigation shows that these two classes are the only that exist on the directed sandpile, regardless of the underlying topology, as long as it is strictly layered. Randomly adding a small proportion of links connecting non adjacent layers in an otherwise layered network takes the system out of the mean field regime to produce non-trivial avalanche-size probability density function. Although these do not display proper scaling, they closely reproduce the behaviour observed on the Japanese inter-firm network.

  9. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  10. Nematode Damage Functions: The Problems of Experimental and Sampling Error

    PubMed Central

    Ferris, H.

    1984-01-01

    The development and use of pest damage functions involves measurement and experimental errors associated with cultural, environmental, and distributional factors. Damage predictions are more valuable if considered with associated probability. Collapsing population densities into a geometric series of population classes allows a pseudo-replication removal of experimental and sampling error in damage function development. Recognition of the nature of sampling error for aggregated populations allows assessment of probability associated with the population estimate. The product of the probabilities incorporated in the damage function and in the population estimate provides a basis for risk analysis of the yield loss prediction and the ensuing management decision. PMID:19295865

  11. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.

  12. A Repeated Trajectory Class Model for Intensive Longitudinal Categorical Outcome

    PubMed Central

    Lin, Haiqun; Han, Ling; Peduzzi, Peter N.; Murphy, Terrence E.; Gill, Thomas M.; Allore, Heather G.

    2014-01-01

    This paper presents a novel repeated latent class model for a longitudinal response that is frequently measured as in our prospective study of older adults with monthly data on activities of daily living (ADL) for more than ten years. The proposed method is especially useful when the longitudinal response is measured much more frequently than other relevant covariates. The repeated trajectory classes represent distinct temporal patterns of the longitudinal response wherein an individual’s membership in the trajectory classes may renew or change over time. Within a trajectory class, the longitudinal response is modeled by a class-specific generalized linear mixed model. Effectively, an individual may remain in a trajectory class or switch to another as the class membership predictors are updated periodically over time. The identification of a common set of trajectory classes allows changes among the temporal patterns to be distinguished from local fluctuations in the response. An informative event such as death is jointly modeled by class-specific probability of the event through shared random effects. We do not impose the conditional independence assumption given the classes. The method is illustrated by analyzing the change over time in ADL trajectory class among 754 older adults with 70500 person-months of follow-up in the Precipitating Events Project. We also investigate the impact of jointly modeling the class-specific probability of the event on the parameter estimates in a simulation study. The primary contribution of our paper is the periodic updating of trajectory classes for a longitudinal categorical response without assuming conditional independence. PMID:24519416

  13. Identification of cloud fields by the nonparametric algorithm of pattern recognition from normalized video data recorded with the AVHRR instrument

    NASA Astrophysics Data System (ADS)

    Protasov, Konstantin T.; Pushkareva, Tatyana Y.; Artamonov, Evgeny S.

    2002-02-01

    The problem of cloud field recognition from the NOAA satellite data is urgent not only for solving meteorological problems but also for resource-ecological monitoring of the Earth's underlying surface associated with the detection of thunderstorm clouds, estimation of the liquid water content of clouds and the moisture of the soil, the degree of fire hazard, etc. To solve these problems, we used the AVHRR/NOAA video data that regularly displayed the situation in the territory. The complexity and extremely nonstationary character of the problems to be solved call for the use of information from all spectral channels, the mathematical apparatus of testing statistical hypotheses, and methods of pattern recognition and identification of the informative parameters. For a class of detection and pattern recognition problems, the average risk functional is a natural criterion for the quality and the information content of the synthesized decision rules. In this case, to solve efficiently the problem of identifying cloud field types, the informative parameters must be determined by minimization of this functional. Since the conditional probability density functions, representing mathematical models of stochastic patterns, are unknown, the problem of nonparametric reconstruction of distributions from the learning samples arises. To this end, we used nonparametric estimates of distributions with the modified Epanechnikov kernel. The unknown parameters of these distributions were determined by minimization of the risk functional, which for the learning sample was substituted by the empirical risk. After the conditional probability density functions had been reconstructed for the examined hypotheses, a cloudiness type was identified using the Bayes decision rule.
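
    A hedged sketch of the general recipe the abstract describes (not the authors' modified kernel, risk-minimized bandwidth, or AVHRR features): class-conditional densities estimated with an Epanechnikov product kernel and a Bayes decision rule applied to the estimates. The two synthetic "cloud field" classes, the bandwidth h, and the equal priors are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def epanechnikov_kde(x_query, samples, h):
    """Product Epanechnikov kernel density estimate at a single query point."""
    u = (x_query[None, :] - samples) / h                       # (n_samples, n_dims)
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    return np.mean(np.prod(k, axis=1)) / h ** samples.shape[1]

# Two illustrative "cloud type" classes in a two-channel feature space.
A = rng.normal([0.0, 0.0], 0.7, size=(300, 2))
B = rng.normal([1.8, 1.2], 0.9, size=(300, 2))
priors = {"A": 0.5, "B": 0.5}
h = 0.5          # bandwidth; the paper tunes such parameters by risk minimization

def bayes_classify(x):
    """Assign the class with the largest prior-weighted density estimate."""
    pA = priors["A"] * epanechnikov_kde(x, A, h)
    pB = priors["B"] * epanechnikov_kde(x, B, h)
    return "A" if pA >= pB else "B"

test_pixels = np.array([[0.1, -0.2], [1.5, 1.0], [0.9, 0.6]])
print([bayes_classify(x) for x in test_pixels])
```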

  14. Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density

    DOE PAGES

    Smallwood, David O.

    1997-01-01

    The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and kurtosis using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
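
    One of the reviewed ideas, the zero-memory nonlinear (ZMNL) marginal transformation, can be sketched as follows: a spectrally shaped Gaussian series is pushed through the Gaussian CDF and then the inverse CDF of the target distribution, so the output matches the chosen non-Gaussian marginal pdf. The exponential target, the Butterworth shaping filter, and the omission of any spectral-correction iteration are illustrative simplifications, not the paper's procedure.

```python
import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(4)

# 1. Gaussian realization with a prescribed auto-spectrum (low-pass shaped white noise here).
n = 2 ** 14
g = rng.standard_normal(n)
b, a = signal.butter(4, 0.2)              # illustrative spectral shaping filter
g = signal.lfilter(b, a, g)
g = (g - g.mean()) / g.std()

# 2. Zero-memory nonlinear mapping: Gaussian CDF -> target inverse CDF (rank preserving).
u = stats.norm.cdf(g)                     # approximately uniform marginals
x = stats.expon.ppf(u)                    # time history with an exponential marginal pdf

print("skewness:", stats.skew(x), "excess kurtosis:", stats.kurtosis(x))
# The memoryless transform distorts the auto-spectrum somewhat; iterative spectral
# correction (as in the methods the paper reviews) would be needed to restore a target PSD.
```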

  15. Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy

    NASA Technical Reports Server (NTRS)

    Ford, G. E.

    1986-01-01

    To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons for existing methods for the design of linear transformation for dimensionality reduction are presented. These methods include the discrete Karhunen Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), Thematic Mapper (TM)-Tasseled Cap Linear Transformation and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed. They are referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Version of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three dimensional feature space. It is shown experimentally as well that for the proposed methods, the classes with high weights have improvements in class conditional probability of error estimates as expected.
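
    The discrete Karhunen-Loeve step mentioned above amounts to an eigendecomposition of the band covariance matrix; the sketch below projects six correlated synthetic "bands" onto a three-dimensional feature space and reports the cumulative eigenvalue fraction. It does not reproduce the weighted MDA variants proposed in the paper, and the synthetic data are an assumption.

```python
import numpy as np

rng = np.random.default_rng(9)

# Illustrative six-band pixel data with strong inter-band correlation
# (a stand-in for the TM reflective bands, not real imagery).
n_pixels = 5000
latent = rng.standard_normal((n_pixels, 3))
mixing = rng.standard_normal((3, 6))
X = latent @ mixing + 0.05 * rng.standard_normal((n_pixels, 6))

# Discrete Karhunen-Loeve (principal component) transform.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc.T))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print("cumulative eigenvalue fraction:", np.round(np.cumsum(eigvals) / eigvals.sum(), 4))
Y = Xc @ eigvecs[:, :3]                  # six bands projected onto three KL features
print("reduced feature shape:", Y.shape)
```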

  16. Implementing Inquiry-Based Learning and Examining the Effects in Junior College Probability Lessons

    ERIC Educational Resources Information Center

    Chong, Jessie Siew Yin; Chong, Maureen Siew Fang; Shahrill, Masitah; Abdullah, Nor Azura

    2017-01-01

    This study examined how Year 12 students use their inquiry skills in solving conditional probability questions by means of Inquiry-Based Learning application. The participants consisted of 66 students of similar academic abilities in Mathematics, selected from three classes, along with their respective teachers. Observational rubric and lesson…

  17. Effects of environmental covariates and density on the catchability of fish populations and interpretation of catch per unit effort trends

    USGS Publications Warehouse

    Korman, Josh; Yard, Mike

    2017-01-01

    Article for outlet: Fisheries Research. Abstract: Quantifying temporal and spatial trends in abundance or relative abundance is required to evaluate effects of harvest and changes in habitat for exploited and endangered fish populations. In many cases, the proportion of the population or stock that is captured (catchability or capture probability) is unknown but is often assumed to be constant over space and time. We used data from a large-scale mark-recapture study to evaluate the extent of spatial and temporal variation, and the effects of fish density, fish size, and environmental covariates, on the capture probability of rainbow trout (Oncorhynchus mykiss) in the Colorado River, AZ. Estimates of capture probability for boat electrofishing varied 5-fold across five reaches, 2.8-fold across the range of fish densities that were encountered, 2.1-fold over 19 trips, and 1.6-fold over five fish size classes. Shoreline angle and turbidity were the best covariates explaining variation in capture probability across reaches and trips. Patterns in capture probability were driven by changes in gear efficiency and spatial aggregation, but the latter was more important. Failure to account for effects of fish density on capture probability when translating a historical catch per unit effort time series into a time series of abundance, led to 2.5-fold underestimation of the maximum extent of variation in abundance over the period of record, and resulted in unreliable estimates of relative change in critical years. Catch per unit effort surveys have utility for monitoring long-term trends in relative abundance, but are too imprecise and potentially biased to evaluate population response to habitat changes or to modest changes in fishing effort.

  18. Assessing Disease Class-Specific Diagnostic Ability: A Practical Adaptive Test Approach.

    ERIC Educational Resources Information Center

    Papa, Frank J.; Schumacker, Randall E.

    Measures of the robustness of disease class-specific diagnostic concepts could play a central role in training programs designed to assure the development of diagnostic competence. In the pilot study, the authors used disease/sign-symptom conditional probability estimates, Monte Carlo procedures, and artificial intelligence (AI) tools to create…

  19. Snag longevity in relation to wildfire and postfire salvage logging

    Treesearch

    Robin E. Russell; Victoria A. Saab; Jonathan G. Dudley; Jay J. Rotella

    2006-01-01

    Snags create nesting, foraging, and roosting habitat for a variety of wildlife species. Removal of snags through postfire salvage logging reduces the densities and size classes of snags remaining after wildfire. We determined important variables associated with annual persistence rates (the probability a snag remains standing from 1 year to the next) of large conifer...

  20. In search of the Hohenberg-Kohn theorem

    NASA Astrophysics Data System (ADS)

    Lammert, Paul E.

    2018-04-01

    The Hohenberg-Kohn theorem, a cornerstone of electronic density functional theory, concerns uniqueness of external potentials yielding given ground densities of an N-body system. The problem is rigorously explored in a universe of three-dimensional Kato-class potentials, with emphasis on trade-offs between conditions on the density and conditions on the potential sufficient to ensure uniqueness. Sufficient conditions range from none on potentials coupled with everywhere strict positivity of the density to none on the density coupled with something a little weaker than local 3N/2-power integrability of the potential on a connected full-measure set. A second theme is localizability, that is, the possibility of uniqueness over subsets of R3 under less stringent conditions.

  1. Identifying desertification risk areas using fuzzy membership and geospatial technique - A case study, Kota District, Rajasthan

    NASA Astrophysics Data System (ADS)

    Dasgupta, Arunima; Sastry, K. L. N.; Dhinwa, P. S.; Rathore, V. S.; Nathawat, M. S.

    2013-08-01

    Desertification risk assessment is important in order to take proper measures for its prevention. The present research intends to identify areas at risk of desertification along with their severity in terms of degradation in natural parameters. An integrated model combining fuzzy membership analysis, a fuzzy rule-based inference system and geospatial techniques was adopted, including five specific natural parameters, namely slope, soil pH, soil depth, soil texture and NDVI. Individual parameters were classified according to their deviation from the mean. The membership of each individual value in a certain class was derived using the normal probability density function of that class. Thus, if a single class of a single parameter has mean μ and standard deviation σ, the values falling beyond μ + 2σ and μ - 2σ do not represent that class but a transitional zone between two subsequent classes. These are the most important areas in terms of degradation, as they have the lowest probability of belonging to a certain class and hence the highest probability of being extended or narrowed into the next or previous class, respectively. Eventually, these are the values that can be most easily altered under exogenic influences and hence are identified as risk areas. The overall desertification risk is derived by incorporating the different risk severity of each parameter using a fuzzy rule-based inference system in a GIS environment. Multicriteria-based geostatistics are applied to locate the areas under different severities of desertification risk. The study revealed that in Kota, various anthropogenic pressures are accelerating land deterioration, coupled with natural erosive forces. The four major sources of desertification in Kota are gully and ravine erosion, inappropriate mining practices, growing urbanization and random deforestation.
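
    A small sketch of the membership idea described above, assuming illustrative NDVI values and class statistics (mu, sigma) rather than the Kota data: membership is the class-normal pdf normalized to its peak, and values beyond mu +/- 2*sigma are flagged as transitional (risk) values.

```python
import numpy as np
from scipy.stats import norm

# Illustrative NDVI observations and assumed statistics of one NDVI class.
ndvi = np.array([0.12, 0.35, 0.41, 0.58, 0.72, 0.05])
mu, sigma = 0.40, 0.12

membership = norm.pdf(ndvi, mu, sigma) / norm.pdf(mu, mu, sigma)   # peak-normalized membership
transitional = np.abs(ndvi - mu) > 2 * sigma                       # beyond mu +/- 2*sigma

for value, m, risk in zip(ndvi, membership, transitional):
    tag = "transitional / risk" if risk else "core of class"
    print(f"NDVI={value:.2f}  membership={m:.3f}  -> {tag}")
```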

  2. A Hybrid Semi-supervised Classification Scheme for Mining Multisource Geospatial Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju; Bhaduri, Budhendra L

    2011-01-01

    Supervised learning methods such as Maximum Likelihood (ML) are often used in land cover (thematic) classification of remote sensing imagery. The ML classifier relies exclusively on spectral characteristics of thematic classes whose statistical distributions (class conditional probability densities) are often overlapping. The spectral response distributions of thematic classes are dependent on many factors including elevation, soil types, and ecological zones. A second problem with statistical classifiers is the requirement of a large number of accurate training samples (10 to 30 × |dimensions|), which are often costly and time consuming to acquire over large geographic regions. With the increasing availability of geospatial databases, it is possible to exploit the knowledge derived from these ancillary datasets to improve classification accuracies even when the class distributions are highly overlapping. Likewise, newer semi-supervised techniques can be adopted to improve the parameter estimates of the statistical model by utilizing a large number of easily available unlabeled training samples. Unfortunately, there is no convenient multivariate statistical model that can be employed for multisource geospatial databases. In this paper we present a hybrid semi-supervised learning algorithm that effectively exploits freely available unlabeled training samples from multispectral remote sensing images and also incorporates ancillary geospatial databases. We have conducted several experiments on real datasets, and our new hybrid approach shows over 25 to 35% improvement in overall classification accuracy over conventional classification schemes.
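
    A hedged sketch of one common semi-supervised scheme consistent with the description above (not necessarily the authors' exact algorithm, and without the ancillary-data component): Gaussian class models initialized from a few labeled pixels and refined by expectation-maximization over many unlabeled pixels. The synthetic two-class spectral data and the fixed iteration count are assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)

# Illustrative two-class "spectral" data: a few labeled pixels, many unlabeled ones.
mA, mB = np.array([0.0, 0.0]), np.array([1.2, 1.0])
XA = rng.normal(mA, 0.8, (10, 2))                      # labeled class A
XB = rng.normal(mB, 0.8, (10, 2))                      # labeled class B
U = np.vstack([rng.normal(mA, 0.8, (500, 2)),
               rng.normal(mB, 0.8, (500, 2))])         # unlabeled pixels

# Initial maximum-likelihood estimates from the labeled pixels only.
means = [XA.mean(0), XB.mean(0)]
covs = [np.cov(XA.T), np.cov(XB.T)]
priors = [0.5, 0.5]
n_total = len(XA) + len(XB) + len(U)

for _ in range(20):                                    # EM refinement with unlabeled pixels
    # E-step: class responsibilities for the unlabeled pixels.
    dens = np.column_stack([p * multivariate_normal.pdf(U, m, c)
                            for p, m, c in zip(priors, means, covs)])
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: labeled pixels enter as hard assignments, unlabeled ones as soft weights.
    for k, Xk in enumerate((XA, XB)):
        w = np.concatenate([np.ones(len(Xk)), resp[:, k]])
        Z = np.vstack([Xk, U])
        means[k] = (w[:, None] * Z).sum(0) / w.sum()
        diff = Z - means[k]
        covs[k] = (w[:, None, None] * np.einsum('ni,nj->nij', diff, diff)).sum(0) / w.sum()
        priors[k] = w.sum() / n_total

print("estimated class means:", np.round(means[0], 2), np.round(means[1], 2))
```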

  3. Population ecology of the mallard: II. Breeding habitat conditions, size of the breeding populations, and production indices

    USGS Publications Warehouse

    Pospahala, Richard S.; Anderson, David R.; Henny, Charles J.

    1974-01-01

    This report, the second in a series on a comprehensive analysis of mallard population data, provides information on mallard breeding habitat, the size and distribution of breeding populations, and indices to production. The information in this report is primarily the result of large-scale aerial surveys conducted during May and July, 1955-73. The history of the conflict in resource utilization between agriculturalists and wildlife conservation interests in the primary waterfowl breeding grounds is reviewed. The numbers of ponds present during the breeding season and the midsummer period and the effects of precipitation and temperature on the number of ponds present are analyzed in detail. No significant cycles in precipitation were detected and it appears that precipitation is primarily influenced by substantial seasonal and random components. Annual estimates (1955-73) of the number of mallards in surveyed and unsurveyed breeding areas provided estimates of the size and geographic distribution of breeding mallards in North America. The estimated size of the mallard breeding population in North America has ranged from a high of 14.4 million in 1958 to a low of 7.1 million in 1965. Generally, the mallard breeding population began to decline after the 1958 peak until 1962, and remained below 10 million birds until 1970. The decline and subsequent low level of the mallard population between 1959 and 1969 generally coincided with a period of poor habitat conditions on the major breeding grounds. The density of mallards was highest in the Prairie-Parkland Area with an average of nearly 19.2 birds per square mile. The proportion of the continental mallard breeding population in the Prairie-Parkland Area ranged from 30% in 1962 to a high of 60% in 1956. The geographic distribution of breeding mallards throughout North America was significantly related to the number of May ponds in the Prairie-Parkland Area. Estimates of midsummer habitat conditions and indices to production from the July Production Survey were studied in detail. Several indices relating to production showed marked declines from west to east in the Prairie-Parkland Area, these are: (1) density of breeding mallards (per square mile and per May pond), (2) brood density (per square mile and per July pond), (3) average brood size (all species combined), and (4) brood survival from class II to class III. An index to late nesting and renesting efforts was highest during years when midsummer water conditions were good. Production rates of many ducks breeding in North America appear to be regulated by both density-dependent and density-independent factors. Spacing of birds in the Prairie-Parkland Area appeared to be a key factor in the density-dependent regulation of the population. The spacing mechanism, in conjunction with habitat conditions, influenced some birds to overfly the primary breeding grounds into less favorable habitats to the north and northwest where the production rate may be suppressed. The production rate of waterfowl in the Prairie Parkland Area seems to be independent of density (after emigration has taken place) because the production index appears to be a linear function of the number of breeding birds in the area. Similarly, the production rate of waterfowl in northern Saskatchewan and northern Manitoba appeared to be independent of density. Production indices in these northern areas appear to be a linear function of the size of the breeding population.
Thus, the density and distribution of breeding ducks is probably regulated through a spacing mechanism that is at least partially dependent on measurable environmental factors. The result is a density-dependent process operating to ultimately effect the production and production rate of breeding ducks on a continent-wide basis. Continental production, and therefore the size of the fall population, is probably partially regulated by the number of birds that are distributed north and northwest into environments less favorable for successful reproduction. Thus, spacing of the birds in the Prairie-Parkland Area and the movement of a fraction of the birds out of the prime breeding areas may be key factors in the density-dependent regulation of the total mallard population.

  4. Carrier Modulation Via Waveform Probability Density Function

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2006-01-01

    Beyond the classic modes of carrier modulation by varying amplitude (AM), phase (PM), or frequency (FM), we extend the modulation domain of an analog carrier signal to include a class of general modulations which are distinguished by their probability density function histogram. Separate waveform states are easily created by varying the pdf of the transmitted waveform. Individual waveform states are assignable as proxies for digital ones or zeros. At the receiver, these states are easily detected by accumulating sampled waveform statistics and performing periodic pattern matching, correlation, or statistical filtering. No fundamental physical laws are broken in the detection process. We show how a typical modulation scheme would work in the digital domain and suggest how to build an analog version. We propose that clever variations of the modulating waveform (and thus the histogram) can provide simple steganographic encoding.
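
    A toy illustration of pdf-keyed signaling as described above: two unit-variance, spectrally similar waveform states whose marginal pdfs differ (uniform vs Gaussian), detected segment by segment from a histogram statistic (excess kurtosis). The segment length, the kurtosis threshold, and the helper names are illustrative assumptions, not the authors' scheme.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
SEG = 2000   # samples per symbol

def modulate(bits):
    """Each bit selects the amplitude pdf of a unit-variance, noise-like segment."""
    segments = []
    for bit in bits:
        if bit == 0:
            segments.append(rng.uniform(-np.sqrt(3), np.sqrt(3), SEG))  # uniform pdf, variance 1
        else:
            segments.append(rng.standard_normal(SEG))                   # Gaussian pdf, variance 1
    return np.concatenate(segments)

def demodulate(x):
    """Recover bits from a statistical signature of each segment's amplitude histogram."""
    # Excess kurtosis separates the uniform (-1.2) and Gaussian (0.0) marginals.
    return [0 if stats.kurtosis(seg) < -0.6 else 1 for seg in x.reshape(-1, SEG)]

tx = [0, 1, 1, 0, 1, 0, 0, 1]
rx = demodulate(modulate(tx))
print(tx, rx, "bit errors:", sum(a != b for a, b in zip(tx, rx)))
```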

  5. Carrier Modulation Via Waveform Probability Density Function

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2004-01-01

    Beyond the classic modes of carrier modulation by varying amplitude (AM), phase (PM), or frequency (FM), we extend the modulation domain of an analog carrier signal to include a class of general modulations which are distinguished by their probability density function histogram. Separate waveform states are easily created by varying the pdf of the transmitted waveform. Individual waveform states are assignable as proxies for digital ONEs or ZEROs. At the receiver, these states are easily detected by accumulating sampled waveform statistics and performing periodic pattern matching, correlation, or statistical filtering. No fundamental natural laws are broken in the detection process. We show how a typical modulation scheme would work in the digital domain and suggest how to build an analog version. We propose that clever variations of the modulating waveform (and thus the histogram) can provide simple steganographic encoding.

  6. 36 CFR 294.21 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Has a geographic feature that aids in creating an effective fire break, such as a road or a ridge top; or (3) Is in condition class 3 as defined by HFRA. Fire hazard and risk: The fuel conditions on the landscape. Fire occurrence: The probability of wildfire ignition based on historic fire occurrence records...

  7. 36 CFR 294.21 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Has a geographic feature that aids in creating an effective fire break, such as a road or a ridge top; or (3) Is in condition class 3 as defined by HFRA. Fire hazard and risk: The fuel conditions on the landscape. Fire occurrence: The probability of wildfire ignition based on historic fire occurrence records...

  8. 36 CFR 294.21 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Has a geographic feature that aids in creating an effective fire break, such as a road or a ridge top; or (3) Is in condition class 3 as defined by HFRA. Fire hazard and risk: The fuel conditions on the landscape. Fire occurrence: The probability of wildfire ignition based on historic fire occurrence records...

  9. Persistence and extinction for a class of stochastic SIS epidemic models with nonlinear incidence rate

    NASA Astrophysics Data System (ADS)

    Teng, Zhidong; Wang, Lei

    2016-06-01

    In this paper, a class of stochastic SIS epidemic models with nonlinear incidence rate is investigated. It is shown that the extinction and persistence of the disease in probability are determined by a threshold value R˜0. That is, if R˜0 < 1 and an additional condition holds, then the disease dies out, and if R˜0 > 1 then the disease is weakly permanent with probability one. To obtain the permanence in the mean of the disease, a new quantity R̂0 is introduced, and it is proved that if R̂0 > 1 the disease is permanent in the mean with probability one. Furthermore, numerical simulations are presented to illustrate some open problems given in Remarks 1-3 and 5 of this paper.
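
    A rough Euler-Maruyama sketch of a stochastic SIS model with a saturated (nonlinear) incidence term, in the spirit of the class studied above; the exact incidence form, noise structure, and threshold expression in the paper may differ, and all parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative parameters (not taken from the paper).
N, beta, mu, gamma, a, sigma = 1.0, 0.8, 0.1, 0.2, 0.5, 0.3
dt, steps = 0.01, 50_000

def simulate(I0=0.1):
    """Euler-Maruyama path of the infected fraction I(t)."""
    I = I0
    path = np.empty(steps)
    for k in range(steps):
        S = N - I
        incidence = beta * S * I / (1.0 + a * I)       # saturated incidence rate
        drift = incidence - (mu + gamma) * I
        diffusion = sigma * S * I / (1.0 + a * I)      # noise perturbing the incidence term
        I += drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
        I = min(max(I, 0.0), N)                        # keep I within [0, N]
        path[k] = I
    return path

path = simulate()
print("time-average infected fraction (second half):", path[steps // 2:].mean())
# A threshold analogous to the paper's R~0 compares beta*N/(mu + gamma) against a
# noise correction of order sigma**2; here persistence is simply inspected numerically.
```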

  10. Temporal patterns of apparent leg band retention in North American geese

    USGS Publications Warehouse

    Zimmerman, Guthrie S.; Kendall, William L.; Moser, Timothy J.; White, Gary C.; Doherty, Paul F.

    2009-01-01

    An important assumption of mark-recapture studies is that individuals retain their marks, which has not been assessed for goose reward bands. We estimated aluminum leg band retention probabilities and modeled how band retention varied with band type (standard vs. reward band), band age (1-40 months), and goose characteristics (species and size class) for Canada (Branta canadensis), cackling (Branta hutchinsii), snow (Chen caerulescens), and Ross's (Chen rossii) geese that field coordinators double-leg banded during a North American goose reward band study (N = 40,999 individuals from 15 populations). We conditioned all models in this analysis on geese that were encountered with >1 leg band still attached (n = 5,747 dead recoveries and live recaptures). Retention probabilities for standard aluminum leg bands were high (estimate of 0.9995, SE = 0.001) and constant over 1-40 months. In contrast, apparent retention probabilities for reward bands demonstrated an interactive relationship between 5 size and species classes (small cackling, medium Canada, large Canada, snow, and Ross's geese). In addition, apparent retention probabilities for each of the 5 classes varied quadratically with time, being lower immediately after banding and at older age classes. The differential retention probabilities among band type (reward vs. standard) that we observed suggests that 1) models estimating reporting probability should incorporate differential band loss if it is nontrivial, 2) goose managers should consider the costs and benefits of double-banding geese on an operational basis, and 3) the United States Geological Survey Bird Banding Lab should modify protocols for receiving recovery data.

  11. Log-Linear Models for Gene Association

    PubMed Central

    Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.

    2009-01-01

    We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032

  12. How to Calculate Renyi Entropy from Heart Rate Variability, and Why it Matters for Detecting Cardiac Autonomic Neuropathy.

    PubMed

    Cornforth, David J; Tarvainen, Mika P; Jelinek, Herbert F

    2014-01-01

    Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that is either incapable of discriminating CAN from controls, or that it provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN.
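
    The two estimation routes compared above can be sketched as follows: Renyi entropy computed from a histogram of single RR intervals versus from a joint histogram of consecutive RR-interval pairs (a simple stand-in for the sequence-based density method; the paper's density estimator differs in detail). The synthetic RR series, bin counts, and orders alpha are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha for a discrete probability vector p."""
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))                 # Shannon limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

# Illustrative RR-interval series (seconds); real data would come from a three-lead ECG.
rr = 0.85 + 0.05 * rng.standard_normal(1000)

# Histogram method on single RR intervals.
counts, _ = np.histogram(rr, bins=30)
p_single = counts / counts.sum()

# Sequence method: joint histogram over consecutive (RR_i, RR_{i+1}) pairs.
counts2, _, _ = np.histogram2d(rr[:-1], rr[1:], bins=30)
p_pairs = (counts2 / counts2.sum()).ravel()

for alpha in (0.5, 2.0, 5.0):
    print(f"alpha={alpha}: single-interval H={renyi_entropy(p_single, alpha):.3f}, "
          f"pair-based H={renyi_entropy(p_pairs, alpha):.3f}")
```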

  13. How to Calculate Renyi Entropy from Heart Rate Variability, and Why it Matters for Detecting Cardiac Autonomic Neuropathy

    PubMed Central

    Cornforth, David J.;  Tarvainen, Mika P.; Jelinek, Herbert F.

    2014-01-01

    Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that is either incapable of discriminating CAN from controls, or that it provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN. PMID:25250311

  14. Detection and classification of interstitial lung diseases and emphysema using a joint morphological-fuzzy approach

    NASA Astrophysics Data System (ADS)

    Chang Chien, Kuang-Che; Fetita, Catalin; Brillet, Pierre-Yves; Prêteux, Françoise; Chang, Ruey-Feng

    2009-02-01

    Multi-detector computed tomography (MDCT) has high accuracy and specificity on volumetrically capturing serial images of the lung. It increases the capability of computerized classification for lung tissue in medical research. This paper proposes a three-dimensional (3D) automated approach based on mathematical morphology and fuzzy logic for quantifying and classifying interstitial lung diseases (ILDs) and emphysema. The proposed methodology is composed of several stages: (1) an image multi-resolution decomposition scheme based on a 3D morphological filter is used to detect and analyze the different density patterns of the lung texture. Then, (2) for each pattern in the multi-resolution decomposition, six features are computed, for which fuzzy membership functions define a probability of association with a pathology class. Finally, (3) for each pathology class, the probabilities are combined up according to the weight assigned to each membership function and two threshold values are used to decide the final class of the pattern. The proposed approach was tested on 10 MDCT cases and the classification accuracy was: emphysema: 95%, fibrosis/honeycombing: 84% and ground glass: 97%.

  15. Conditional Density Estimation with HMM Based Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang

    Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models make the latent assumption that the probability density is a Gaussian distribution, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the Input-Output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each node, this model can be applied not only to regression but also to classification. We applied this model to denoise ECG data. The proposed method has the potential to be applied to other time series, such as stock market return predictions.

  16. A theory of stationarity and asymptotic approach in dissipative systems

    NASA Astrophysics Data System (ADS)

    Rubel, Michael Thomas

    2007-05-01

    The approximate dynamics of many physical phenomena, including turbulence, can be represented by dissipative systems of ordinary differential equations. One often turns to numerical integration to solve them. There is an incompatibility, however, between the answers it can produce (i.e., specific solution trajectories) and the questions one might wish to ask (e.g., what behavior would be typical in the laboratory?) To determine its outcome, numerical integration requires more detailed initial conditions than a laboratory could normally provide. In place of initial conditions, experiments stipulate how tests should be carried out: only under statistically stationary conditions, for example, or only during asymptotic approach to a final state. Stipulations such as these, rather than initial conditions, are what determine outcomes in the laboratory. This theoretical study examines whether the points of view can be reconciled: What is the relationship between one's statistical stipulations for how an experiment should be carried out--stationarity or asymptotic approach--and the expected results? How might those results be determined without invoking initial conditions explicitly? To answer these questions, stationarity and asymptotic approach conditions are analyzed in detail. Each condition is treated as a statistical constraint on the system--a restriction on the probability density of states that might be occupied when measurements take place. For stationarity, this reasoning leads to a singular, invariant probability density which is already familiar from dynamical systems theory. For asymptotic approach, it leads to a new, more regular probability density field. A conjecture regarding what appears to be a limit relationship between the two densities is presented. By making use of the new probability densities, one can derive output statistics directly, avoiding the need to create or manipulate initial data, and thereby avoiding the conceptual incompatibility mentioned above. This approach also provides a clean way to derive reduced-order models, complete with local and global error estimates, as well as a way to compare existing reduced-order models objectively. The new approach is explored in the context of five separate test problems: a trivial one-dimensional linear system, a damped unforced linear oscillator in two dimensions, the isothermal Rayleigh-Plesset equation, Lorenz's equations, and the Stokes limit of Burgers' equation in one space dimension. In each case, various output statistics are deduced without recourse to initial conditions. Further, reduced-order models are constructed for asymptotic approach of the damped unforced linear oscillator, the isothermal Rayleigh-Plesset system, and Lorenz's equations, and for stationarity of Lorenz's equations.

  17. Class-conditional feature modeling for ignitable liquid classification with substantial substrate contribution in fire debris analysis.

    PubMed

    Lopatka, Martin; Sigman, Michael E; Sjerps, Marjan J; Williams, Mary R; Vivó-Truyols, Gabriel

    2015-07-01

    Forensic chemical analysis of fire debris addresses the question of whether ignitable liquid residue is present in a sample and, if so, what type. Evidence evaluation regarding this question is complicated by interference from pyrolysis products of the substrate materials present in a fire. A method is developed to derive a set of class-conditional features for the evaluation of such complex samples. The use of a forensic reference collection allows characterization of the variation in complex mixtures of substrate materials and ignitable liquids even when the dominant feature is not specific to an ignitable liquid. Making use of a novel method for data imputation under complex mixing conditions, a distribution is modeled for the variation between pairs of samples containing similar ignitable liquid residues. Examining the covariance of variables within the different classes allows different weights to be placed on features more important in discerning the presence of a particular ignitable liquid residue. Performance of the method is evaluated using a database of total ion spectrum (TIS) measurements of ignitable liquid and fire debris samples. These measurements include 119 nominal masses measured by GC-MS and averaged across a chromatographic profile. Ignitable liquids are labeled using the American Society for Testing and Materials (ASTM) E1618 standard class definitions. Statistical analysis is performed in the class-conditional feature space wherein new forensic traces are represented based on their likeness to known samples contained in a forensic reference collection. The demonstrated method uses forensic reference data as the basis of probabilistic statements concerning the likelihood of the obtained analytical results given the presence of ignitable liquid residue of each of the ASTM classes (including a substrate only class). When prior probabilities of these classes can be assumed, these likelihoods can be connected to class probabilities. In order to compare the performance of this method to previous work, a uniform prior was assumed, resulting in an 81% accuracy for an independent test of 129 real burn samples. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine-grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right-hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.

  19. Analyzing remote sensing geobotanical trends in Quetico Provincial Park, Ontario, Canada, using digital elevation data

    NASA Technical Reports Server (NTRS)

    Warner, Timothy A.; Campagna, David J.; Levandowski, Don W.; Cetin, Haluk; Evans, Carla S.

    1991-01-01

    A 10 x 13-km area in Quetico Provincial Park, Canada has been studied using a digital elevation model to separate different drainage classes and to examine the influence of site factors and lithology on vegetation. Landsat Thematic Mapper data have been classified into six forest classes of varying deciduous-coniferous cover through nPDF, a procedure based on probability density functions. It is shown that forests growing on mafic lithologies are enriched in deciduous species, compared to those growing on granites. Of the forest classes found on mafics, the highest coniferous component was on north facing slopes, and the highest deciduous component on south facing slopes. Granites showed no substantial variation between site classes. The digital elevation derived site data is considered to be an important tool in geobotanical investigations.

  20. Uncertainty, imprecision, and the precautionary principle in climate change assessment.

    PubMed

    Borsuk, M E; Tomassini, L

    2005-01-01

    Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.
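
    A toy sketch of the minimum-upper-expected-cost rule discussed above: each candidate emissions level is scored by its worst-case expected cost over a small set of candidate probability measures. The cost terms and the candidate distributions are illustrative stand-ins, not the paper's climate and economics models.

```python
import numpy as np
from scipy import stats

emissions = np.linspace(0.0, 1.0, 21)            # fraction of baseline emissions
sensitivity_set = [stats.norm(3.0, 1.0),         # candidate climate-sensitivity
                   stats.norm(3.5, 1.5),         # distributions (the "class")
                   stats.lognorm(s=0.4, scale=3.0)]

def expected_cost(e, dist, n=20000, seed=0):
    s = dist.rvs(size=n, random_state=seed)
    damage = 0.02 * (e * s) ** 2                  # toy damage cost, rising with warming
    abatement = 0.05 * (1.0 - e) ** 2             # toy abatement cost of cutting emissions
    return float(np.mean(damage + abatement))

# upper expectation = maximum expected cost over the set of measures
upper = np.array([max(expected_cost(e, d) for d in sensitivity_set) for e in emissions])
print("minimum upper expected cost at emissions level:", emissions[upper.argmin()])
```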

  1. Equivalence principle for quantum systems: dephasing and phase shift of free-falling particles

    NASA Astrophysics Data System (ADS)

    Anastopoulos, C.; Hu, B. L.

    2018-02-01

    We ask the question of how the (weak) equivalence principle established in classical gravitational physics should be reformulated and interpreted for massive quantum objects that may also have internal degrees of freedom (dof). This inquiry is necessary because even elementary concepts like a classical trajectory are not well defined in quantum physics—trajectories originating from quantum histories become viable entities only under stringent decoherence conditions. From this investigation we posit two logically and operationally distinct statements of the equivalence principle for quantum systems. Version A: the probability distribution of position for a free-falling particle is the same as the probability distribution of a free particle, modulo a mass-independent shift of its mean. Version B: any two particles with the same velocity wave-function behave identically in free fall, irrespective of their masses. Both statements apply to all quantum states, including those without a classical correspondence, and also for composite particles with quantum internal dof. We also investigate the consequences of the interaction between internal and external dof induced by free fall. For a class of initial states, we find dephasing occurs for the translational dof, namely, the suppression of the off-diagonal terms of the density matrix, in the position basis. We also find a gravitational phase shift in the reduced density matrix of the internal dof that does not depend on the particle’s mass. For classical states, the phase shift has a natural classical interpretation in terms of gravitational red-shift and special relativistic time-dilation.

  2. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

  3. Statistical distribution of the vacuum energy density in racetrack Kähler uplift models in string theory

    NASA Astrophysics Data System (ADS)

    Sumitomo, Yoske; Tye, S.-H. Henry; Wong, Sam S. C.

    2013-07-01

    We study a racetrack model in the presence of the leading α'-correction in flux compactification in Type IIB string theory, for the purpose of getting conceivable de-Sitter vacua in the large compactified volume approximation. Unlike the Kähler Uplift model studied previously, the α'-correction is more controllable for the meta-stable de-Sitter vacua in the racetrack case since the constraint on the compactified volume size is very much relaxed. We find that the vacuum energy density Λ for de-Sitter vacua approaches zero exponentially as the volume grows. We also analyze properties of the probability distribution of Λ in this class of models. As in other cases studied earlier, the probability distribution again peaks sharply at Λ = 0. We also study the Racetrack Kähler Uplift model in the Swiss-Cheese type model.

  4. Environmental Change and Disease Dynamics: Effects of Intensive Forest Management on Puumala Hantavirus Infection in Boreal Bank Vole Populations

    PubMed Central

    Voutilainen, Liina; Savola, Sakeri; Kallio, Eva Riikka; Laakkonen, Juha; Vaheri, Antti; Vapalahti, Olli; Henttonen, Heikki

    2012-01-01

    Intensive management of Fennoscandian forests has led to a mosaic of woodlands in different stages of maturity. The main rodent host of the zoonotic Puumala hantavirus (PUUV) is the bank vole (Myodes glareolus), a species that can be found in all woodlands and especially mature forests. We investigated the influence of forest age structure on PUUV infection dynamics in bank voles. Over four years, we trapped small mammals twice a year in a forest network of different succession stages in Northern Finland. Our study sites represented four forest age classes from young (4 to 30 years) to mature (over 100 years) forests. We show that PUUV-infected bank voles occurred commonly in all forest age classes, but peaked in mature forests. The probability of an individual bank vole to be PUUV infected was positively related to concurrent host population density. However, when population density was controlled for, a relatively higher infection rate was observed in voles trapped in younger forests. Furthermore, we found evidence of a “dilution effect” in that the infection probability was negatively associated with the simultaneous density of other small mammals during the breeding season. Our results suggest that younger forests created by intensive management can reduce hantaviral load in the environment, but PUUV is common in woodlands of all ages. As such, the Fennoscandian forest landscape represents a significant reservoir and source of hantaviral infection in humans. PMID:22745755

  5. Physiological condition of autumn-banded mallards and its relationship to hunting vulnerability

    USGS Publications Warehouse

    Hepp, G.R.; Blohm, R.J.; Reynolds, R.E.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    An important topic of waterfowl ecology concerns the relationship between the physiological condition of ducks during the nonbreeding season and fitness, i.e., survival and future reproductive success. We investigated this subject using direct band recovery records of mallards (Anas platyrhynchos) banded in autumn (1 Oct-15 Dec) 1981-83 in the Mississippi Alluvial Valley (MAV) [USA]. A condition index, weight (g)/wing length (mm), was calculated for each duck, and we tested whether the condition of mallards at the time of banding was related to their probability of recovery during the hunting season. Over the 3 years, 5,610 mallards were banded and there were 234 direct recoveries. Binary regression models were used to test the relationship between recovery probability and condition. Likelihood-ratio tests were conducted to determine the most suitable model. For mallards banded in autumn, there was a negative relationship between physical condition and the probability of recovery. Mallards in poor condition at the time of banding had a greater probability of being recovered during the hunting season. In general, this was true for all age and sex classes; however, the strongest relationship occurred for adult males.

  6. Maximum likelihood density modification by pattern recognition of structural motifs

    DOEpatents

    Terwilliger, Thomas C.

    2004-04-13

    An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log likelihood of a set of structure factors {F_h} using a local log-likelihood function of the form ln[p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region, p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x, and p(ρ(x)|H) is the probability distribution for the electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.
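
    A minimal numeric sketch of the mixture inside the reconstructed log-likelihood above, with stand-in Gaussian conditional densities for the protein, solvent, and motif hypotheses; in the patent these densities are calibrated from the map itself, so the specific distributions and the function name here are assumptions.

```python
import numpy as np
from scipy.stats import norm

def local_log_likelihood(rho_x, p_prot, p_solv, p_h,
                         prot_dist=norm(0.4, 0.15),
                         solv_dist=norm(0.0, 0.05),
                         motif_dist=norm(0.6, 0.10)):
    """log[ p(rho|PROT)p_PROT + p(rho|SOLV)p_SOLV + p(rho|H)p_H ] at one grid point.

    The three conditional densities are stand-in Gaussians; in practice they would
    be calibrated distributions of electron density for each region type.
    """
    mix = (prot_dist.pdf(rho_x) * p_prot
           + solv_dist.pdf(rho_x) * p_solv
           + motif_dist.pdf(rho_x) * p_h)
    return np.log(mix)

# example: a point that is 70% likely protein, 25% solvent, 5% near a known motif
print(local_log_likelihood(rho_x=0.35, p_prot=0.70, p_solv=0.25, p_h=0.05))
```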

  7. Amphibian and reptile road-kills on tertiary roads in relation to landscape structure: using a citizen science approach with open-access land cover data.

    PubMed

    Heigl, Florian; Horvath, Kathrin; Laaha, Gregor; Zaller, Johann G

    2017-06-26

    Amphibians and reptiles are among the most endangered vertebrate species worldwide. However, little is known about how they are affected by road-kills on tertiary roads and whether the surrounding landscape structure can explain road-kill patterns. The aim of our study was to examine the applicability of open-access remote sensing data for a large-scale citizen science approach to describe spatial patterns of road-killed amphibians and reptiles on tertiary roads. Using a citizen science app, we monitored road-kills of amphibians and reptiles along 97.5 km of tertiary roads covering agricultural, municipal and interurban roads as well as cycling paths in eastern Austria over two seasons. Surrounding landscape was assessed using open access land cover classes for the region (Coordination of Information on the Environment, CORINE). Hotspot analysis was performed using kernel density estimation (KDE+). Relations between land cover classes and amphibian and reptile road-kills were analysed with conditional probabilities and general linear models (GLM). We also estimated the potential cost-efficiency of a large scale citizen science monitoring project. We recorded 180 amphibian and 72 reptile road-kills comprising eight species mainly occurring on agricultural roads. KDE+ analyses revealed a significant clustering of road-killed amphibians and reptiles, which is important information for authorities aiming to mitigate road-kills. Overall, hotspots of amphibian and reptile road-kills were next to the land cover classes arable land, suburban areas and vineyards. Conditional probabilities and GLMs identified road-kills especially next to preferred habitats of green toad, common toad and grass snake, the most often found road-killed species. A citizen science approach appeared to be more cost-efficient than monitoring by professional researchers only when more than 400 km of road are monitored. Our findings showed that freely available remote sensing data in combination with a citizen science approach would be a cost-efficient way to identify and monitor road-kill hotspots of amphibians and reptiles on a larger scale.

  8. A note on the problem of choosing a model of the universe. II

    NASA Astrophysics Data System (ADS)

    Skalsky, Vladimir

    1989-05-01

    The value of the mean mass density (rho) of the universe is examined. It is shown that there is a difference between the present terrestrial conditions and the initial conditions of the universe expansion and that, for the sphere of the physical boundary conditions represented by Planck's values (when the present evolution phase of the universe was probably decided), there are serious limitations for the value of rho. It is postulated on the basis of these limiting conditions that some cause may exist for which the condition corresponding to the critical mass density of the universe was realized.

  9. Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multiple Conditions

    DTIC Science & Technology

    2009-03-01

    [No abstract captured for this record; the indexed text consists of thesis front-matter fragments (a distribution disclaimer and list-of-symbols entries for the dither random variable D and its probability density function). The recoverable overview sentence states that the thesis examines the potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath.]

  10. Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multipath Conditions

    DTIC Science & Technology

    2009-03-01

    [No abstract captured for this record; the indexed text consists of thesis front-matter fragments (a distribution disclaimer and list-of-symbols entries for the dither random variable D and its probability density function). The recoverable overview sentence states that the thesis examines the potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath.]

  11. Hidden Markov models for fault detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J. (Inventor)

    1995-01-01

    The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w_i | x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
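
    A hedged sketch of the hierarchical idea in this record: instantaneous class posteriors from the pattern-recognition stage are integrated over time by an HMM-style forward recursion. The transition matrix and the division by class priors (a common hybrid-HMM convention) are assumptions for illustration, not details from the patent.

```python
import numpy as np

def temporal_smoothing(posteriors, A, priors):
    """posteriors: (T, m) instantaneous class posteriors; A: (m, m) transition matrix."""
    T, m = posteriors.shape
    alpha = np.zeros((T, m))
    alpha[0] = posteriors[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        scaled = posteriors[t] / priors          # scaled likelihoods p(x_t | w_i) ∝ p(w_i | x_t) / p(w_i)
        alpha[t] = scaled * (alpha[t - 1] @ A)   # forward recursion
        alpha[t] /= alpha[t].sum()               # normalize to get p(w_i | x_1..x_t)
    return alpha

# toy run: two classes (normal, fault) with sticky transitions
A = np.array([[0.98, 0.02], [0.05, 0.95]])
priors = np.array([0.9, 0.1])
inst = np.array([[0.8, 0.2], [0.6, 0.4], [0.3, 0.7], [0.2, 0.8]])
print(temporal_smoothing(inst, A, priors))
```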

  12. Hidden Markov models for fault detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J. (Inventor)

    1993-01-01

    The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w_i | x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.

  13. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification.

    PubMed

    Soares, João V B; Leandro, Jorge J G; Cesar Júnior, Roberto M; Jelinek, Herbert F; Cree, Michael J

    2006-09-01

    We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification, while being able to model complex decision surfaces. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on the publicly available DRIVE (Staal et al., 2004) and STARE (Hoover et al., 2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, slightly superior to that reported by state-of-the-art approaches. We are making our implementation available as open source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods.
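
    A minimal sketch of the classification rule described above: one Gaussian mixture per class models the class-conditional likelihood of a pixel's feature vector, and each pixel is assigned to the class with the larger posterior. Feature extraction (Gabor responses) is omitted and the features below are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_conditional_gmms(X, y, n_components=5, seed=0):
    """One GMM likelihood p(x|c) and one log-prior log p(c) per class."""
    gmms, log_priors = {}, {}
    for c in np.unique(y):
        Xc = X[y == c]
        gmms[c] = GaussianMixture(n_components=n_components, covariance_type="full",
                                  random_state=seed).fit(Xc)
        log_priors[c] = np.log(len(Xc) / len(X))
    return gmms, log_priors

def classify(X, gmms, log_priors):
    classes = sorted(gmms)
    # log p(x|c) + log p(c) for each class; pick the argmax over classes
    scores = np.stack([gmms[c].score_samples(X) + log_priors[c] for c in classes], axis=1)
    return np.array(classes)[scores.argmax(axis=1)]

# toy usage: 2-D features for "background" (0) vs "vessel" (1) pixels
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
y = np.repeat([0, 1], 500)
gmms, log_priors = fit_class_conditional_gmms(X, y, n_components=3)
print((classify(X, gmms, log_priors) == y).mean())
```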

  14. Hierarchical heuristic search using a Gaussian mixture model for UAV coverage planning.

    PubMed

    Lin, Lanny; Goodrich, Michael A

    2014-12-01

    During unmanned aerial vehicle (UAV) search missions, efficient use of UAV flight time requires flight paths that maximize the probability of finding the desired subject. The probability of detecting the desired subject based on UAV sensor information can vary in different search areas due to environmental elements such as varying vegetation density or lighting conditions, making it likely that the UAV can only partially detect the subject. This adds another dimension of complexity to the already difficult (NP-Hard) problem of finding an optimal search path. We present a new class of algorithms that account for partial detection in the form of a task difficulty map and produce paths that approximate the payoff of optimal solutions. The algorithms use the mode goodness ratio heuristic, which uses a Gaussian mixture model to prioritize search subregions. The algorithms search for effective paths through the parameter space at different levels of resolution. We compare the performance of the new algorithms against two published algorithms (Bourgault's algorithm and the LHC-GW-CONV algorithm) in simulated searches with three real search and rescue scenarios, and show that the new algorithms significantly outperform the existing ones, producing efficient paths with payoffs near the optimal.

  15. Timescales of isotropic and anisotropic cluster collapse

    NASA Astrophysics Data System (ADS)

    Bartelmann, M.; Ehlers, J.; Schneider, P.

    1993-12-01

    From a simple estimate for the formation time of galaxy clusters, Richstone et al. have recently concluded that the evidence for non-virialized structures in a large fraction of observed clusters points towards a high value for the cosmological density parameter Omega0. This conclusion was based on a study of the spherical collapse of density perturbations, assumed to follow a Gaussian probability distribution. In this paper, we extend their treatment in several respects: first, we argue that the collapse does not start from a comoving motion of the perturbation, but that the continuity equation requires an initial velocity perturbation directly related to the density perturbation. This requirement modifies the initial condition for the evolution equation and has the effect that the collapse proceeds faster than in the case where the initial velocity perturbation is set to zero; the timescale is reduced by a factor of up to approximately 0.5. Our results thus strengthen the conclusion of Richstone et al. for a high Omega0. In addition, we study the collapse of density fluctuations in the frame of the Zel'dovich approximation, using as starting condition the analytically known probability distribution of the eigenvalues of the deformation tensor, which depends only on the (Gaussian) width of the perturbation spectrum. Finally, we consider the anisotropic collapse of density perturbations dynamically, again with initial conditions drawn from the probability distribution of the deformation tensor. We find that in both cases of anisotropic collapse, in the Zel'dovich approximation and in the dynamical calculations, the resulting distribution of collapse times agrees remarkably well with the results from spherical collapse. We discuss this agreement and conclude that it is mainly due to the properties of the probability distribution for the eigenvalues of the Zel'dovich deformation tensor. Hence, the conclusions of Richstone et al. on the value of Omega0 can be verified and strengthened, even if a more general approach to the collapse of density perturbations is employed. A simple analytic formula for the cluster redshift distribution in an Einstein-de Sitter universe is derived.

  16. Concurrent effects of age class and food distribution on immigration success and population dynamics in a small mammal.

    PubMed

    Rémy, Alice; Le Galliard, Jean-François; Odden, Morten; Andreassen, Harry P

    2014-07-01

    During the settlement stage of dispersal, the outcome of conflicts between residents and immigrants should depend on the social organization of resident populations as well as on individual traits of immigrants, such as their age class, body mass and/or behaviour. We have previously shown that spatial distribution of food influences the social organization of female bank voles (Myodes glareolus). Here, we aimed to determine the relative impact of food distribution and immigrant age class on the success and demographic consequences of female bank vole immigration. We manipulated the spatial distribution of food within populations having either clumped or dispersed food. After a pre-experimental period, we released either adult immigrants or juvenile immigrants, for which we scored sociability and aggressiveness prior to introduction. We found that immigrant females survived less well and moved more between populations than resident females, which suggest settlement costs. However, settled juvenile immigrants had a higher probability to reproduce than field-born juveniles. Food distribution had little effects on the settlement success of immigrant females. Survival and settlement probabilities of immigrants were influenced by adult female density in opposite ways for adult and juvenile immigrants, suggesting a strong adult-adult competition. Moreover, females of higher body mass at release had a lower probability to survive, and the breeding probability of settled immigrants increased with their aggressiveness and decreased with their sociability. Prior to the introduction of immigrants, resident females were more aggregated in the clumped food treatment than in the dispersed food treatment, but immigration reversed this relationship. In addition, differences in growth trajectories were seen during the breeding season, with populations reaching higher densities when adult immigrants were introduced in a plot with dispersed food, or when juvenile immigrants were introduced in a plot with clumped food. These results indicate the relative importance of intrinsic and extrinsic factors on immigration success and demographic consequences of dispersal and are of relevance to conservation actions, such as reinforcement of small populations. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.

  17. Quantum mechanical probability current as electromagnetic 4-current from topological EM fields

    NASA Astrophysics Data System (ADS)

    van der Mark, Martin B.

    2015-09-01

    Starting from a complex 4-potential A = αdβ we show that the 4-current density in electromagnetism and the probability current density in relativistic quantum mechanics are of identical form. With the Dirac-Clifford algebra Cl1,3 as mathematical basis, the given 4-potential allows topological solutions of the fields, quite similar to Bateman's construction, but with a double field solution that was overlooked previously. A more general nullvector condition is found and wave-functions of charged and neutral particles appear as topological configurations of the electromagnetic fields.

  18. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  19. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  20. Monitoring the Restored Mangrove Condition at Perancak Estuary, Jembrana, Bali, Indonesia from 2001 to 2015

    NASA Astrophysics Data System (ADS)

    Ruslisan, R.; Kamal, M.; Sidik, F.

    2018-02-01

    Mangroves are unique vegetation living in tidal areas along tropical and subtropical coasts. They have important physical, biological, and chemical roles in balancing the ecosystem, as well as serving as a carbon pool. Therefore, monitoring the mangrove condition is a very important step prior to any management and conservation actions in this area. This study aims to map and monitor the condition of restored mangroves in Perancak Estuary, Jembrana, Bali, Indonesia from 2001 to 2015. We used IKONOS-2, WorldView-2 and WorldView-3 image data to map the extent and canopy cover density of mangroves using visual delineation and semi-empirical modelling through the Enhanced Vegetation Index (EVI) as a proxy. The results show that there was a significant increase in mangrove extent from 78.08 hectares in 2001 to 122.54 hectares in 2015. In terms of mangrove canopy density, the percentage of high and very-high canopy density classes increased from 32% in 2001 to 57% in 2015. On the other hand, there were slight changes in low and medium canopy density classes during the observation period. Overall, the figures for both areal extent and canopy density indicate the successful implementation of the mangrove restoration effort in Perancak Estuary over the last 14 years.

  1. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.

  2. A strategic assessment of crown fire hazard in Montana: potential effectiveness and costs of hazard reduction treatments.

    Treesearch

    Carl E. Fiedler; Charles E. Keegan; Christopher W. Woodall; Todd A. Morgan

    2004-01-01

    Estimates of crown fire hazard are presented for existing forest conditions in Montana by density class, structural class, forest type, and landownership. Three hazard reduction treatments were evaluated for their effectiveness in treating historically fire-adapted forests (ponderosa pine (Pinus ponderosa Dougl. ex Laws.), Douglas-fir (...

  3. Digital simulation of an arbitrary stationary stochastic process by spectral representation.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2011-04-01

    In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation, which can be used for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant to the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
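
    A compact sketch of the single-pass procedure described above, under illustrative choices of target spectrum and marginal law: white Gaussian noise is spectrally shaped in the Fourier domain and then mapped through the Gaussian CDF and the inverse CDF of the target distribution.

```python
import numpy as np
from scipy import stats

def colored_samples_with_target_pdf(n, psd, target_dist, seed=0):
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    # impose the spectral content: multiply the FFT by the square root of the target PSD
    H = np.sqrt(psd(np.fft.rfftfreq(n)))
    colored = np.fft.irfft(np.fft.rfft(white) * H, n)
    colored /= colored.std()                      # back to unit variance
    u = stats.norm.cdf(colored)                   # Gaussian CDF -> uniforms
    return target_dist.ppf(u)                     # inverse CDF of the target law

# example: low-pass spectrum with a gamma-distributed marginal
psd = lambda f: 1.0 / (1.0 + (f / 0.05) ** 2)
x = colored_samples_with_target_pdf(2 ** 14, psd, stats.gamma(a=2.0))
print(x.mean(), x.var())
```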

  4. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ -cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations is also computed from the two-cell joint PDF; it also compares very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.

  5. Large margin nearest neighbor classifiers.

    PubMed

    Domeniconi, Carlotta; Gunopulos, Dimitrios; Peng, Jing

    2005-07-01

    The nearest neighbor technique is a simple and appealing approach to addressing classification problems. It relies on the assumption of locally constant class conditional probabilities. This assumption becomes invalid in high dimensions with a finite number of examples due to the curse of dimensionality. Severe bias can be introduced under these conditions when using the nearest neighbor rule. The employment of a locally adaptive metric becomes crucial in order to keep class conditional probabilities close to uniform, thereby minimizing the bias of estimates. We propose a technique that computes a locally flexible metric by means of support vector machines (SVMs). The decision function constructed by SVMs is used to determine the most discriminant direction in a neighborhood around the query. Such a direction provides a local feature weighting scheme. We formally show that our method increases the margin in the weighted space where classification takes place. Moreover, our method has the important advantage of online computational efficiency over competing locally adaptive techniques for nearest neighbor classification. We demonstrate the efficacy of our method using both real and simulated data.
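
    A rough sketch of the locally adaptive metric idea above: a linear SVM fitted on a neighbourhood of the query supplies a discriminant direction, whose components weight the distance used for the final nearest-neighbour vote. The neighbourhood size, the weighting scheme, and the use of LinearSVC are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from sklearn.svm import LinearSVC

def locally_weighted_knn_predict(query, X, y, k_metric=50, k_vote=5):
    # 1) take a plain-Euclidean neighbourhood around the query
    d0 = np.linalg.norm(X - query, axis=1)
    nbr = np.argsort(d0)[:k_metric]
    # 2) fit a local linear SVM and turn |w| into feature weights
    svm = LinearSVC(C=1.0, max_iter=5000).fit(X[nbr], y[nbr])
    w = np.abs(svm.coef_).sum(axis=0)
    w /= w.sum()
    # 3) nearest-neighbour vote in the locally weighted metric
    d = np.sqrt(((X[nbr] - query) ** 2 * w).sum(axis=1))
    votes = y[nbr][np.argsort(d)[:k_vote]]
    return np.bincount(votes).argmax()

# toy usage on two overlapping Gaussian classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (300, 4)), rng.normal(1, 1, (300, 4))])
y = np.repeat([0, 1], 300)
print(locally_weighted_knn_predict(np.full(4, 0.5), X, y))
```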

  6. A probability space for quantum models

    NASA Astrophysics Data System (ADS)

    Lemmens, L. F.

    2017-06-01

    A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes and probabilities defined for all events. A reformulation in terms of propositions allows to use the maximum entropy method to assign the probabilities taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, choosing the constraints and making the probability assignment by the maximum entropy method. This approach shows, how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related with well-known classical distributions. The relation between the conditional probability density, given some averages as constraints and the appropriate ensemble is elucidated.

  7. Existence, uniqueness and regularity of a time-periodic probability density distribution arising in a sedimentation-diffusion problem

    NASA Technical Reports Server (NTRS)

    Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard

    1988-01-01

    The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in vertical fluid-filled cylinder of finite length which is instantaneously inverted at regular intervals are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.

  8. A statistical model for radar images of agricultural scenes

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.; Stiles, J. A.

    1982-01-01

    The presently derived and validated statistical model for radar images containing many different homogeneous fields predicts the probability density functions of radar images of entire agricultural scenes, thereby allowing histograms of large scenes composed of a variety of crops to be described. Seasat-A SAR images of agricultural scenes are accurately predicted by the model on the basis of three assumptions: each field has the same SNR, all target classes cover approximately the same area, and the true reflectivity characterizing each individual target class is a uniformly distributed random variable. The model is expected to be useful in the design of data processing algorithms and for scene analysis using radar images.

  9. Generalized Quantum Theory of Bianchi IX Cosmologies

    NASA Astrophysics Data System (ADS)

    Craig, David; Hartle, James

    2003-04-01

    We apply sum-over-histories generalized quantum theory to the closed homogeneous minisuperspace Bianchi IX cosmological model. We sketch how the probabilities in decoherent sets of alternative, coarse-grained histories of this model universe are calculated. We consider in particular, the probabilities for classical evolution in a suitable coarse-graining. For a restricted class of initial conditions and coarse grainings we exhibit the approximate decoherence of alternative histories in which the universe behaves classically and those in which it does not, illustrating the prediction that these universes will evolve in an approximately classical manner with a probability near unity.

  10. Strong consistency of nonparametric Bayes density estimation on compact metric spaces with applications to specific manifolds

    PubMed Central

    Bhattacharya, Abhishek; Dunson, David B.

    2012-01-01

    This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295

  11. A Perron-Frobenius type of theorem for quantum operations

    NASA Astrophysics Data System (ADS)

    Lagro, Matthew

    Quantum random walks are a generalization of classical Markovian random walks to a quantum mechanical or quantum computing setting. Quantum walks have promising applications but are complicated by quantum decoherence. We prove that the long-time limiting behavior of the class of quantum operations which are the convex combination of norm one operators is governed by the eigenvectors with norm one eigenvalues which are shared by the operators. This class includes all operations formed by a coherent operation with positive probability of orthogonal measurement at each step. We also prove that any operation that has range contained in a low enough dimension subspace of the space of density operators has limiting behavior isomorphic to an associated Markov chain. A particular class of such operations are coherent operations followed by an orthogonal measurement. Applications of the convergence theorems to quantum walks are given.

  12. HMM for hyperspectral spectrum representation and classification with endmember entropy vectors

    NASA Astrophysics Data System (ADS)

    Arabi, Samir Y. W.; Fernandes, David; Pizarro, Marco A.

    2015-10-01

    Hyperspectral images, due to their good spectral resolution, are extensively used for classification, but their high number of bands requires a higher bandwidth for data transmission, a higher data storage capability, and a higher computational capability in processing systems. This work presents a new methodology for hyperspectral data classification that can work with a reduced number of spectral bands and achieve good results, comparable with processing methods that require all hyperspectral bands. The proposed method for hyperspectral spectrum classification is based on a Hidden Markov Model (HMM) associated with each endmember (EM) of a scene and the conditional probabilities that each EM belongs to each other EM. The EM conditional probabilities are transformed into EM entropy vectors, and those vectors are used as reference vectors for the classes in the scene. The conditional probabilities of a spectrum to be classified are likewise transformed into a spectrum entropy vector, which is assigned to a class by the minimum Euclidean distance (ED) between it and the EM entropy vectors. The methodology was tested with good results using AVIRIS spectra of a scene with 13 EMs, considering the full 209 bands and reduced sets of 128, 64, and 32 spectral bands. For the test area, the results show that only 32 spectral bands can be used instead of the original 209, without significant loss in the classification process.
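
    A small sketch of the entropy-vector classification step: per-EM probabilities (assumed to come from the per-EM HMMs, which are not reproduced here) are normalized, mapped to per-component entropy contributions, and matched to reference EM entropy vectors by minimum Euclidean distance. The specific mapping is an illustrative reading of the abstract.

```python
import numpy as np

def entropy_vector(likelihoods):
    p = np.asarray(likelihoods, dtype=float)
    p = p / p.sum()                          # conditional probabilities over the EMs
    return -p * np.log(p + 1e-12)            # per-component entropy contributions

def classify_spectrum(likelihoods, reference_entropy_vectors):
    v = entropy_vector(likelihoods)
    d = np.linalg.norm(reference_entropy_vectors - v, axis=1)
    return int(np.argmin(d))                 # index of the nearest EM class

# toy usage with 3 endmember classes
refs = np.vstack([entropy_vector(v) for v in ([0.8, 0.1, 0.1],
                                              [0.1, 0.8, 0.1],
                                              [0.1, 0.1, 0.8])])
print(classify_spectrum([0.6, 0.3, 0.1], refs))
```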

  13. A method of real-time fault diagnosis for power transformers based on vibration analysis

    NASA Astrophysics Data System (ADS)

    Hong, Kaixing; Huang, Hai; Zhou, Jianping; Shen, Yimin; Li, Yujie

    2015-11-01

    In this paper, a novel probability-based classification model is proposed for real-time fault detection of power transformers. First, the transformer vibration principle is introduced, and two effective feature extraction techniques are presented. Next, the details of the classification model based on support vector machine (SVM) are shown. The model also includes a binary decision tree (BDT) which divides transformers into different classes according to health state. The trained model produces posterior probabilities of membership to each predefined class for a tested vibration sample. During the experiments, the vibrations of transformers under different conditions are acquired, and the corresponding feature vectors are used to train the SVM classifiers. The effectiveness of this model is illustrated experimentally on typical in-service transformers. The consistency between the results of the proposed model and the actual condition of the test transformers indicates that the model can be used as a reliable method for transformer fault detection.
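
    A hedged sketch of the probabilistic stage described above: SVMs with probability outputs are arranged in a small binary tree that first separates healthy from faulty samples and then resolves the fault type, with class posteriors assembled from the two stages. The two-level tree and the synthetic vibration features are illustrative, not the paper's exact configuration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# toy vibration features: class 0 = healthy, 1 = winding fault, 2 = core fault
X = np.vstack([rng.normal(m, 0.7, (150, 5)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 150)

root = SVC(probability=True, random_state=0).fit(X, (y > 0).astype(int))   # healthy vs faulty
leaf = SVC(probability=True, random_state=0).fit(X[y > 0], y[y > 0])       # which fault type

def posterior(x):
    """Class posteriors assembled from the root and leaf SVMs."""
    x = x.reshape(1, -1)
    p_faulty = root.predict_proba(x)[0, 1]
    p_type = leaf.predict_proba(x)[0]              # over fault classes 1 and 2
    return np.array([1 - p_faulty, p_faulty * p_type[0], p_faulty * p_type[1]])

print(posterior(X[0]), posterior(X[200]))
```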

  14. Local short-term variability in solar irradiance

    NASA Astrophysics Data System (ADS)

    Lohmann, Gerald M.; Monahan, Adam H.; Heinemann, Detlev

    2016-05-01

    Characterizing spatiotemporal irradiance variability is important for the successful grid integration of increasing numbers of photovoltaic (PV) power systems. Using 1 Hz data recorded by as many as 99 pyranometers during the HD(CP)2 Observational Prototype Experiment (HOPE), we analyze field variability of clear-sky index k* (i.e., irradiance normalized to clear-sky conditions) and sub-minute k* increments (i.e., changes over specified intervals of time) for distances between tens of meters and about 10 km. By means of a simple classification scheme based on k* statistics, we identify overcast, clear, and mixed sky conditions, and demonstrate that the last of these is the most potentially problematic in terms of short-term PV power fluctuations. Under mixed conditions, the probability of relatively strong k* increments of ±0.5 is approximately twice as high compared to increment statistics computed without conditioning by sky type. Additionally, spatial autocorrelation structures of k* increment fields differ considerably between sky types. While the profiles for overcast and clear skies mostly resemble the predictions of a simple model published by , this is not the case for mixed conditions. As a proxy for the smoothing effects of distributed PV, we finally show that spatial averaging mitigates variability in k* less effectively than variability in k* increments, for a spatial sensor density of 2 km-2.
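
    An illustrative sketch of the conditional increment statistics discussed above: clear-sky index increments at a fixed lag are computed, and the probability of strong increments is compared with and without conditioning on 'mixed' skies. The simple variability-threshold sky classifier is an assumption for illustration, not the study's scheme.

```python
import numpy as np

def increment_exceedance(kstar, lag, threshold=0.5, window=600, mixed_std=0.1):
    """P(|Δk*| >= threshold) overall and conditioned on 'mixed' windows."""
    dk = kstar[lag:] - kstar[:-lag]                    # clear-sky index increments
    n_win = len(kstar) // window
    win_labels = ["mixed" if kstar[i * window:(i + 1) * window].std() > mixed_std
                  else "other" for i in range(n_win)]
    labels = np.repeat(win_labels, window)[:len(dk)]   # per-sample sky label
    p_all = np.mean(np.abs(dk) >= threshold)
    p_mixed = np.mean(np.abs(dk[labels == "mixed"]) >= threshold)
    return p_all, p_mixed

# toy 1 Hz record: a calm hour followed by an hour of broken clouds
rng = np.random.default_rng(0)
kstar = np.concatenate([1.0 + 0.01 * rng.standard_normal(3600),
                        rng.choice([0.3, 1.1], size=3600) + 0.05 * rng.standard_normal(3600)])
print(increment_exceedance(kstar, lag=10))
```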

  15. Semiclassical electron transport at the edge of a two-dimensional topological insulator: Interplay of protected and unprotected modes

    NASA Astrophysics Data System (ADS)

    Khalaf, E.; Skvortsov, M. A.; Ostrovsky, P. M.

    2016-03-01

    We study electron transport at the edge of a generic disordered two-dimensional topological insulator, where some channels are topologically protected from backscattering. Assuming the total number of channels is large, we consider the edge as a quasi-one-dimensional quantum wire and describe it in terms of a nonlinear sigma model with a topological term. Neglecting localization effects, we calculate the average distribution function of transmission probabilities as a function of the sample length. We mainly focus on the two experimentally relevant cases: a junction between two quantum Hall (QH) states with different filling factors (unitary class) and a relatively thick quantum well exhibiting quantum spin Hall (QSH) effect (symplectic class). In a QH sample, the presence of topologically protected modes leads to a strong suppression of diffusion in the other channels already at scales much shorter than the localization length. On the semiclassical level, this is accompanied by the formation of a gap in the spectrum of transmission probabilities close to unit transmission, thereby suppressing shot noise and conductance fluctuations. In the case of a QSH system, there is at most one topologically protected edge channel leading to weaker transport effects. In order to describe `topological' suppression of nearly perfect transparencies, we develop an exact mapping of the semiclassical limit of the one-dimensional sigma model onto a zero-dimensional sigma model of a different symmetry class, allowing us to identify the distribution of transmission probabilities with the average spectral density of a certain random-matrix ensemble. We extend our results to other symmetry classes with topologically protected edges in two dimensions.

  16. Large Fluctuations for Spatial Diffusion of Cold Atoms

    NASA Astrophysics Data System (ADS)

    Aghion, Erez; Kessler, David A.; Barkai, Eli

    2017-06-01

    We use a new approach to study the large fluctuations of a heavy-tailed system, where the standard large-deviations principle does not apply. Large-deviations theory deals with tails of probability distributions and the rare events of random processes, for example, spreading packets of particles. Mathematically, it concerns the exponential falloff of the density of thin-tailed systems. Here we investigate the spatial density P_t(x) of laser-cooled atoms, where at intermediate length scales the shape is fat tailed. We focus on the rare events beyond this range, which dominate important statistical properties of the system. Through a novel friction mechanism induced by the laser fields, the density is explored with the recently proposed non-normalized infinite-covariant density approach. The small and large fluctuations give rise to a bifractal nature of the spreading packet. We derive general relations which extend our theory to a class of systems with multifractal moments.

  17. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification, one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then, in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
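
    A small sketch of the maximum entropy method of moments on a bounded grid, as reviewed above: the density p(x) ∝ exp(-Σ_k λ_k x^k) is fitted so that its first K moments match the sample moments, by minimizing the convex dual objective. The grid, K, and the optimizer settings are illustrative choices, not from the chapter.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_density(samples, K=4, grid=np.linspace(-5.0, 5.0, 2001)):
    """Fit p(x) ∝ exp(-sum_k lam_k x^k) so its first K moments match the sample moments."""
    target = np.array([np.mean(samples ** k) for k in range(1, K + 1)])
    powers = np.vstack([grid ** k for k in range(1, K + 1)])       # shape (K, G)
    dx = grid[1] - grid[0]

    def dual(lam):                       # convex dual: log Z(lam) + lam . target
        logq = -lam @ powers
        m = logq.max()
        Z = np.sum(np.exp(logq - m)) * dx
        return np.log(Z) + m + lam @ target

    res = minimize(dual, np.zeros(K), method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-9, "fatol": 1e-12})
    logq = -res.x @ powers
    p = np.exp(logq - logq.max())
    return grid, p / (p.sum() * dx)

# toy usage: recover an approximately Gaussian density from four sample moments
rng = np.random.default_rng(0)
xg, p = maxent_density(rng.normal(size=5000), K=4)
print(np.trapz(p, xg), np.trapz(xg * p, xg))   # ≈ 1 and ≈ the sample mean
```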

  18. Multicategory Composite Least Squares Classifiers

    PubMed Central

    Park, Seo Young; Liu, Yufeng; Liu, Dacheng; Scholl, Paul

    2010-01-01

    Classification is a very useful statistical tool for information extraction. In particular, multicategory classification is commonly seen in various applications. Although binary classification problems are heavily studied, extensions to the multicategory case are much less so. In view of the increased complexity and volume of modern statistical problems, it is desirable to have multicategory classifiers that are able to handle problems with high dimensions and with a large number of classes. Moreover, it is necessary to have sound theoretical properties for the multicategory classifiers. In the literature, there exist several different versions of simultaneous multicategory Support Vector Machines (SVMs). However, the computation of the SVM can be difficult for large scale problems, especially for problems with large number of classes. Furthermore, the SVM cannot produce class probability estimation directly. In this article, we propose a novel efficient multicategory composite least squares classifier (CLS classifier), which utilizes a new composite squared loss function. The proposed CLS classifier has several important merits: efficient computation for problems with large number of classes, asymptotic consistency, ability to handle high dimensional data, and simple conditional class probability estimation. Our simulated and real examples demonstrate competitive performance of the proposed approach. PMID:21218128

  19. Thermodynamics of the general diffusion process: Equilibrium supercurrent and nonequilibrium driven circulation with dissipation

    NASA Astrophysics Data System (ADS)

    Qian, H.

    2015-07-01

    Unbalanced probability circulation, which yields cyclic motions in phase space, is the defining characteristic of a stationary diffusion process without detailed balance. In over-damped soft matter systems, such behavior is a hallmark of the presence of a sustained external driving force accompanied by dissipation. In an under-damped and strongly correlated system, however, cyclic motions are often the consequences of a conservative dynamics. In the present paper, we give a novel interpretation of a class of diffusion processes with stationary circulation in terms of a Maxwell-Boltzmann equilibrium in which cyclic motions are on the level sets of the stationary probability density function and thus non-dissipative, e.g., a supercurrent. This implies an orthogonality between the stationary circulation J^ss(x) and the gradient of the stationary probability density f^ss(x) > 0. A sufficient and necessary condition for the orthogonality is a decomposition of the drift b(x) = j(x) + D(x)∇φ(x), where ∇·j(x) = 0 and j(x)·∇φ(x) = 0. Stationary processes with such a Maxwell-Boltzmann equilibrium have an underlying conservative dynamics and a first integral φ(x) ≡ -ln f^ss(x) = const, akin to a Hamiltonian system. At all times, an instantaneous free energy balance equation exists for a given diffusion system; and an extended energy conservation law among an entire family of diffusion processes with different parameter α can be established via a Helmholtz theorem. For the general diffusion process without the orthogonality, a nonequilibrium cycle emerges, which consists of externally driven φ-ascending steps and spontaneous φ-descending movements, alternated with iso-φ motions. The theory presented here provides a rich mathematical narrative for complex mesoscopic dynamics, in contradistinction to an earlier one [H. Qian et al., J. Stat. Phys. 107, 1129 (2002)]. This article is supplemented with comments by H. Ouerdane and a final reply by the author.
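
    The decomposition condition can be checked numerically. In the sketch below, a 2-D potential φ and a scalar diffusion coefficient are assumed for illustration, and the circulation j is built by rotating ∇φ by 90 degrees, which makes it divergence-free and tangent to the level sets of φ by construction.

```python
# Toy 2-D drift with the decomposition b = j + D * grad(phi), where j is
# divergence-free and orthogonal to grad(phi) (phi and D are illustrative).
import numpy as np

D = 0.5                                         # assumed scalar diffusion

def grad_phi(x, y):
    # phi(x, y) = 0.5 * (x**2 + 2*y**2)
    return np.array([x, 2.0 * y])

def j(x, y):
    # rotate grad(phi) by 90 degrees: tangent to level sets, divergence-free
    gx, gy = grad_phi(x, y)
    return np.array([-gy, gx])

def drift(x, y):
    return j(x, y) + D * grad_phi(x, y)

# numerical checks of the two conditions at a sample point
x0, y0, h = 0.7, -0.3, 1e-5
div_j = ((j(x0 + h, y0)[0] - j(x0 - h, y0)[0]) +
         (j(x0, y0 + h)[1] - j(x0, y0 - h)[1])) / (2 * h)
print("div j          ~", round(div_j, 8))                       # ~0
print("j . grad(phi)  =", np.dot(j(x0, y0), grad_phi(x0, y0)))   # exactly 0
```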

  20. Probability density function formalism for optical coherence tomography signal analysis: a controlled phantom study.

    PubMed

    Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-06-15

    The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications.

  1. Single-molecule stochastic times in a reversible bimolecular reaction

    NASA Astrophysics Data System (ADS)

    Keller, Peter; Valleriani, Angelo

    2012-08-01

    In this work, we consider the reversible reaction between reactants of species A and B to form the product C. We consider this reaction as a prototype of many pseudobiomolecular reactions in biology, such as for instance molecular motors. We derive the exact probability density for the stochastic waiting time that a molecule of species A needs until the reaction with a molecule of species B takes place. We perform this computation taking fully into account the stochastic fluctuations in the number of molecules of species B. We show that at low numbers of participating molecules, the exact probability density differs from the exponential density derived by assuming the law of mass action. Finally, we discuss the condition of detailed balance in the exact stochastic and in the approximate treatment.
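
    A minimal Gillespie-style sketch of this setup is given below: a single A molecule waits to react while the number of B molecules fluctuates through assumed arrival and departure events; the rate constants are illustrative, and the exact waiting-time density derived in the paper is not reproduced.

```python
# Waiting time of one A molecule until reaction, with a fluctuating B count
# (illustrative rates; compare the squared CV against 1 for an exponential).
import numpy as np

rng = np.random.default_rng(1)

def waiting_time(k_react=0.05, k_in=1.0, k_out=0.2, n_b=5):
    t = 0.0
    while True:
        rates = np.array([k_react * n_b,   # A + B -> C (the timed event)
                          k_in,            # a B molecule arrives
                          k_out * n_b])    # a B molecule leaves
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(3, p=rates / total)
        if event == 0:
            return t
        n_b += 1 if event == 1 else -1

samples = np.array([waiting_time() for _ in range(5000)])
print("mean waiting time:", round(samples.mean(), 2))
print("squared CV (1 for exponential):", round(samples.var() / samples.mean()**2, 3))
```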

  2. Dimension-independent likelihood-informed MCMC

    DOE PAGES

    Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
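
    The simplest member of this family of discretization-robust, function-space samplers is the preconditioned Crank-Nicolson (pCN) proposal, whose acceptance rule involves only the likelihood. The sketch below assumes a standard Gaussian prior on the discretized coefficients and a toy linear forward map; the likelihood-informed operator weighting of DILI is not reproduced.

```python
# Preconditioned Crank-Nicolson MCMC on a toy Bayesian inverse problem
# (Gaussian prior, linear forward map); a simpler relative of DILI samplers.
import numpy as np

rng = np.random.default_rng(2)
n = 50                                     # discretization size of the unknown
A = rng.normal(size=(5, n)) / np.sqrt(n)   # toy linear forward map
u_true = rng.normal(size=n)
noise_sd = 0.1
data = A @ u_true + noise_sd * rng.normal(size=5)

def log_likelihood(u):
    r = data - A @ u
    return -0.5 * np.sum(r**2) / noise_sd**2

beta, n_steps = 0.2, 5000
u = np.zeros(n)                            # start at the prior mean
ll, accepted = log_likelihood(u), 0
for _ in range(n_steps):
    # prior-preserving proposal; acceptance uses the likelihood ratio only
    v = np.sqrt(1.0 - beta**2) * u + beta * rng.normal(size=n)
    ll_v = log_likelihood(v)
    if np.log(rng.random()) < ll_v - ll:
        u, ll, accepted = v, ll_v, accepted + 1
print("acceptance rate:", accepted / n_steps)
```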

  3. Detection of image structures using the Fisher information and the Rao metric.

    PubMed

    Maybank, Stephen J

    2004-12-01

    In many detection problems, the structures to be detected are parameterized by the points of a parameter space. If the conditional probability density function for the measurements is known, then detection can be achieved by sampling the parameter space at a finite number of points and checking each point to see if the corresponding structure is supported by the data. The number of samples and the distances between neighboring samples are calculated using the Rao metric on the parameter space. The Rao metric is obtained from the Fisher information which is, in turn, obtained from the conditional probability density function. An upper bound is obtained for the probability of a false detection. The calculations are simplified in the low noise case by making an asymptotic approximation to the Fisher information. An application to line detection is described. Expressions are obtained for the asymptotic approximation to the Fisher information, the volume of the parameter space, and the number of samples. The time complexity for line detection is estimated. An experimental comparison is made with a Hough transform-based method for detecting lines.
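
    The role of the Fisher information and the Rao metric in fixing the number of parameter samples can be illustrated with a one-parameter toy model (not the line-detection parameterization of the paper): for a measurement x ~ N(sin θ, σ²), the Fisher information is I(θ) = cos²θ/σ², and the number of detection hypotheses follows from the Rao length of the parameter interval divided by the desired spacing.

```python
# Spacing parameter samples by the Rao metric for a toy one-parameter model
# x ~ N(sin(theta), sigma^2); sigma and the interval are illustrative.
import numpy as np

sigma = 0.1                                    # assumed measurement noise

def fisher_information(theta):
    # for x ~ N(m(theta), sigma^2): I(theta) = m'(theta)^2 / sigma^2
    return (np.cos(theta) / sigma) ** 2

def rao_length(theta_a, theta_b, n_steps=2001):
    # Rao distance in 1-D: integral of sqrt(I(theta)) over the segment
    grid = np.linspace(theta_a, theta_b, n_steps)
    vals = np.sqrt(fisher_information(grid))
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))

total = rao_length(0.0, np.pi / 2)             # = (sin(pi/2) - sin 0) / sigma
spacing = 1.0                                  # desired Rao distance between samples
print("Rao length of the parameter interval:", round(total, 2))
print("number of samples at unit spacing:", int(np.ceil(total / spacing)))
```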

  4. Simulation of precipitation by weather pattern and frontal analysis

    NASA Astrophysics Data System (ADS)

    Wilby, Robert

    1995-12-01

    Daily rainfall from two sites in central and southern England was stratified according to the presence or absence of weather fronts and then cross-tabulated with the prevailing Lamb Weather Type (LWT). A semi-Markov chain model was developed for simulating daily sequences of LWTs from matrices of transition probabilities between weather types for the British Isles 1970-1990. Daily and annual rainfall distributions were then simulated from the prevailing LWTs using historic conditional probabilities for precipitation occurrence and frontal frequencies. When compared with a conventional rainfall generator the frontal model produced improved estimates of the overall size distribution of daily rainfall amounts and in particular the incidence of low-frequency high-magnitude totals. Further research is required to establish the contribution of individual frontal sub-classes to daily rainfall totals and of long-term fluctuations in frontal frequencies to conditional probabilities.
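
    The structure of such a weather generator can be sketched as a first-order Markov chain over weather types with rainfall drawn conditionally on the prevailing type; the transition matrix, wet-day probabilities, and gamma rainfall parameters below are illustrative placeholders, not the fitted LWT or frontal values from the study.

```python
# Daily weather-type sequence from a transition matrix, with rainfall
# occurrence and amount drawn conditionally on the type (illustrative values).
import numpy as np

rng = np.random.default_rng(3)
P = np.array([[0.7, 0.2, 0.1],               # from type 0 (e.g. anticyclonic)
              [0.3, 0.5, 0.2],               # from type 1 (e.g. cyclonic)
              [0.2, 0.3, 0.5]])              # from type 2 (e.g. westerly)
p_wet = np.array([0.10, 0.70, 0.50])         # P(rain | weather type)
gamma_shape, gamma_scale = 0.8, 6.0          # rain-amount model (mm)

state, rainfall = 0, []
for day in range(365):
    state = rng.choice(3, p=P[state])
    wet = rng.random() < p_wet[state]
    rainfall.append(rng.gamma(gamma_shape, gamma_scale) if wet else 0.0)

rainfall = np.array(rainfall)
print("wet days:", int((rainfall > 0).sum()),
      " annual total (mm):", round(rainfall.sum(), 1))
```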

  5. Egg production of turbot, Scophthalmus maximus, in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Nissling, Anders; Florin, Ann-Britt; Thorsen, Anders; Bergström, Ulf

    2013-11-01

    In the brackish water Baltic Sea turbot spawn at ~ 6-9 psu along the coast and on offshore banks in ICES SD 24-29, with salinity influencing the reproductive success. The potential fecundity (the stock of vitellogenic oocytes in the pre-spawning ovary), egg size (diameter and dry weight of artificially fertilized 1-day-old eggs) and gonad dry weight were assessed for fish sampled in SD 25 and SD 28. Multiple regression analysis identified somatic weight, or total length in combination with Fulton's condition factor, as main predictors of fecundity and gonad dry weight with stage of maturity (oocyte packing density or leading cohort) as an additional predictor. For egg size, somatic weight was identified as main predictor while otolith weight (proxy for age) was an additional predictor. Univariate analysis using GLM revealed significantly higher fecundity and gonad dry weight for turbot from SD 28 (3378-3474 oocytes/g somatic weight) compared to those from SD 25 (2343 oocytes/g somatic weight), with no difference in egg size (1.05 ± 0.03 mm diameter and 46.8 ± 6.5 μg dry weight; mean ± sd). The difference in egg production matched egg survival probabilities in relation to salinity conditions suggesting selection for higher fecundity as a consequence of poorer reproductive success at lower salinities. This supports the hypothesis of higher size-specific fecundity towards the limit of the distribution of a species as an adaptation to harsher environmental conditions and lower offspring survival probabilities. Within SD 28 comparisons were made between two major fishing areas targeting spawning aggregations and a marine protected area without fishing. The outcome was inconclusive and is discussed with respect to potential fishery induced effects, effects of the salinity gradient, effects of specific year-classes, and effects of maturation status of sampled fish.

  6. Pressure-strain-rate events in homogeneous turbulent shear flow

    NASA Technical Reports Server (NTRS)

    Brasseur, James G.; Lee, Moon J.

    1988-01-01

    A detailed study of the intercomponent energy transfer processes by the pressure-strain-rate in homogeneous turbulent shear flow is presented. Probability density functions (pdf's) and contour plots of the rapid and slow pressure-strain-rate show that the energy transfer processes are extremely peaky, with high-magnitude events dominating low-magnitude fluctuations, as reflected by very high flatness factors of the pressure-strain-rate. A concept of the energy transfer class was applied to investigate details of the direction as well as magnitude of the energy transfer processes. In incompressible flow, six disjoint energy transfer classes exist. Examination of contours in instantaneous fields, pdf's and weighted pdf's of the pressure-strain-rate indicates that in the low magnitude regions all six classes play an important role, but in the high magnitude regions four classes of transfer processes dominate. The contribution to the average slow pressure-strain-rate from the high magnitude fluctuations is only 50 percent or less. The relative significance of high and low magnitude transfer events is discussed.

  7. Bisphosphonate Treatment for Children With Disabling Conditions

    PubMed Central

    Boyce, Alison M.; Tosi, Laura L.; Paul, Scott M.

    2014-01-01

    Fractures are a frequent source of morbidity in children with disabling conditions. The assessment of bone density in this population is challenging, because densitometry is influenced by dynamic forces affecting the growing skeleton and may be further confounded by positioning difficulties and surgical hardware. First-line treatment for pediatric osteoporosis involves conservative measures, including optimizing the management of underlying conditions, maintaining appropriate calcium and vitamin D intake, encouraging weight-bearing physical activity, and monitoring measurements of bone mineral density. Bisphosphonates are a class of medications that increase bone mineral density by inhibiting bone resorption. Although bisphosphonates are commonly prescribed for treatment of adult osteoporosis, their use in pediatric patients is controversial because of the lack of long-term safety and efficacy data. PMID:24368091

  8. Using a remote sensing/GIS model to predict southwestern Willow Flycatcher breeding habitat along the Rio Grande, New Mexico

    USGS Publications Warehouse

    Hatten, James R.; Sogge, Mark K.

    2007-01-01

    Introduction The Southwestern Willow Flycatcher (Empidonax traillii extimus; hereafter SWFL) is a federally endangered bird (USFWS 1995) that breeds in riparian areas in portions of New Mexico, Arizona, southwestern Colorado, extreme southern Utah and Nevada, and southern California (USFWS 2002). Across this range, it uses a variety of plant species as nesting/breeding habitat, but in all cases prefers sites with dense vegetation, high canopy, and proximity to surface water or saturated soils (Sogge and Marshall 2000). As of 2005, the known rangewide breeding population of SWFLs was roughly 1,214 territories, with approximately 393 territories distributed among 36 sites in New Mexico (Durst et al. 2006), primarily along the Rio Grande. One of the key challenges facing the management and conservation of the Southwestern Willow Flycatcher is that riparian areas are dynamic, with individual habitat patches subject to cycles of creation, growth, and loss due to drought, flooding, fire, and other disturbances. Former breeding patches can lose suitability, and new habitat can develop within a matter of only a few years, especially in reservoir drawdown zones. Therefore, measuring and predicting flycatcher habitat - either to discover areas that might support SWFLs, or to identify areas that may develop into appropriate habitat - requires knowledge of recent/current habitat conditions and an understanding of the factors that determine flycatcher use of riparian breeding sites. In the past, much of the determination of whether a riparian site is likely to support breeding flycatchers has been based on qualitative criteria (for example, 'dense vegetation' or 'large patches'). These determinations often require on-the-ground field evaluations by local or regional SWFL experts. While this has proven valuable in locating many of the currently known breeding sites, it is difficult or impossible to apply this approach effectively over large geographic areas (for example, the middle Rio Grande). The SWFL Recovery Plan (USFWS 2002) recognizes the importance of developing new approaches to habitat identification, and recommends the development of drainage-scale, quantitative habitat models. In particular, the plan suggests using models based on remote sensing and Geographic Information System (GIS) technology that can capture the relatively dynamic habitat changes that occur in southwestern riparian systems. In 1999, Arizona Game and Fish Department (AGFD) developed a GIS-based model (Hatten and Paradzick 2003) to identify SWFL breeding habitat from Landsat Thematic Mapper imagery and 30-m resolution digital elevation models (DEMs). The model was developed with presence/absence survey data acquired along the San Pedro and Gila rivers, and from the Salt River and Tonto Creek inlets to Roosevelt Lake in southern Arizona (collectively called the project area). The GIS-based model used a logistic regression equation to divide riparian vegetation into 5 probability classes based upon characteristics of riparian vegetation and floodplain size. This model was tested by predicting SWFL breeding habitat at Alamo Lake, Arizona, located 200 km from the project area (Hatten and Paradzick 2003). The GIS-based model performed as expected by identifying riparian areas with the highest SWFL nest densities, located in the higher probability classes. 
In 2002, AGFD applied the GIS-based model throughout Arizona, for riparian areas below 1,524 m (5,000 ft) elevation and within 1.6 km of perennial or intermittent waters (Dockens et al. 2004). Overall model accuracy (using probability classes 1-5, with class 5 having the greatest probability of nesting activity) for predicting the location of 2001 nest sites was 96.5 percent; accuracy decreased when fewer probability classes were defined as suitable. Map accuracy, determined from errors of commission, increased in higher probability classes in a fashion similar to errors of omission. Map accuracy, li

  9. Public Education: Special Problems in Collective Negotiations--An Overview.

    ERIC Educational Resources Information Center

    Oberer, Walter E.

    Because of the advent of collective negotiations, public education will never again be completely in control of local school boards. Collective negotiations will probably improve the quality of education to the extent that quality (higher salaries, smaller classes, better working conditions, etc.) coincides with the self-interest of teachers. The…

  10. Economic Observations on the Decision to Attend Law School

    ERIC Educational Resources Information Center

    Ahart, Alan M.

    1975-01-01

    On the premise that the expected benefits of a legal education can be measured in dollar terms, the author develops a formula for determining whether or not to matriculate based on expected earnings, educational costs, and probability of employment (graduation, class rank, passing bar exam, and supply/demand conditions). (JT)

  11. Approximate Uncertainty Modeling in Risk Analysis with Vine Copulas

    PubMed Central

    Bedford, Tim; Daneshkhah, Alireza

    2015-01-01

    Many applications of risk analysis require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modeling joint uncertainties with probability distributions. This article focuses on new methodologies for copulas by developing work of Cooke, Bedford, Kurowica, and others on vines as a way of constructing higher dimensional distributions that do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. The article provides a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. It further operationalizes this result by showing how minimum information copulas can be used to provide parametric classes of copulas that have such good levels of approximation. We extend previous approaches using vines by considering nonconstant conditional dependencies, which are particularly relevant in financial risk modeling. We discuss how such models may be quantified, in terms of expert judgment or by fitting data, and illustrate the approach by modeling two financial data sets. PMID:26332240
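
    The basic pair-copula building block that vines stack into trees can be sketched as follows: two dependent series with non-Gaussian marginals are mapped to pseudo-observations, a Gaussian pair-copula parameter is fitted from their normal scores, and new dependent samples are pushed back through the empirical marginals. The data here are synthetic, and a full vine with nonconstant conditional dependencies, as discussed in the article, is not constructed.

```python
# A single Gaussian pair-copula fitted to two synthetic "return" series and
# resampled: the elementary building block that vine constructions stack.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
z = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=2000)
x, y = np.exp(z[:, 0]), z[:, 1] ** 3          # non-Gaussian marginals

# probability integral transform via empirical CDFs (pseudo-observations)
u = stats.rankdata(x) / (len(x) + 1)
v = stats.rankdata(y) / (len(y) + 1)

# fit the Gaussian pair-copula parameter from the normal scores
rho = np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]
print("fitted copula correlation:", round(rho, 3))

# sample new dependent uniforms and invert the empirical marginals
z_new = rng.multivariate_normal([0, 0], [[1.0, rho], [rho, 1.0]], size=2000)
u_new, v_new = stats.norm.cdf(z_new[:, 0]), stats.norm.cdf(z_new[:, 1])
x_new, y_new = np.quantile(x, u_new), np.quantile(y, v_new)
print("resampled rank correlation:", round(stats.spearmanr(x_new, y_new)[0], 3))
```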

  12. Obsessive–compulsive disorder: subclassification based on co-morbidity

    PubMed Central

    Nestadt, G.; Di, C. Z.; Riddle, M. A.; Grados, M. A.; Greenberg, B. D.; Fyer, A. J.; McCracken, J. T.; Rauch, S. L.; Murphy, D. L.; Rasmussen, S. A.; Cullen, B.; Pinto, A.; Knowles, J. A.; Piacentini, J.; Pauls, D. L.; Bienvenu, O. J.; Wang, Y.; Liang, K. Y.; Samuels, J. F.; Roche, K. Bandeen

    2011-01-01

    Background Obsessive–compulsive disorder (OCD) is probably an etiologically heterogeneous condition. Many patients manifest other psychiatric syndromes. This study investigated the relationship between OCD and co-morbid conditions to identify subtypes. Method Seven hundred and six individuals with OCD were assessed in the OCD Collaborative Genetics Study (OCGS). Multi-level latent class analysis was conducted based on the presence of eight co-morbid psychiatric conditions [generalized anxiety disorder (GAD), major depression, panic disorder (PD), separation anxiety disorder (SAD), tics, mania, somatization disorders (Som) and grooming disorders (GrD)]. The relationship of the derived classes to specific clinical characteristics was investigated. Results Two and three classes of OCD syndromes emerge from the analyses. The two-class solution describes lesser and greater co-morbidity classes and the more descriptive three-class solution is characterized by: (1) an OCD simplex class, in which major depressive disorder (MDD) is the most frequent additional disorder; (2) an OCD co-morbid tic-related class, in which tics are prominent and affective syndromes are considerably rarer; and (3) an OCD co-morbid affective-related class in which PD and affective syndromes are highly represented. The OCD co-morbid tic-related class is predominantly male and characterized by high conscientiousness. The OCD co-morbid affective-related class is predominantly female, has a young age at onset, obsessive–compulsive personality disorder (OCPD) features, high scores on the ‘taboo’ factor of OCD symptoms, and low conscientiousness. Conclusions OCD can be classified into three classes based on co-morbidity. Membership within a class is differentially associated with other clinical characteristics. These classes, if replicated, should have important implications for research and clinical endeavors. PMID:19046474

  13. Adjustment capacity of maritime pine cambial activity in drought-prone environments.

    PubMed

    Vieira, Joana; Campelo, Filipe; Rossi, Sergio; Carvalho, Ana; Freitas, Helena; Nabais, Cristina

    2015-01-01

    Intra-annual density fluctuations (IADFs) are anatomical features formed in response to changes in the environmental conditions within the growing season. These anatomical features are commonly observed in Mediterranean pines, being more frequent in younger and wider tree rings. However, the process behind IADF formation is still unknown. Weekly monitoring of cambial activity and wood formation would fill this void. Although studies describing cambial activity and wood formation have become frequent, this knowledge is still fragmentary in the Mediterranean region. Here we present data from the monitoring of cambial activity and wood formation in two diameter classes of maritime pine (Pinus pinaster Ait.), over two years, in order to test: (i) whether the differences in stem diameter in an even-aged stand were due to timings and/or rates of xylogenesis; (ii) if IADFs were more common in large trees; and (iii) if their formation is triggered by cambial resumption after the summer drought. Larger trees showed higher rates of cell production and longer growing seasons, due to an earlier start and later end of xylogenesis. When a drier winter occurs, larger trees were more affected, probably limiting xylogenesis in the summer months. In both diameter classes a latewood IADF was formed in 2012 in response to late-September precipitation, confirming that the timing of the precipitation event after the summer drought is crucial in determining the resumption of cambial activity and whether or not an IADF is formed. It was the first time that the formation of a latewood IADF was monitored at a weekly time scale in maritime pine. The capacity of maritime pine to adjust cambial activity to the current environmental conditions represents a valuable strategy under the future climate change conditions.

  14. Status and trends of the rainbow trout population in the Lees Ferry reach of the Colorado River downstream from Glen Canyon Dam, Arizona, 1991–2009

    USGS Publications Warehouse

    Makinster, Andrew S.; Persons, William R.; Avery, Luke A.

    2011-01-01

    The Lees Ferry reach of the Colorado River, a 25-kilometer segment of river located immediately downstream from Glen Canyon Dam, has contained a nonnative rainbow trout (Oncorhynchus mykiss) sport fishery since it was first stocked in 1964. The fishery has evolved over time in response to changes in dam operations and fish management. Long-term monitoring of the rainbow trout population downstream of Glen Canyon Dam is an essential component of the Glen Canyon Dam Adaptive Management Program. A standardized sampling design was implemented in 1991 and has changed several times in response to independent, external scientific-review recommendations and budget constraints. Population metrics (catch per unit effort, proportional stock density, and relative condition) were estimated from 1991 to 2009 by combining data collected at fixed sampling sites during this time period and at random sampling sites from 2002 to 2009. The validity of combining population metrics for data collected at fixed and random sites was confirmed by a one-way analysis of variance by fish-length class size. Analysis of the rainbow trout population metrics from 1991 to 2009 showed that the abundance of rainbow trout increased from 1991 to 1997, following implementation of a more steady flow regime, but declined from about 2000 to 2007. Abundance in 2008 and 2009 was high compared to previous years, which was likely the result of increased early survival caused by improved habitat conditions following the 2008 high-flow experiment at Glen Canyon Dam. Proportional stock density declined between 1991 and 2006, reflecting increased natural reproduction and large numbers of small fish in samples. Since 2001, the proportional stock density has been relatively stable. Relative condition varied with size class of rainbow trout but has been relatively stable since 1991 for fish smaller than 152 millimeters (mm), except for a substantial decrease in 2009. Relative condition was more variable for larger size classes, and substantial decreases were observed for the 152-304-mm size class in 2009 and 305-405-mm size class in 2008 that persisted into 2009.

  15. Breeding birds in managed forests on public conservation lands in the Mississippi Alluvial Valley

    USGS Publications Warehouse

    Twedt, Daniel J.; Wilson, R. Randy

    2017-01-01

    Managers of public conservation lands in the Mississippi Alluvial Valley have implemented forest management strategies to improve bottomland hardwood habitat for target wildlife species. Through implementation of various silvicultural practices, forest managers have sought to attain forest structural conditions (e.g., canopy cover, basal area, etc.) within values postulated to benefit wildlife. We evaluated data from point count surveys of breeding birds on 180 silviculturally treated stands (1049 counts) that ranged from 1 to 20 years post-treatment and 134 control stands (676 counts) that had not been harvested for >20 years. Birds detected during 10-min counts were recorded within four distance classes and three time intervals. Avian diversity was greater on treated stands than on unharvested stands. Of 42 commonly detected species, six species including Prothonotary Warbler (Prothonotaria citrea) and Acadian Flycatcher (Empidonax virescens) were indicative of control stands. Similarly, six species including Indigo Bunting (Passerina cyanea) and Yellow-breasted Chat (Icteria virens) were indicative of treated stands. Using a removal model to assess probability of detection, we evaluated occupancy of bottomland forests at two spatial scales (stands and points within occupied stands). Wildlife-forestry treatment improved predictive models of species occupancy for 18 species. We found years post treatment (range = 1–20), total basal area, and overstory canopy were important species-specific predictors of occupancy, whereas variability in basal area was not. In addition, we used a removal model to estimate species-specific probability of availability for detection, and a distance model to estimate effective detection radius. We used these two estimated parameters to derive species densities and 95% confidence intervals for treated and unharvested stands. Avian densities differed between treated and control stands for 16 species, but only Common Yellowthroat (Geothlypis trichas) and Yellow-breasted Chat had greater densities on treated stands.

  16. Water in star-forming regions with Herschel (WISH). V. The physical conditions in low-mass protostellar outflows revealed by multi-transition water observations

    NASA Astrophysics Data System (ADS)

    Mottram, J. C.; Kristensen, L. E.; van Dishoeck, E. F.; Bruderer, S.; San José-García, I.; Karska, A.; Visser, R.; Santangelo, G.; Benz, A. O.; Bergin, E. A.; Caselli, P.; Herpin, F.; Hogerheijde, M. R.; Johnstone, D.; van Kempen, T. A.; Liseau, R.; Nisini, B.; Tafalla, M.; van der Tak, F. F. S.; Wyrowski, F.

    2014-12-01

    Context. Outflows are an important part of the star formation process as both the result of ongoing active accretion and one of the main sources of mechanical feedback on small scales. Water is the ideal tracer of these effects because it is present in high abundance for the conditions expected in various parts of the protostar, particularly the outflow. Aims: We constrain and quantify the physical conditions probed by water in the outflow-jet system for Class 0 and I sources. Methods: We present velocity-resolved Herschel HIFI spectra of multiple water transitions observed towards 29 nearby Class 0/I protostars as part of the WISH guaranteed time key programme. The lines are decomposed into different Gaussian components, with each component related to one of three parts of the protostellar system: quiescent envelope, cavity shock and spot shocks in the jet and at the base of the outflow. We then use non-LTE RADEX models to constrain the excitation conditions present in the two outflow-related components. Results: Water emission at the source position is optically thick but effectively thin, with line ratios that do not vary with velocity, in contrast to CO. The physical conditions of the cavity and spot shocks are similar, with post-shock H2 densities of order 10^5-10^8 cm^-3 and H2O column densities of order 10^16-10^18 cm^-2. H2O emission originates in compact emitting regions: for the spot shocks these correspond to point sources with radii of order 10-200 AU, while for the cavity shocks these come from a thin layer along the outflow cavity wall with thickness of order 1-30 AU. Conclusions: Water emission at the source position traces two distinct kinematic components in the outflow: J shocks at the base of the outflow or in the jet, and C shocks in a thin layer in the cavity wall. The similarity of the physical conditions is in contrast to off-source determinations which show similar densities but lower column densities and larger filling factors. We propose that this is due to the differences in shock properties and geometry between these positions. Class I sources have similar excitation conditions to Class 0 sources, but generally smaller line-widths and emitting region sizes. We suggest that it is the velocity of the wind driving the outflow, rather than the decrease in envelope density or mass, that is the cause of the decrease in H2O intensity between Class 0 and I sources. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. Appendices are available in electronic form at http://www.aanda.org. Reduced spectra are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/572/A21

  17. Stochastic analysis of particle movement over a dune bed

    USGS Publications Warehouse

    Lee, Baum K.; Jobson, Harvey E.

    1977-01-01

    Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration and, therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed elevation. (Woodard-USGS)
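
    The step-and-rest picture is easy to simulate. The sketch below draws gamma-distributed rest periods and step lengths for a population of particles and reports the travel distance after a fixed time; the gamma parameters are illustrative, and the dependence on bed elevation is omitted.

```python
# Random-walk transport with gamma-distributed rest periods and step lengths
# (illustrative parameters, not the flume-fitted values).
import numpy as np

rng = np.random.default_rng(5)
step_shape, step_scale = 2.0, 0.15       # step length (m)
rest_shape, rest_scale = 1.5, 40.0       # rest period (s)

def positions_after(t_end, n_particles=10000):
    pos = np.zeros(n_particles)
    t = np.zeros(n_particles)
    active = np.ones(n_particles, dtype=bool)
    while active.any():
        # each active particle rests, then (if still within t_end) takes a step
        t[active] += rng.gamma(rest_shape, rest_scale, active.sum())
        active &= t <= t_end
        pos[active] += rng.gamma(step_shape, step_scale, active.sum())
    return pos

pos = positions_after(3600.0)
print("mean travel distance after 1 h (m):", round(pos.mean(), 2))
print("std of travel distance (m):", round(pos.std(), 2))
```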

  18. Downlink Probability Density Functions for EOS-McMurdo Sound

    NASA Technical Reports Server (NTRS)

    Christopher, P.; Jackson, A. H.

    1996-01-01

    The visibility times and communication link dynamics for the Earth Observations Satellite (EOS)-McMurdo Sound direct downlinks have been studied. The 16 day EOS periodicity may be shown with the Goddard Trajectory Determination System (GTDS) and the entire 16 day period should be simulated for representative link statistics. We desire many attributes of the downlink, however, and a faster orbital determination method is desirable. We use the method of osculating elements for speed and accuracy in simulating the EOS orbit. The accuracy of the method of osculating elements is demonstrated by closely reproducing the observed 16 day Landsat periodicity. An autocorrelation function method is used to show the correlation spike at 16 days. The entire 16 day record of passes over McMurdo Sound is then used to generate statistics for innage time, outage time, elevation angle, antenna angle rates, and propagation loss. The elevation angle probability density function is compared with a 1967 analytic approximation which has been used for medium to high altitude satellites. One practical result of this comparison is seen to be the rare occurrence of zenith passes. The new result is functionally different from the earlier result, with a heavy emphasis on low elevation angles. EOS is one of a large class of sun synchronous satellites which may be downlinked to McMurdo Sound. We examine delay statistics for an entire group of sun synchronous satellites ranging from 400 km to 1000 km altitude. Outage probability density function results are presented three dimensionally.

  19. Phonotactic Probability Effects in Children Who Stutter

    PubMed Central

    Anderson, Julie D.; Byrd, Courtney T.

    2008-01-01

    Purpose The purpose of this study was to examine the influence of phonotactic probability, the frequency of different sound segments and segment sequences, on the overall fluency with which words are produced by preschool children who stutter (CWS), as well as to determine whether it has an effect on the type of stuttered disfluency produced. Method A 500+ word language sample was obtained from 19 CWS. Each stuttered word was randomly paired with a fluently produced word that closely matched it in grammatical class, word length, familiarity, word and neighborhood frequency, and neighborhood density. Phonotactic probability values were obtained for the stuttered and fluent words from an online database. Results Phonotactic probability did not have a significant influence on the overall susceptibility of words to stuttering, but it did impact the type of stuttered disfluency produced. Specifically, single-syllable word repetitions were significantly lower in phonotactic probability than fluently produced words, as well as part-word repetitions and sound prolongations. Conclusions In general, the differential impact of phonotactic probability on the type of stuttering-like disfluency produced by young CWS provides some support for the notion that different disfluency types may originate in the disruption of different levels of processing. PMID:18658056

  20. Modeling pore corrosion in normally open gold- plated copper connectors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien

    2008-09-01

    The goal of this study is to model the electrical response of gold plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.

  1. A population-based tissue probability map-driven level set method for fully automated mammographic density estimations.

    PubMed

    Kim, Youngwoo; Hong, Byung Woo; Kim, Seung Ja; Kim, Jong Hyo

    2014-07-01

    A major challenge when distinguishing glandular tissues on mammograms, especially for area-based estimations, lies in determining a boundary on a hazy transition zone from adipose to glandular tissues. This stems from the nature of mammography, which is a projection of superimposed tissues consisting of different structures. In this paper, the authors present a novel segmentation scheme which incorporates the learned prior knowledge of experts into a level set framework for fully automated mammographic density estimations. The authors modeled the learned knowledge as a population-based tissue probability map (PTPM) that was designed to capture the classification of experts' visual systems. The PTPM was constructed using an image database of a selected population consisting of 297 cases. Three mammogram experts extracted regions for dense and fatty tissues on digital mammograms, which was an independent subset used to create a tissue probability map for each ROI based on its local statistics. This tissue class probability was taken as a prior in the Bayesian formulation and was incorporated into a level set framework as an additional term to control the evolution and followed the energy surface designed to reflect experts' knowledge as well as the regional statistics inside and outside of the evolving contour. A subset of 100 digital mammograms, which was not used in constructing the PTPM, was used to validate the performance. The energy was minimized when the initial contour reached the boundary of the dense and fatty tissues, as defined by experts. The correlation coefficient between mammographic density measurements made by experts and measurements by the proposed method was 0.93, while that with the conventional level set was 0.47. The proposed method showed a marked improvement over the conventional level set method in terms of accuracy and reliability. This result suggests that the proposed method successfully incorporated the learned knowledge of the experts' visual systems and has potential to be used as an automated and quantitative tool for estimations of mammographic breast density levels.

  2. The influence of surface properties on the plasma dynamics in radio-frequency driven oxygen plasmas: Measurements and simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greb, Arthur; Niemi, Kari; O'Connell, Deborah

    2013-12-09

    Plasma parameters and dynamics in capacitively coupled oxygen plasmas are investigated for different surface conditions. Metastable species concentration, electronegativity, spatial distribution of particle densities as well as the ionization dynamics are significantly influenced by the surface loss probability of metastable singlet delta oxygen (SDO). Simulated surface conditions are compared to experiments in the plasma-surface interface region using phase resolved optical emission spectroscopy. It is demonstrated how in-situ measurements of excitation features can be used to determine SDO surface loss probabilities for different surface materials.

  3. Area change reporting using the desktop FIADB

    Treesearch

    Patrick D. Miles; Mark H. Hansen

    2012-01-01

    The estimation of area change between two FIA inventories is complicated by the "mapping" of subplots. Subplots can be subdivided or mapped into forest and nonforest conditions, and forest conditions can be further mapped based on distinct changes in reserved status, owner group, forest type, stand-size class, regeneration status, and stand density. The...

  4. On the Asymmetric Zero-Range in the Rarefaction Fan

    NASA Astrophysics Data System (ADS)

    Gonçalves, Patrícia

    2014-02-01

    We consider one-dimensional asymmetric zero-range processes starting from a step decreasing profile leading, in the hydrodynamic limit, to the rarefaction fan of the associated hydrodynamic equation. Under that initial condition, and for totally asymmetric jumps, we show that the weighted sum of joint probabilities for second class particles sharing the same site is convergent and we compute its limit. For partially asymmetric jumps, we derive the Law of Large Numbers for a second class particle, under the initial configuration in which all positive sites are empty, all negative sites are occupied with infinitely many first class particles and there is a single second class particle at the origin. Moreover, we prove that among the infinite characteristics emanating from the position of the second class particle it picks randomly one of them. The randomness is given in terms of the weak solution of the hydrodynamic equation, through some sort of renormalization function. By coupling the constant-rate totally asymmetric zero-range with the totally asymmetric simple exclusion, we derive limiting laws for more general initial conditions.

  5. Constraints on rapidity-dependent initial conditions from charged-particle pseudorapidity densities and two-particle correlations

    NASA Astrophysics Data System (ADS)

    Ke, Weiyao; Moreland, J. Scott; Bernhard, Jonah E.; Bass, Steffen A.

    2017-10-01

    We study the initial three-dimensional spatial configuration of the quark-gluon plasma (QGP) produced in relativistic heavy-ion collisions using centrality and pseudorapidity-dependent measurements of the medium's charged particle density and two-particle correlations. A cumulant-generating function is first used to parametrize the rapidity dependence of local entropy deposition and extend arbitrary boost-invariant initial conditions to nonzero beam rapidities. The model is then compared to p +Pb and Pb + Pb charged-particle pseudorapidity densities and two-particle pseudorapidity correlations and systematically optimized using Bayesian parameter estimation to extract high-probability initial condition parameters. The optimized initial conditions are then compared to a number of experimental observables including the pseudorapidity-dependent anisotropic flows, event-plane decorrelations, and flow correlations. We find that the form of the initial local longitudinal entropy profile is well constrained by these experimental measurements.

  6. Estimating the Probability of Elevated Nitrate Concentrations in Ground Water in Washington State

    USGS Publications Warehouse

    Frans, Lonna M.

    2008-01-01

    Logistic regression was used to relate anthropogenic (manmade) and natural variables to the occurrence of elevated nitrate concentrations in ground water in Washington State. Variables that were analyzed included well depth, ground-water recharge rate, precipitation, population density, fertilizer application amounts, soil characteristics, hydrogeomorphic regions, and land-use types. Two models were developed: one with and one without the hydrogeomorphic regions variable. The variables in both models that best explained the occurrence of elevated nitrate concentrations (defined as concentrations of nitrite plus nitrate as nitrogen greater than 2 milligrams per liter) were the percentage of agricultural land use in a 4-kilometer radius of a well, population density, precipitation, soil drainage class, and well depth. Based on the relations between these variables and measured nitrate concentrations, logistic regression models were developed to estimate the probability of nitrate concentrations in ground water exceeding 2 milligrams per liter. Maps of Washington State were produced that illustrate these estimated probabilities for wells drilled to 145 feet below land surface (median well depth) and the estimated depth to which wells would need to be drilled to have a 90-percent probability of drawing water with a nitrate concentration less than 2 milligrams per liter. Maps showing the estimated probability of elevated nitrate concentrations indicated that the agricultural regions are most at risk followed by urban areas. The estimated depths to which wells would need to be drilled to have a 90-percent probability of obtaining water with nitrate concentrations less than 2 milligrams per liter exceeded 1,000 feet in the agricultural regions; whereas, wells in urban areas generally would need to be drilled to depths in excess of 400 feet.
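
    A minimal version of such a model can be sketched with a logistic regression on synthetic explanatory variables patterned after those listed above (agricultural land use, population density, precipitation, well depth); the data and coefficients below are placeholders, not the fitted Washington State model.

```python
# Logistic regression for the probability of elevated nitrate (> 2 mg/L)
# on synthetic explanatory variables (placeholder data and coefficients).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 500
X = np.column_stack([
    rng.uniform(0, 100, n),      # % agricultural land within 4 km of the well
    rng.lognormal(4, 1, n),      # population density
    rng.uniform(200, 2500, n),   # annual precipitation (mm)
    rng.uniform(10, 600, n),     # well depth (ft)
])
logit = -2 + 0.03 * X[:, 0] + 0.002 * X[:, 1] - 0.0005 * X[:, 2] - 0.004 * X[:, 3]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # elevated-nitrate indicator

Xs = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize for a stable fit
model = LogisticRegression().fit(Xs, y)
p_elevated = model.predict_proba(Xs)[:, 1]         # probability of > 2 mg/L
print("mean predicted probability:", round(p_elevated.mean(), 3))
print("coefficient signs:", np.sign(model.coef_[0]).astype(int))
```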

  7. Assessing hypotheses about nesting site occupancy dynamics

    USGS Publications Warehouse

    Bled, Florent; Royle, J. Andrew; Cam, Emmanuelle

    2011-01-01

    Hypotheses about habitat selection developed in the evolutionary ecology framework assume that individuals, under some conditions, select breeding habitat based on expected fitness in different habitat. The relationship between habitat quality and fitness may be reflected by breeding success of individuals, which may in turn be used to assess habitat quality. Habitat quality may also be assessed via local density: if high-quality sites are preferentially used, high density may reflect high-quality habitat. Here we assessed whether site occupancy dynamics vary with site surrogates for habitat quality. We modeled nest site use probability in a seabird subcolony (the Black-legged Kittiwake, Rissa tridactyla) over a 20-year period. We estimated site persistence (an occupied site remains occupied from time t to t + 1) and colonization through two subprocesses: first colonization (site creation at the timescale of the study) and recolonization (a site is colonized again after being deserted). Our model explicitly incorporated site-specific and neighboring breeding success and conspecific density in the neighborhood. Our results provided evidence that reproductively "successful'' sites have a higher persistence probability than "unsuccessful'' ones. Analyses of site fidelity in marked birds and of survival probability showed that high site persistence predominantly reflects site fidelity, not immediate colonization by new owners after emigration or death of previous owners. There is a negative quadratic relationship between local density and persistence probability. First colonization probability decreases with density, whereas recolonization probability is constant. This highlights the importance of distinguishing initial colonization and recolonization to understand site occupancy. All dynamics varied positively with neighboring breeding success. We found evidence of a positive interaction between site-specific and neighboring breeding success. We addressed local population dynamics using a site occupancy approach integrating hypotheses developed in behavioral ecology to account for individual decisions. This allows development of models of population and metapopulation dynamics that explicitly incorporate ecological and evolutionary processes.

  8. Domestic wells have high probability of pumping septic tank leachate

    NASA Astrophysics Data System (ADS)

    Horn, J. E.; Harter, T.

    2011-06-01

    Onsite wastewater treatment systems such as septic systems are common in rural and semi-rural areas around the world; in the US, about 25-30 % of households are served by a septic system and a private drinking water well. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. Particularly in areas with small lots, thus a high septic system density, these typically shallow wells are prone to contamination by septic system leachate. Typically, mass balance approaches are used to determine a maximum septic system density that would prevent contamination of the aquifer. In this study, we estimate the probability of a well pumping partially septic system leachate. A detailed groundwater and transport model is used to calculate the capture zone of a typical drinking water well. A spatial probability analysis is performed to assess the probability that a capture zone overlaps with a septic system drainfield depending on aquifer properties, lot and drainfield size. We show that a high septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We conclude that mass balances calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances which experience limited attenuation, and those being harmful even in low concentrations.

  9. Landslide susceptibility map: from research to application

    NASA Astrophysics Data System (ADS)

    Fiorucci, Federica; Reichenbach, Paola; Ardizzone, Francesca; Rossi, Mauro; Felicioni, Giulia; Antonini, Guendalina

    2014-05-01

    A susceptibility map is an important and essential tool in environmental planning, used to evaluate landslide hazard and risk and for a correct and responsible management of the territory. Landslide susceptibility is the likelihood of a landslide occurring in an area on the basis of local terrain conditions. It can be expressed as the probability that a given region will be affected by landslides, i.e. an estimate of "where" landslides are likely to occur. In this work we present two examples of landslide susceptibility maps prepared for the Umbria Region and for the Perugia Municipality. These two maps were produced following official requests from the Regional and Municipal governments to the Research Institute for the Hydrogeological Protection (CNR-IRPI). The susceptibility map prepared for the Umbria Region builds on previous agreements focused on preparing: i) a landslide inventory map that was included in the Urban Territorial Planning (PUT) and ii) a series of maps for the Regional Plan for Multi-risk Prevention. The activities carried out for the Umbria Region were focused on defining and applying methods and techniques for landslide susceptibility zonation. Susceptibility maps were prepared exploiting a multivariate statistical model (linear discriminant analysis) for the five Civil Protection Alert Zones defined in the regional territory. The five resulting maps were tested and validated using the spatial distribution of recent landslide events that occurred in the region. The susceptibility map for the Perugia Municipality was prepared to be integrated as one of the cartographic products in the Municipal development plan (PRG - Piano Regolatore Generale) as required by the existing legislation. At the strategic level, one of the main objectives of the PRG is to establish a framework of knowledge and legal aspects for the management of geo-hydrological risk. At the national level, most of the susceptibility maps prepared for the PRG were, and still are, obtained by qualitatively classifying the territory according to slope classes. For the Perugia Municipality the susceptibility map was obtained by combining the results of multivariate statistical models with a landslide density map. In particular, in the first phase a susceptibility zonation was prepared using different single and combined probabilistic multivariate statistical techniques. The zonation was then combined and compared with the landslide density map in order to reclassify the false negatives (portions of the territory classified by the model as stable but affected by slope failures). The resulting semi-quantitative map was classified into five susceptibility classes. For each class, a set of technical regulations was established to manage the territory.
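
    A compact illustration of the statistical core of such zonations is given below: a linear discriminant model fitted to synthetic terrain attributes, with the posterior probability of instability binned into five susceptibility classes; the variables, coefficients, and thresholds are illustrative, not those of the Umbria or Perugia models.

```python
# Linear discriminant susceptibility model on synthetic terrain attributes,
# with the instability probability binned into five susceptibility classes.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
n = 1000
X = np.column_stack([
    rng.uniform(0, 45, n),        # slope (degrees)
    rng.uniform(0, 360, n),       # aspect (degrees)
    rng.integers(1, 6, n),        # lithology class code
])
logit = -4 + 0.15 * X[:, 0] + 0.3 * (X[:, 2] >= 4)
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # landslide presence/absence

lda = LinearDiscriminantAnalysis().fit(X, y)
susceptibility = lda.predict_proba(X)[:, 1]        # probability of instability
classes = np.digitize(susceptibility, [0.2, 0.4, 0.6, 0.8]) + 1
print("counts per susceptibility class (1-5):",
      np.bincount(classes, minlength=6)[1:])
```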

  10. Synthetic CT for MRI-based liver stereotactic body radiotherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Bredfeldt, Jeremy S.; Liu, Lianli; Feng, Mary; Cao, Yue; Balter, James M.

    2017-04-01

    A technique for generating MRI-derived synthetic CT volumes (MRCTs) is demonstrated in support of adaptive liver stereotactic body radiation therapy (SBRT). Under IRB approval, 16 subjects with hepatocellular carcinoma were scanned using a single MR pulse sequence (T1 Dixon). Air-containing voxels were identified by intensity thresholding on T1-weighted, water and fat images. The envelope of the anterior vertebral bodies was segmented from the fat image and fuzzy-C-means (FCM) was used to classify each non-air voxel as mid-density, lower-density, bone, or marrow in the abdomen, with only bone and marrow classified within the vertebral body envelope. MRCT volumes were created by integrating the product of the FCM class probability with its assigned class density for each voxel. MRCTs were deformably aligned with corresponding planning CTs and 2-ARC-SBRT-VMAT plans were optimized on MRCTs. Fluence was copied onto the CT density grids, dose recalculated, and compared. The liver, vertebral bodies, kidneys, spleen and cord had median Hounsfield unit differences of less than 60. Median target dose metrics were all within 0.1 Gy with maximum differences less than 0.5 Gy. OAR dose differences were similarly small (median: 0.03 Gy, std:0.26 Gy). Results demonstrate that MRCTs derived from a single abdominal imaging sequence are promising for use in SBRT dose calculation.
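
    The membership-weighted density idea can be sketched with a small fuzzy C-means routine on voxel intensities, followed by a synthetic-CT value formed as the membership-weighted sum of assumed class densities; the intensities, class count, and Hounsfield values below are illustrative, and the vertebral-body handling and deformable alignment of the paper are omitted.

```python
# Fuzzy C-means memberships on (synthetic) voxel intensities, then a
# synthetic-CT value as the membership-weighted sum of assumed class HUs.
import numpy as np

rng = np.random.default_rng(8)
intensities = np.concatenate([rng.normal(mu, 10.0, 500) for mu in (40, 120, 300)])

def fuzzy_c_means(x, k=3, m=2.0, n_iter=100):
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                  # memberships sum to 1
        centers = (u**m * x[:, None]).sum(axis=0) / (u**m).sum(axis=0)
    return u, centers

u, centers = fuzzy_c_means(intensities)
order = np.argsort(centers)                   # classes ordered by MR intensity
class_hu = np.array([-80.0, 40.0, 300.0])     # assumed class densities (HU)
mrct = (u[:, order] * class_hu).sum(axis=1)   # membership-weighted HU per voxel
print("cluster centres:", np.round(np.sort(centers), 1))
print("synthetic-CT HU range:", round(mrct.min(), 1), "to", round(mrct.max(), 1))
```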

  11. The use of sensory perception indicators for improving the characterization and modelling of total petroleum hydrocarbon (TPH) grade in soils.

    PubMed

    Roxo, Sónia; de Almeida, José António; Matias, Filipa Vieira; Mata-Lima, Herlander; Barbosa, Sofia

    2016-03-01

    This paper proposes a multistep approach for creating a 3D stochastic model of total petroleum hydrocarbon (TPH) grade in potentially polluted soils of a deactivated oil storage site by using chemical analysis results as primary or hard data and classes of sensory perception variables as secondary or soft data. First, the statistical relationship between the sensory perception variables (e.g. colour, odour and oil-water reaction) and TPH grade is analysed, after which the sensory perception variable exhibiting the highest correlation is selected (oil-water reaction in this case study). The probabilities of cells belonging to classes of oil-water reaction are then estimated for the entire soil volume using indicator kriging. Next, local histograms of TPH grade for each grid cell are computed, combining the probabilities of belonging to a specific sensory perception indicator class and conditional to the simulated values of TPH grade. Finally, simulated images of TPH grade are generated by using the P-field simulation algorithm, utilising the local histograms of TPH grade for each grid cell. The set of simulated TPH values allows several calculations to be performed, such as average values, local uncertainties and the probability of the TPH grade of the soil exceeding a specific threshold value.

  12. Open Quantum Random Walks on the Half-Line: The Karlin-McGregor Formula, Path Counting and Foster's Theorem

    NASA Astrophysics Data System (ADS)

    Jacq, Thomas S.; Lardizabal, Carlos F.

    2017-11-01

    In this work we consider open quantum random walks on the non-negative integers. By considering orthogonal matrix polynomials we are able to describe transition probability expressions for classes of walks via a matrix version of the Karlin-McGregor formula. We focus on absorbing boundary conditions and, for simpler classes of examples, we consider path counting and the corresponding combinatorial tools. A non-commutative version of the gambler's ruin is studied by obtaining the probability of reaching a certain fortune and the mean time to reach a fortune or ruin in terms of generating functions. In the case of the Hadamard coin, a counting technique for boundary restricted paths in a lattice is also presented. We discuss an open quantum version of Foster's Theorem for the expected return time together with applications.

  13. TRPM7 Is Required for Normal Synapse Density, Learning, and Memory at Different Developmental Stages.

    PubMed

    Liu, Yuqiang; Chen, Cui; Liu, Yunlong; Li, Wei; Wang, Zhihong; Sun, Qifeng; Zhou, Hang; Chen, Xiangjun; Yu, Yongchun; Wang, Yun; Abumaria, Nashat

    2018-06-19

    The TRPM7 chanzyme contributes to several biological and pathological processes in different tissues. However, its role in the CNS under physiological conditions remains unclear. Here, we show that TRPM7 knockdown in hippocampal neurons reduces structural synapse density. The synapse density is rescued by the α-kinase domain in the C terminus but not by the ion channel region of TRPM7 or by increasing extracellular concentrations of Mg2+ or Zn2+. Early postnatal conditional knockout of TRPM7 in mice impairs learning and memory and reduces synapse density and plasticity. TRPM7 knockdown in the hippocampus of adult rats also impairs learning and memory and reduces synapse density and synaptic plasticity. In knockout mice, restoring expression of the α-kinase domain in the brain rescues synapse density/plasticity and memory, probably by interacting with and phosphorylating cofilin. These results suggest that brain TRPM7 is important for normal synaptic and cognitive functions under physiological, non-pathological conditions. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  14. Population density shapes patterns of survival and reproduction in Eleutheria dichotoma (Hydrozoa: Anthoathecata).

    PubMed

    Dańko, Aleksandra; Schaible, Ralf; Pijanowska, Joanna; Dańko, Maciej J

    2018-01-01

    Budding hydromedusae have high reproductive rates due to asexual reproduction and can occur in high population densities along the coasts, specifically in tidal pools. In laboratory experiments, we investigated the effects of population density on the survival and reproductive strategies of a single clone of Eleutheria dichotoma. We found that sexual reproduction occurs at the highest rate at medium population densities. Increased sexual reproduction was associated with lower budding (asexual reproduction) and survival probability. Sexual reproduction results in the production of motile larvae that can, in contrast to medusae, seek to escape unfavorable conditions by actively looking for better environments. The successful settlement of a larva marks the start of the polyp stage, which is probably more resistant to unfavorable environmental conditions. This is the first study that has examined the life-history strategies of the budding hydromedusa E. dichotoma by conducting a long-term experiment with a relatively large sample size that allowed for the examination of age-specific mortality and reproductive rates. We found that most sexual and asexual reproduction occurred at the beginning of life following a very rapid process of maturation. The parametric models fitted to the mortality data showed that population density was associated with an increase in the rate of aging, an increase in the level of the late-life mortality plateau, and a decrease in the hidden heterogeneity in individual mortality rates. The effects of population density on life-history traits are discussed in the context of resource allocation and the r/K-strategy continuum concept.

  15. Hepatitis disease detection using Bayesian theory

    NASA Astrophysics Data System (ADS)

    Maseleno, Andino; Hidayati, Rohmah Zahroh

    2017-02-01

    This paper presents hepatitis disease diagnosis using Bayesian theory for better understanding of the theory. In this research, we used Bayesian theory for detecting hepatitis disease and displaying the result of the diagnosis process. Bayesian theory, rediscovered and refined by Laplace, starts from known prior probabilities and conditional probability density parameters and applies Bayes' theorem to calculate the corresponding posterior probability, which is then used for inference and decision making. Bayesian methods combine existing knowledge, the prior probabilities, with additional knowledge derived from new data, the likelihood function. The initial symptoms of hepatitis include malaise, fever and headache, and the system computes the probability of hepatitis given the presence of malaise, fever, and headache. The results revealed that the Bayesian approach successfully identified the existence of hepatitis disease.
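    A minimal sketch of the Bayes-rule computation described above, assuming conditional independence of the three symptoms; the prior and likelihood values are placeholders, not figures from the paper.

      # Hedged sketch: posterior probability of hepatitis given three observed symptoms,
      # via Bayes' theorem with a naive (conditional independence) assumption.
      prior = {"hepatitis": 0.10, "no_hepatitis": 0.90}          # assumed prior probabilities

      # P(symptom | disease state), assumed values
      likelihood = {
          "hepatitis":    {"malaise": 0.80, "fever": 0.70, "headache": 0.60},
          "no_hepatitis": {"malaise": 0.20, "fever": 0.10, "headache": 0.30},
      }
      observed = ["malaise", "fever", "headache"]

      def joint(state):
          p = prior[state]
          for s in observed:
              p *= likelihood[state][s]
          return p

      evidence = joint("hepatitis") + joint("no_hepatitis")
      posterior = joint("hepatitis") / evidence
      print(f"P(hepatitis | malaise, fever, headache) = {posterior:.3f}")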

  16. The computer simulation of automobile use patterns for defining battery requirements for electric cars

    NASA Technical Reports Server (NTRS)

    Schwartz, H.-J.

    1976-01-01

    The modeling process of a complex system, based on the calculation and optimization of the system parameters, is complicated in that some parameters can be expressed only as probability distributions. In the present paper, a Monte Carlo technique was used to determine the daily range requirements of an electric road vehicle in the United States from probability distributions of trip lengths, frequencies, and average annual mileage data. The analysis shows that a daily range of 82 miles meets 95% of car-owner requirements at all times, with the exception of long vacation trips. Further, it is shown that the requirement of a daily range of 82 miles can be met by an (intermediate-level) battery technology characterized by an energy density of 30 to 50 Watt-hours per pound. Candidate batteries in this class are nickel-zinc, nickel-iron, and iron-air. These results imply that long-term research goals for battery systems should be focused on lower cost and longer service life, rather than on higher energy densities.
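    A hedged sketch of the Monte Carlo step described above: daily mileage is sampled from assumed trip-frequency and trip-length distributions, and the mileage that covers 95% of days is read off. The distributions and parameters are illustrative assumptions, not the study's 1976 data.

      # Hedged sketch: Monte Carlo estimate of the daily range covering 95% of days.
      import numpy as np

      rng = np.random.default_rng(42)
      n_days = 100_000

      trips_per_day = rng.poisson(3.0, n_days)               # assumed trip frequency
      daily_miles = np.array([
          rng.lognormal(mean=1.7, sigma=0.8, size=k).sum() if k else 0.0
          for k in trips_per_day                             # assumed trip-length distribution
      ])

      print(f"95th percentile of daily mileage: {np.percentile(daily_miles, 95):.1f} miles")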

  17. Vegetative composition in forested areas following application of desired forest condition treatments

    Treesearch

    Trent A. Danley; Andrew W. Ezell; Emily B. Schultz; John D. Hodges

    2015-01-01

    Desired forest conditions, or DFCs, are recently created parameters which strive to create diverse stands of hardwoods of various species and age classes, along with varying densities and canopy gaps, through the use of uneven-aged silvicultural methods and repeated stand entries. Little research has been conducted to examine residual stand composition and hardwood...

  18. A GIS-based automated procedure for landslide susceptibility mapping by the Conditional Analysis method: the Baganza valley case study (Italian Northern Apennines)

    NASA Astrophysics Data System (ADS)

    Clerici, Aldo; Perego, Susanna; Tellini, Claudio; Vescovi, Paolo

    2006-08-01

    Among the many GIS-based multivariate statistical methods for landslide susceptibility zonation, the so-called “Conditional Analysis method” holds a special place for its conceptual simplicity. In fact, in this method landslide susceptibility is simply expressed as landslide density in correspondence with different combinations of instability-factor classes. To overcome the operational complexity connected to the long, tedious and error-prone sequence of commands required by the procedure, a shell script mainly based on the GRASS GIS was created. The script, starting from a landslide inventory map and a number of factor maps, automatically carries out the whole procedure, resulting in the construction of a map with five landslide susceptibility classes. A validation procedure allows the reliability of the resulting model to be assessed, while the simple mean deviation of the density values in the factor class combinations helps to evaluate the goodness of the landslide density distribution. The procedure was applied to a relatively small basin (167 km2) in the Italian Northern Apennines considering three landslide types, namely rotational slides, flows and complex landslides, for a total of 1,137 landslides, and five factors, namely lithology, slope angle and aspect, elevation and slope/bedding relations. The analysis of the resulting 31 different models obtained by combining the five factors confirms the role of lithology, slope angle and slope/bedding relations in influencing slope stability.
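    The core Conditional Analysis step, landslide density computed for each combination of instability-factor classes, can be sketched as below; the raster layers are represented as flat arrays of class codes and the data are synthetic, so this illustrates the idea rather than reproducing the GRASS-based script.

      # Hedged sketch: landslide density per Unique Condition Unit (combination of factor classes).
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(7)
      n_cells = 50_000
      cells = pd.DataFrame({
          "lithology":  rng.integers(0, 4, n_cells),
          "slope_cls":  rng.integers(0, 5, n_cells),
          "aspect_cls": rng.integers(0, 4, n_cells),
          "landslide":  rng.random(n_cells) < 0.05,   # True = cell falls inside a mapped landslide
      })

      density = (cells.groupby(["lithology", "slope_cls", "aspect_cls"])["landslide"]
                      .mean()                         # landslide density of each condition unit
                      .rename("susceptibility"))

      # map the density back to every cell and cut into five susceptibility classes
      cells = cells.join(density, on=["lithology", "slope_cls", "aspect_cls"])
      cells["class"] = pd.qcut(cells["susceptibility"], 5, labels=False, duplicates="drop")
      print(density.describe())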

  19. Probability theory for 3-layer remote sensing in ideal gas law environment.

    PubMed

    Ben-David, Avishai; Davidson, Charles E

    2013-08-26

    We extend the probability model for 3-layer radiative transfer [Opt. Express 20, 10004 (2012)] to ideal gas conditions where a correlation exists between transmission and temperature of each of the 3 layers. The effect on the probability density function for the at-sensor radiances is surprisingly small, and thus the added complexity of addressing the correlation can be avoided. The small overall effect is due to (a) small perturbations by the correlation on variance population parameters and (b) cancellation of perturbation terms that appear with opposite signs in the model moment expressions.

  20. Probability Density Functions of the Solar Wind Driver of the Magnetosphere-Ionosphere System

    NASA Astrophysics Data System (ADS)

    Horton, W.; Mays, M. L.

    2007-12-01

    The solar-wind driven magnetosphere-ionosphere system is a complex dynamical system in that it exhibits (1) sensitivity to initial conditions; (2) multiple space-time scales; (3) bifurcation sequences with hysteresis in transitions between attractors; and (4) noncompositionality. This system is modeled by WINDMI--a network of eight coupled ordinary differential equations which describe the transfer of power from the solar wind through the geomagnetic tail, the ionosphere, and ring current in the system. The model captures both storm activity from the plasma ring current energy, which yields a model Dst index result, and substorm activity from the region 1 field aligned current, yielding model AL and AU results. The input to the model is the solar wind driving voltage calculated from ACE solar wind parameter data, which has a regular coherent component and a broad-band turbulent component. Cross correlation functions of the input-output data time series are computed and the conditional probability density function for the occurrence of substorms given earlier IMF conditions is derived. The model shows a high probability of substorms for solar activity that contains a coherent, rotating IMF with magnetic cloud features. For a theoretical model of the imprint of solar convection on the solar wind we have used the Lorenz attractor (Horton et al., PoP, 1999, doi:10.1063/1.873683) as a solar wind driver. The work is supported by NSF grant ATM-0638480.

  1. Domestic wells have high probability of pumping septic tank leachate

    NASA Astrophysics Data System (ADS)

    Bremer, J. E.; Harter, T.

    2012-08-01

    Onsite wastewater treatment systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic (onsite) wastewater treatment system, and many property owners also operate their own domestic well nearby. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. In areas with small lots (thus high spatial septic system densities), shallow domestic wells are prone to contamination by septic system leachate. Mass balance approaches have been used to determine a maximum septic system density that would prevent contamination of groundwater resources. In this study, a source area model based on detailed groundwater flow and transport modeling is applied for a stochastic analysis of domestic well contamination by septic leachate. Specifically, we determine the probability that a source area overlaps with a septic system drainfield as a function of aquifer properties, septic system density and drainfield size. We show that high spatial septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We find that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances that experience limited attenuation, and those that are harmful even at low concentrations (e.g., pathogens).

  2. SU-G-JeP2-02: A Unifying Multi-Atlas Approach to Electron Density Mapping Using Multi-Parametric MRI for Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, S; Tianjin University, Tianjin; Hara, W

    Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem still remains, which is the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1- and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
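    A simplified sketch of the voxel-wise combination described above: a discretized posterior over candidate HU values is formed, up to normalization, as the product of an intensity-based conditional and a location-based conditional, and the estimate is the posterior mean. The Gaussian forms and numbers are assumptions for illustration, not the paper's atlas-derived distributions.

      # Hedged sketch: combining two conditional densities into a posterior over HU values.
      import numpy as np

      hu_grid = np.arange(-1000, 1601, 10)                  # candidate electron-density (HU) values

      def gaussian(x, mu, sigma):
          return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

      # (1) p(HU | T1, T2 intensities): assumed Gaussian for this voxel
      p_hu_given_intensity = gaussian(hu_grid, mu=250.0, sigma=150.0)
      # (2) p(HU | spatial location): assumed Gaussian from atlas voxels at the same location
      p_hu_given_location = gaussian(hu_grid, mu=400.0, sigma=300.0)

      posterior = p_hu_given_intensity * p_hu_given_location
      posterior /= posterior.sum()

      estimated_hu = (hu_grid * posterior).sum()            # posterior mean = assigned density
      print(f"estimated HU for this voxel: {estimated_hu:.0f}")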

  3. Estimation of the probability of success in petroleum exploration

    USGS Publications Warehouse

    Davis, J.C.

    1977-01-01

    A probabilistic model for oil exploration can be developed by assessing the conditional relationship between perceived geologic variables and the subsequent discovery of petroleum. Such a model includes two probabilistic components, the first reflecting the association between a geologic condition (structural closure, for example) and the occurrence of oil, and the second reflecting the uncertainty associated with the estimation of geologic variables in areas of limited control. Estimates of the conditional relationship between geologic variables and subsequent production can be found by analyzing the exploration history of a "training area" judged to be geologically similar to the exploration area. The geologic variables are assessed over the training area using an historical subset of the available data, whose density corresponds to the present control density in the exploration area. The success or failure of wells drilled in the training area subsequent to the time corresponding to the historical subset provides empirical estimates of the probability of success conditional upon geology. Uncertainty in perception of geological conditions may be estimated from the distribution of errors made in geologic assessment using the historical subset of control wells. These errors may be expressed as a linear function of distance from available control. Alternatively, the uncertainty may be found by calculating the semivariogram of the geologic variables used in the analysis: the two procedures will yield approximately equivalent results. The empirical probability functions may then be transferred to the exploration area and used to estimate the likelihood of success of specific exploration plays. These estimates will reflect both the conditional relationship between the geological variables used to guide exploration and the uncertainty resulting from lack of control. The technique is illustrated with case histories from the mid-Continent area of the U.S.A. © 1977 Plenum Publishing Corp.
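    The first probabilistic component, the conditional relationship between a geologic condition and subsequent discovery, reduces to an empirical conditional probability estimated from the training-area drilling history, as in this toy sketch (the data are invented):

      # Hedged sketch: empirical P(discovery | geologic condition) from a training area.
      import pandas as pd

      wells = pd.DataFrame({
          "closure_mapped": [1, 1, 0, 1, 0, 0, 1, 1, 0, 1],   # closure perceived before drilling
          "discovery":      [1, 0, 0, 1, 0, 0, 1, 0, 0, 1],
      })

      p_success_given_closure = wells.loc[wells.closure_mapped == 1, "discovery"].mean()
      p_success_given_no_closure = wells.loc[wells.closure_mapped == 0, "discovery"].mean()
      print(p_success_given_closure, p_success_given_no_closure)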

  4. The effect of incremental changes in phonotactic probability and neighborhood density on word learning by preschool children

    PubMed Central

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multi-level modeling. Results A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density. Here, word learning improved as density increased across all quartiles. Conclusion Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005

  5. Boundary Conditions for Infinite Conservation Laws

    NASA Astrophysics Data System (ADS)

    Rosenhaus, V.; Bruzón, M. S.; Gandarias, M. L.

    2016-12-01

    Regular soliton equations (KdV, sine-Gordon, NLS) are known to possess infinite sets of local conservation laws. Some other classes of nonlinear PDE possess infinite-dimensional symmetries parametrized by arbitrary functions of independent or dependent variables; among them are Zabolotskaya-Khokhlov, Kadomtsev-Petviashvili, Davey-Stewartson equations and Born-Infeld equation. Boundary conditions were shown to play an important role for the existence of local conservation laws associated with infinite-dimensional symmetries. In this paper, we analyze boundary conditions for the infinite conserved densities of regular soliton equations: KdV, potential KdV, Sine-Gordon equation, and nonlinear Schrödinger equation, and compare them with boundary conditions for the conserved densities obtained from infinite-dimensional symmetries with arbitrary functions of independent and dependent variables.
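    For context, a standard textbook example of the conserved densities at issue is reproduced below for the KdV equation (normalizations differ between texts); each density obeys a local conservation law whose integral over space is time-independent only if the associated flux vanishes at the boundary, for instance when u and its derivatives decay at infinity, which is exactly the kind of boundary condition analyzed here.

      % A standard example (common normalization; conventions differ between texts).
      % Each density rho obeys \partial_t \rho + \partial_x J = 0, so \int \rho\,dx is
      % constant in time provided the flux J vanishes at the spatial boundary.
      \[
        u_t + 6\,u\,u_x + u_{xxx} = 0,
      \]
      \[
        \rho_1 = u, \qquad
        \rho_2 = u^2, \qquad
        \rho_3 = u^3 - \tfrac{1}{2}\,u_x^{\,2}.
      \]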

  6. A Cross-Sectional Comparison of the Effects of Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.

    2010-01-01

    Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…

  7. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.

  8. Broadcasting but not receiving: density dependence considerations for SETI signals

    NASA Astrophysics Data System (ADS)

    Smith, Reginald D.

    2009-04-01

    This paper develops a detailed quantitative model which uses the Drake equation and an assumption of an average maximum radio broadcasting distance for a communicative civilization. Using this basis, it estimates the minimum civilization density for contact between two civilizations to be probable in a given volume of space under certain conditions, the amount of time it would take for a first contact, and the question of whether reciprocal contact is possible.

  9. Multiple murder and criminal careers: a latent class analysis of multiple homicide offenders.

    PubMed

    Vaughn, Michael G; DeLisi, Matt; Beaver, Kevin M; Howard, Matthew O

    2009-01-10

    To construct an empirically rigorous typology of multiple homicide offenders (MHOs). The current study conducted latent class analysis of the official records of 160 MHOs sampled from eight states to evaluate their criminal careers. A 3-class solution best fit the data (-2LL=-1123.61, Bayesian Information Criterion (BIC)=2648.15, df=81, L(2)=1179.77). Class 1 (n=64, class assignment probability=.999) was the low-offending group marked by little criminal record and delayed arrest onset. Class 2 (n=51, class assignment probability=.957) was the severe group that represents the most violent and habitual criminals. Class 3 (n=45, class assignment probability=.959) was the moderate group whose offending careers were similar to Class 2. A sustained criminal career with involvement in versatile forms of crime was observed for two of three classes of MHOs. Linkages to extant typologies and recommendations for additional research that incorporates clinical constructs are proffered.

  10. Neyman-Pearson classification algorithms and NP receiver operating characteristics

    PubMed Central

    Tong, Xin; Feng, Yang; Li, Jingyi Jessica

    2018-01-01

    In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly face the need to limit type I error (that is, the conditional probability of misclassifying a class 0 observation as class 1) so that it remains below a desired threshold. To address this need, the Neyman-Pearson (NP) classification paradigm is a natural choice; it minimizes type II error (that is, the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, α, on the type I error. Despite its century-long history in hypothesis testing, the NP paradigm has not been well recognized and implemented in classification schemes. Common practices that directly limit the empirical type I error to no more than α do not satisfy the type I error control objective because the resulting classifiers are likely to have type I errors much larger than α, and the NP paradigm has not been properly implemented in practice. We develop the first umbrella algorithm that implements the NP paradigm for all scoring-type classification methods, such as logistic regression, support vector machines, and random forests. Powered by this algorithm, we propose a novel graphical tool for NP classification methods: NP receiver operating characteristic (NP-ROC) bands motivated by the popular ROC curves. NP-ROC bands will help choose α in a data-adaptive way and compare different NP classifiers. We demonstrate the use and properties of the NP umbrella algorithm and NP-ROC bands, available in the R package nproc, through simulation and real data studies. PMID:29423442

  11. Neyman-Pearson classification algorithms and NP receiver operating characteristics.

    PubMed

    Tong, Xin; Feng, Yang; Li, Jingyi Jessica

    2018-02-01

    In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly face the need to limit type I error (that is, the conditional probability of misclassifying a class 0 observation as class 1) so that it remains below a desired threshold. To address this need, the Neyman-Pearson (NP) classification paradigm is a natural choice; it minimizes type II error (that is, the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, α, on the type I error. Despite its century-long history in hypothesis testing, the NP paradigm has not been well recognized and implemented in classification schemes. Common practices that directly limit the empirical type I error to no more than α do not satisfy the type I error control objective because the resulting classifiers are likely to have type I errors much larger than α, and the NP paradigm has not been properly implemented in practice. We develop the first umbrella algorithm that implements the NP paradigm for all scoring-type classification methods, such as logistic regression, support vector machines, and random forests. Powered by this algorithm, we propose a novel graphical tool for NP classification methods: NP receiver operating characteristic (NP-ROC) bands motivated by the popular ROC curves. NP-ROC bands will help choose α in a data-adaptive way and compare different NP classifiers. We demonstrate the use and properties of the NP umbrella algorithm and NP-ROC bands, available in the R package nproc, through simulation and real data studies.
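    A simplified sketch of the order-statistic idea behind the NP umbrella algorithm is given below: the score threshold is chosen among the sorted scores of a left-out class-0 sample using a binomial bound on the chance that the type I error exceeds α. This is an illustration of the idea, not the nproc implementation.

      # Hedged sketch: order-statistic threshold controlling type I error <= alpha with
      # probability at least 1 - delta (simplified; not the nproc package itself).
      import numpy as np
      from scipy.stats import binom

      def np_threshold(scores_class0, alpha=0.05, delta=0.05):
          """Pick a threshold from left-out class-0 scores for NP-style type I error control."""
          t = np.sort(scores_class0)                  # ascending order statistics t_(1..n)
          n = len(t)
          for k in range(1, n + 1):
              # P(type I error of threshold t_(k) exceeds alpha) <= P(Binomial(n, 1-alpha) >= k)
              if binom.sf(k - 1, n, 1 - alpha) <= delta:
                  return t[k - 1]
          raise ValueError("not enough class-0 observations for this (alpha, delta)")

      rng = np.random.default_rng(0)
      scores0 = rng.normal(0.0, 1.0, 500)             # scores of a left-out class-0 sample
      thr = np_threshold(scores0, alpha=0.05, delta=0.05)
      print(f"classify as class 1 when score > {thr:.3f}")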

  12. Stochastic approach for an unbiased estimation of the probability of a successful separation in conventional chromatography and sequential elution liquid chromatography.

    PubMed

    Ennis, Erin J; Foley, Joe P

    2016-07-15

    A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach
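    Under the constant-peak-width ("gradient") assumption, the stochastic approach can be illustrated by dropping peak centers uniformly on a unit time window and declaring the separation successful when every pair of adjacent peaks is at least one resolution element apart. The sketch below is a simplified illustration under these assumptions, not the authors' code.

      # Hedged sketch: Monte Carlo probability of a successful separation,
      # constant-peak-width case, edge effects ignored.
      import numpy as np

      def p_success(m_components=10, peak_capacity=100, n_trials=20_000, rng=None):
          rng = rng or np.random.default_rng(0)
          min_gap = 1.0 / peak_capacity               # one resolution element on the unit window
          hits = 0
          for _ in range(n_trials):
              centers = np.sort(rng.random(m_components))
              if np.all(np.diff(centers) >= min_gap):
                  hits += 1
          return hits / n_trials

      print(p_success(m_components=10, peak_capacity=100))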

  13. Robust Bayesian decision theory applied to optimal dosage.

    PubMed

    Abraham, Christophe; Daurès, Jean-Pierre

    2004-04-15

    We give a model for constructing a utility function u(theta, d) in a dose prescription problem, where theta and d denote, respectively, the patient's state of health and the dose. The construction of u is based on the conditional probabilities of several variables, which are described by logistic models. Obviously, u is only an approximation of the true utility function, and that is why we investigate the sensitivity of the final decision with respect to the utility function. We construct a class of utility functions from u and approximate the set of all Bayes actions associated with that class. Then, we measure the sensitivity as the greatest difference between the expected utilities of two Bayes actions. Finally, we apply these results to weighing up a chemotherapy treatment for lung cancer. This application emphasizes the importance of measuring robustness through the utility of decisions rather than the decisions themselves. Copyright 2004 John Wiley & Sons, Ltd.

  14. Estimating juvenile Chinook salmon (Oncorhynchus tshawytscha) abundance from beach seine data collected in the Sacramento–San Joaquin Delta and San Francisco Bay, California

    USGS Publications Warehouse

    Perry, Russell W.; Kirsch, Joseph E.; Hendrix, A. Noble

    2016-06-17

    Resource managers rely on abundance or density metrics derived from beach seine surveys to make vital decisions that affect fish population dynamics and assemblage structure. However, abundance and density metrics may be biased by imperfect capture and lack of geographic closure during sampling. Currently, there is considerable uncertainty about the capture efficiency of juvenile Chinook salmon (Oncorhynchus tshawytscha) by beach seines. Heterogeneity in capture can occur through unrealistic assumptions of closure and from variation in the probability of capture caused by environmental conditions. We evaluated the assumptions of closure and the influence of environmental conditions on capture efficiency and abundance estimates of Chinook salmon from beach seining within the Sacramento–San Joaquin Delta and the San Francisco Bay. Beach seine capture efficiency was measured using a stratified random sampling design combined with open and closed replicate depletion sampling. A total of 56 samples were collected during the spring of 2014. To assess variability in capture probability and the absolute abundance of juvenile Chinook salmon, beach seine capture efficiency data were fitted to the paired depletion design using modified N-mixture models. These models allowed us to explicitly test the closure assumption and estimate environmental effects on the probability of capture. We determined that our updated method allowing for lack of closure between depletion samples drastically outperformed traditional data analysis that assumes closure among replicate samples. The best-fit model (lowest-valued Akaike Information Criterion model) included the probability of fish being available for capture (relaxed closure assumption), capture probability modeled as a function of water velocity and percent coverage of fine sediment, and abundance modeled as a function of sample area, temperature, and water velocity. Given that beach seining is a ubiquitous sampling technique for many species, our improved sampling design and analysis could provide significant improvements in density and abundance estimation.

  15. New Concepts in the Evaluation of Biodegradation/Persistence of Chemical Substances Using a Microbial Inoculum

    PubMed Central

    Thouand, Gérald; Durand, Marie-José; Maul, Armand; Gancet, Christian; Blok, Han

    2011-01-01

    The European REACH Regulation (Registration, Evaluation, Authorization of CHemical substances) implies, among other things, the evaluation of the biodegradability of chemical substances produced by industry. A large set of test methods is available, including detailed information on the appropriate conditions for testing. However, the inoculum used for these tests constitutes a "black box." If biodegradation is achievable from the growth of a small group of specific microbial species with the substance as the only carbon source, the result of the test depends largely on the cell density of this group at "time zero." If these species are relatively rare in an inoculum that is normally used, the likelihood of inoculating a test with sufficient specific cells becomes a matter of probability. Normally this probability increases with total cell density and with the diversity of species in the inoculum. Furthermore, the history of the inoculum, e.g., possible pre-exposure to the test substance or similar substances, will have a significant influence on this probability. A high probability can be expected for substances that are widely used and regularly released into the environment, whereas a low probability can be expected for new xenobiotic substances that have not yet been released into the environment. Be that as it may, once the inoculum sample contains sufficient specific degraders, the biodegradation will follow a typical S-shaped growth curve that depends on the specific growth rate under laboratory conditions, the so-called F/M ratio (ratio between food and biomass), and any recalcitrant or more or less toxic metabolites that may be formed. Normally, regulators require the evaluation of the growth curve using a simple measure such as the half-time. Unfortunately, probability and biodegradation half-time are very often confused. As the half-time values reflect laboratory conditions which are quite different from environmental conditions (after a substance is released), these values should not be used to quantify and predict environmental behavior. The probability value could be of much greater benefit for predictions under realistic conditions. The main issue in the evaluation of probability is that the result is not based on a single inoculum from an environmental sample, but on a variety of samples. These samples can be representative of regional or local areas, climate regions, water types, and history, e.g., pristine or polluted. The above concept has provided us with a new approach, namely "Probabio." With this approach, persistence is not only regarded as a simple intrinsic property of a substance, but also as the capability of various environmental samples to degrade a substance under realistic exposure conditions and F/M ratio. PMID:21863143

  16. Assessing the link between coastal urbanization and the quality of nekton habitat in mangrove tidal tributaries

    USGS Publications Warehouse

    Krebs, Justin M.; Bell, Susan S.; McIvor, Carole C.

    2014-01-01

    To assess the potential influence of coastal development on habitat quality for estuarine nekton, we characterized body condition and reproduction for common nekton from tidal tributaries classified as undeveloped, industrial, urban or man-made (i.e., mosquito-control ditches). We then evaluated these metrics of nekton performance, along with several abundance-based metrics and community structure from a companion paper (Krebs et al. 2013) to determine which metrics best reflected variation in land-use and in-stream habitat among tributaries. Body condition was not significantly different among undeveloped, industrial, and man-made tidal tributaries for six of nine taxa; however, three of those taxa were in significantly better condition in urban compared to undeveloped tributaries. Palaemonetes shrimp were the only taxon in significantly poorer condition in urban tributaries. For Poecilia latipinna, there was no difference in body condition (length–weight) between undeveloped and urban tributaries, but energetic condition was significantly better in urban tributaries. Reproductive output was reduced for both P. latipinna (i.e., fecundity) and grass shrimp (i.e., very low densities, few ovigerous females) in urban tributaries; however a tradeoff between fecundity and offspring size confounded meaningful interpretation of reproduction among land-use classes for P. latipinna. Reproductive allotment by P. latipinna did not differ significantly among land-use classes. Canonical correspondence analysis differentiated urban and non-urban tributaries based on greater impervious surface, less natural mangrove shoreline, higher frequency of hypoxia and lower, more variable salinities in urban tributaries. These characteristics explained 36 % of the variation in nekton performance, including high densities of poeciliid fishes, greater energetic condition of sailfin mollies, and low densities of several common nekton and economically important taxa from urban tributaries. While variation among tributaries in our study can be largely explained by impervious surface beyond the shorelines of the tributary, variation in nekton metrics among non-urban tributaries was better explained by habitat factors within the tributary and along the shorelines. Our results support the paradigm that urban development in coastal areas has the potential to alter habitat quality in small tidal tributaries as reflected by variation in nekton performance among tributaries from representative land-use classes.

  17. Sexual segregation in North American elk: the role of density dependence

    PubMed Central

    Stewart, Kelley M; Walsh, Danielle R; Kie, John G; Dick, Brian L; Bowyer, R Terry

    2015-01-01

    We investigated how density-dependent processes and subsequent variation in nutritional condition of individuals influenced both timing and duration of sexual segregation and selection of resources. During 1999–2001, we experimentally created two population densities of North American elk (Cervus elaphus), a high-density population at 20 elk/km2, and a low-density population at 4 elk/km2 to test hypotheses relative to timing and duration of sexual segregation and variation in selection of resources. We used multi-response permutation procedures to investigate patterns of sexual segregation, and resource selection functions to document differences in selection of resources by individuals in high- and low-density populations during sexual segregation and aggregation. The duration of sexual segregation was 2 months longer in the high-density population and likely was influenced by individuals in poorer nutritional condition, which corresponded with later conception and parturition, than at low density. Males and females in the high-density population overlapped in selection of resources to a greater extent than in the low-density population, probably resulting from density-dependent effects of increased intraspecific competition and lower availability of resources. PMID:25691992

  18. Reaction-diffusion on the fully-connected lattice: A + A → A

    NASA Astrophysics Data System (ADS)

    Turban, Loïc; Fortin, Jean-Yves

    2018-04-01

    Diffusion-coagulation can be simply described by a dynamic where particles perform a random walk on a lattice and coalesce with probability unity when meeting on the same site. Such processes display non-equilibrium properties with strong fluctuations in low dimensions. In this work we study this problem on the fully-connected lattice, an infinite-dimensional system in the thermodynamic limit, for which mean-field behaviour is expected. Exact expressions for the particle density distribution at a given time and survival time distribution for a given number of particles are obtained. In particular, we show that the time needed to reach a finite number of surviving particles (vanishing density in the scaling limit) displays strong fluctuations and extreme value statistics, characterized by a universal class of non-Gaussian distributions with singular behaviour.

  19. Evolution of probability densities in stochastic coupled map lattices

    NASA Astrophysics Data System (ADS)

    Losson, Jérôme; Mackey, Michael C.

    1995-08-01

    This paper describes the statistical properties of coupled map lattices subjected to the influence of stochastic perturbations. The stochastic analog of the Perron-Frobenius operator is derived for various types of noise. When the local dynamics satisfy rather mild conditions, this equation is shown to possess either stable, steady state solutions (i.e., a stable invariant density) or density limit cycles. Convergence of the phase space densities to these limit cycle solutions explains the nonstationary behavior of statistical quantifiers at equilibrium. Numerical experiments performed on various lattices of tent, logistic, and shift maps with diffusivelike interelement couplings are examined in light of these theoretical results.

  20. A Probabilistic Model for Predicting Attenuation of Viruses During Percolation in Unsaturated Natural Barriers

    NASA Astrophysics Data System (ADS)

    Faulkner, B. R.; Lyon, W. G.

    2001-12-01

    We present a probabilistic model for predicting virus attenuation. The solution employs the assumption of complete mixing. Monte Carlo methods are used to generate ensemble simulations of virus attenuation due to physical, biological, and chemical factors. The model generates a probability of failure to achieve 4-log attenuation. We tabulated data from related studies to develop probability density functions for input parameters, and utilized a database of soil hydraulic parameters based on the 12 USDA soil categories. Regulators can use the model based on limited information such as boring logs, climate data, and soil survey reports for a particular site of interest. Plackett-Burman sensitivity analysis indicated the most important main effects on probability of failure to achieve 4-log attenuation in our model were mean logarithm of saturated hydraulic conductivity (+0.396), mean water content (+0.203), mean solid-water mass transfer coefficient (-0.147), and the mean solid-water equilibrium partitioning coefficient (-0.144). Using the model, we predicted the probability of failure of a one-meter thick proposed hydrogeologic barrier and a water content of 0.3. With the currently available data and the associated uncertainty, we predicted soils classified as sand would fail (p=0.999), silt loams would also fail (p=0.292), but soils classified as clays would provide the required 4-log attenuation (p=0.001). The model is extendible in the sense that probability density functions of parameters can be modified as future studies refine the uncertainty, and the lightweight object-oriented design of the computer model (implemented in Java) will facilitate reuse with modified classes. This is an abstract of a proposed presentation and does not necessarily reflect EPA policy.

  1. Automatic Classification of Trees from Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2015-08-01

    Development of laser scanning technologies has promoted tree monitoring studies to a new level, as laser scanning point clouds enable accurate 3D measurements in a fast and environmentally friendly manner. In this paper, we introduce a probability matrix computation based algorithm for automatically classifying laser scanning point clouds into 'tree' and 'non-tree' classes. Our method uses the 3D coordinates of the laser scanning points as input and generates a new point cloud which holds a label for each point indicating whether it belongs to the 'tree' or 'non-tree' class. To do so, a grid surface is assigned to the lowest height level of the point cloud. The grid cells are filled with probability values calculated from the point density above each cell. Since the tree trunk locations appear with very high values in the probability matrix, selecting the local maxima of the grid surface helps to detect the tree trunks. Further points are assigned to tree trunks if they appear in close proximity to the trunks. Since heavy mathematical computations (such as point cloud organization, detailed 3D shape detection, or graph network generation) are not required, the proposed algorithm works very fast compared to existing methods. The tree classification results are found to be reliable even on point clouds of cities containing many different objects. As the most significant weakness, false detection of light poles, traffic signs and other objects close to trees cannot be prevented. Nevertheless, the experimental results on mobile and airborne laser scanning point clouds indicate the possible usage of the algorithm as an important step for tree growth observation, tree counting and similar applications. While laser scanning point clouds make it possible to classify even very small trees, the accuracy of the results is reduced in low point density areas farther away from the scanning location. These advantages and disadvantages of the two laser scanning point cloud sources are discussed in detail.
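    A minimal sketch of the grid-based step described above: point density above each ground grid cell becomes a probability value, and local maxima of the resulting matrix are taken as candidate trunk locations. The grid resolution, threshold, and synthetic points are assumptions for illustration.

      # Hedged sketch: density-based probability matrix and local-maximum trunk candidates.
      import numpy as np
      from scipy.ndimage import maximum_filter

      rng = np.random.default_rng(3)
      # toy point cloud (x, y only): two clusters ("trees") plus scattered background
      trees = np.vstack([rng.normal([5, 5], 0.3, (400, 2)), rng.normal([15, 12], 0.3, (400, 2))])
      noise = rng.uniform(0, 20, (300, 2))
      xy = np.vstack([trees, noise])

      cell = 1.0                                            # grid resolution in metres (assumed)
      H, xedges, yedges = np.histogram2d(xy[:, 0], xy[:, 1], bins=[np.arange(0, 21, cell)] * 2)
      prob = H / H.max()                                    # normalized point-density "probability" matrix

      # local maxima above a density threshold -> candidate trunk cells
      is_peak = (prob == maximum_filter(prob, size=3)) & (prob > 0.5)
      trunk_cells = np.argwhere(is_peak)
      print("candidate trunk grid cells:\n", trunk_cells)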

  2. Relative abundance, site fidelity, and survival of adult lake trout in Lake Michigan from 1999 to 2001: Implications for future restoration strategies

    USGS Publications Warehouse

    Bronte, C.R.; Holey, M.E.; Madenjian, C.P.; Jonas, J.L.; Claramunt, R.M.; McKee, P.C.; Toneys, M.L.; Ebener, M.P.; Breidert, B.; Fleischer, G.W.; Hess, R.; Martell, A.W.; Olsen, E.J.

    2007-01-01

    We compared the relative abundance of lake trout Salvelinus namaycush spawners in gill nets during fall 1999–2001 in Lake Michigan at 19 stocked spawning sites with that at 25 unstocked sites to evaluate how effective site-specific stocking was in recolonizing historically important spawning reefs. The abundance of adult fish was higher at stocked onshore and offshore sites than at unstocked sites. This suggests that site-specific stocking is more effective at establishing spawning aggregations than relying on the ability of hatchery-reared lake trout to find spawning reefs, especially those offshore. Spawner densities were generally too low and too young at most sites to expect significant natural reproduction. However, densities were sufficiently high at some sites for reproduction to occur and therefore the lack of recruitment was attributable to other factors. Less than 3% of all spawners could have been wild fish, which indicates that little natural reproduction occurred in past years. Wounding by sea lamprey Petromyzon marinus was generally lower for Seneca Lake strain fish and highest for strains from Lake Superior. Fish captured at offshore sites in southern Lake Michigan had the lowest probability of wounding, while fish at onshore sites in northern Lake Michigan had the highest probability. The relative survival of the Seneca Lake strain was higher than that of the Lewis Lake or the Marquette strains for the older year-classes examined. Survival differences among strains were less evident for younger year-classes. Recaptures of coded-wire-tagged fish of five strains indicated that most fish returned to their stocking site or to a nearby site and that dispersal from stocking sites during spawning was about 100 km. Restoration strategies should rely on site-specific stocking of lake trout strains with good survival at selected historically important offshore spawning sites to increase egg deposition and the probability of natural reproduction in Lake Michigan.

  3. A procedure for landslide susceptibility zonation by the conditional analysis method

    NASA Astrophysics Data System (ADS)

    Clerici, Aldo; Perego, Susanna; Tellini, Claudio; Vescovi, Paolo

    2002-12-01

    Numerous methods have been proposed for landslide probability zonation of the landscape by means of a Geographic Information System (GIS). Among the multivariate methods, i.e. those methods which simultaneously take into account all the factors contributing to instability, the Conditional Analysis method applied to a subdivision of the territory into Unique Condition Units is particularly straightforward from a conceptual point of view and particularly suited to the use of a GIS. In fact, working on the principle that future landslides are more likely to occur under those conditions which led to past instability, landslide susceptibility is defined by computing the landslide density in correspondence with different combinations of instability factors. The conceptual simplicity of this method, however, does not necessarily imply that it is simple to implement, especially as it requires rather complex operations and a high number of GIS commands. Moreover, there is the possibility that, in order to achieve satisfactory results, the procedure has to be repeated a few times changing the factors or modifying the class subdivision. To solve this problem, we created a shell program which, by combining the shell commands, the GIS Geographical Research Analysis Support System (GRASS) commands and the gawk language commands, carries out the whole procedure automatically. This makes the construction of a Landslide Susceptibility Map easy and fast for large areas too, and even when a high spatial resolution is adopted, as shown by application of the procedure to the Parma River basin, in the Italian Northern Apennines.

  4. A class of stochastic delayed SIR epidemic models with generalized nonlinear incidence rate and temporary immunity

    NASA Astrophysics Data System (ADS)

    Fan, Kuangang; Zhang, Yan; Gao, Shujing; Wei, Xiang

    2017-09-01

    A class of SIR epidemic models with a generalized nonlinear incidence rate is presented in this paper. Temporary immunity and stochastic perturbation are also considered. The existence and uniqueness of the global positive solution is proved. Sufficient conditions guaranteeing the extinction and persistence of the epidemic disease are established. Moreover, the threshold behavior is discussed, and the threshold value R0 is obtained. We show that if R0 < 1, the disease eventually becomes extinct with probability one, whereas if R0 > 1, the system remains permanent in the mean.
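    For illustration only, a stochastically perturbed SIR system can be simulated with an Euler-Maruyama scheme as sketched below; a generic bilinear incidence is used in place of the paper's generalized nonlinear incidence, and the temporary-immunity delay and threshold analysis are not reproduced.

      # Hedged sketch: Euler-Maruyama simulation of an SIR model with multiplicative noise
      # on the transmission term. Parameters are illustrative placeholders.
      import numpy as np

      def simulate_sir(beta=0.4, gamma=0.2, mu=0.02, sigma=0.1, T=200.0, dt=0.01, seed=0):
          rng = np.random.default_rng(seed)
          n = int(T / dt)
          S, I, R = 0.9, 0.1, 0.0
          traj = np.empty((n, 3))
          for k in range(n):
              dW = rng.normal(0.0, np.sqrt(dt))
              inc = beta * S * I                     # incidence term (bilinear placeholder)
              S += (mu - mu * S - inc) * dt - sigma * S * I * dW
              I += (inc - (gamma + mu) * I) * dt + sigma * S * I * dW
              R += (gamma * I - mu * R) * dt
              S, I, R = max(S, 0.0), max(I, 0.0), max(R, 0.0)
              traj[k] = S, I, R
          return traj

      traj = simulate_sir()
      print("final (S, I, R):", traj[-1])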

  5. Optimal estimation for the satellite attitude using star tracker measurements

    NASA Technical Reports Server (NTRS)

    Lo, J. T.-H.

    1986-01-01

    An optimal estimation scheme is presented, which determines the satellite attitude using the gyro readings and the star tracker measurements of a commonly used satellite attitude measuring unit. The scheme is mainly based on the exponential Fourier densities that have the desirable closure property under conditioning. By updating a finite and fixed number of parameters, the conditional probability density, which is an exponential Fourier density, is recursively determined. Simulation results indicate that the scheme is more accurate and robust than extended Kalman filtering. It is believed that this approach is applicable to many other attitude measuring units. As no linearization and approximation are necessary in the approach, it is ideal for systems involving high levels of randomness and/or low levels of observability and systems for which accuracy is of overriding importance.

  6. Use of Systematic Methods to Improve Disease Identification in Administrative Data: The Case of Severe Sepsis.

    PubMed

    Shahraz, Saeid; Lagu, Tara; Ritter, Grant A; Liu, Xiadong; Tompkins, Christopher

    2017-03-01

    Selection of International Classification of Diseases (ICD)-based coded information for complex conditions such as severe sepsis is a subjective process and the results are sensitive to the codes selected. We use an innovative data exploration method to guide ICD-based case selection for severe sepsis. Using the Nationwide Inpatient Sample, we applied Latent Class Analysis (LCA) to determine if medical coders follow any uniform and sensible coding for observations with severe sepsis. We examined whether ICD-9 codes specific to sepsis (038.xx for septicemia, a subset of 995.9 codes representing Systemic Inflammatory Response syndrome, and 785.52 for septic shock) could all be members of the same latent class. Hospitalizations coded with sepsis-specific codes could be assigned to a latent class of their own. This class constituted 22.8% of all potential sepsis observations. The probability of an observation with any sepsis-specific codes being assigned to the residual class was near 0. The chance of an observation in the residual class having a sepsis-specific code as the principal diagnosis was close to 0. Validity of sepsis class assignment is supported by empirical results, which indicated that in-hospital deaths in the sepsis-specific class were around 4 times as likely as that in the residual class. The conventional methods of defining severe sepsis cases in observational data substantially misclassify sepsis cases. We suggest a methodology that helps reliable selection of ICD codes for conditions that require complex coding.

  7. Modeling Women's Menstrual Cycles using PICI Gates in Bayesian Network.

    PubMed

    Zagorecki, Adam; Łupińska-Dubicka, Anna; Voortman, Mark; Druzdzel, Marek J

    2016-03-01

    A major difficulty in building Bayesian network (BN) models is the size of conditional probability tables, which grow exponentially in the number of parents. One way of dealing with this problem is through parametric conditional probability distributions that usually require only a number of parameters that is linear in the number of parents. In this paper, we introduce a new class of parametric models, the Probabilistic Independence of Causal Influences (PICI) models, that aim at lowering the number of parameters required to specify local probability distributions, but are still capable of efficiently modeling a variety of interactions. A subset of PICI models is decomposable and this leads to significantly faster inference as compared to models that cannot be decomposed. We present an application of the proposed method to learning dynamic BNs for modeling a woman's menstrual cycle. We show that PICI models are especially useful for parameter learning from small data sets and lead to higher parameter accuracy than when learning CPTs.
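    The best-known independence-of-causal-influence model is the noisy-OR gate, which needs only one parameter per parent (plus a leak) instead of a CPT that grows exponentially with the number of parents; the sketch below illustrates that general idea rather than the paper's specific PICI gates.

      # Hedged sketch: the classic noisy-OR gate. One activation probability per parent
      # (plus a leak) implies the full conditional probability table.
      from itertools import product

      def noisy_or(parent_probs, leak=0.0):
          """Return the CPT P(child=1 | parent configuration) implied by noisy-OR."""
          names = list(parent_probs)
          cpt = {}
          for config in product([0, 1], repeat=len(names)):
              p_all_inhibited = (1.0 - leak)
              for name, active in zip(names, config):
                  if active:
                      p_all_inhibited *= (1.0 - parent_probs[name])
              cpt[config] = 1.0 - p_all_inhibited
          return cpt

      # three hypothetical causes, each with its own activation probability
      cpt = noisy_or({"stress": 0.3, "hormone_shift": 0.6, "illness": 0.2}, leak=0.05)
      for cfg, p in cpt.items():
          print(cfg, round(p, 3))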

  8. A spatially explicit model for an Allee effect: why wolves recolonize so slowly in Greater Yellowstone.

    PubMed

    Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A

    2006-11-01

    A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability in finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow recolonization rate.

  9. Generalization of cross-modal stimulus equivalence classes: operant processes as components in human category formation.

    PubMed Central

    Lane, S D; Clow, J K; Innis, A; Critchfield, T S

    1998-01-01

    This study employed a stimulus-class rating procedure to explore whether stimulus equivalence and stimulus generalization can combine to promote the formation of open-ended categories incorporating cross-modal stimuli. A pretest of simple auditory discrimination indicated that subjects (college students) could discriminate among a range of tones used in the main study. Before beginning the main study, 10 subjects learned to use a rating procedure for categorizing sets of stimuli as class consistent or class inconsistent. After completing conditional discrimination training with new stimuli (shapes and tones), the subjects demonstrated the formation of cross-modal equivalence classes. Subsequently, the class-inclusion rating procedure was reinstituted, this time with cross-modal sets of stimuli drawn from the equivalence classes. On some occasions, the tones of the equivalence classes were replaced by novel tones. The probability that these novel sets would be rated as class consistent was generally a function of the auditory distance between the novel tone and the tone that was explicitly included in the equivalence class. These data extend prior work on generalization of equivalence classes, and support the role of operant processes in human category formation. PMID:9821680

  10. Canopy structure and tree condition of young, mature, and old-growth Douglas-fir/hardwood forests

    Treesearch

    B.B. Bingham; J.O. Sawyer

    1992-01-01

    Sixty-two Douglas-fir/hardwood stands ranging from 40 to 560 years old were used to characterize the density, diameter, and height class distributions of canopy hardwoods and conifers in young (40-100 yr), mature (101-200 yr) and old-growth (>200 yr) forests. The crown, bole, disease, disturbance, and cavity conditions of canopy conifers and hardwoods were...

  11. Selective inspection planning with ageing forecast for sewer types.

    PubMed

    Baur, R; Herz, R

    2002-01-01

    Investments in sewer rehabilitation must be based on inspection and evaluation of sewer conditions with respect to the severity of sewer damage and to environmental risks. This paper deals with the problems of forecasting the condition of sewers in a network from a small sample of inspected sewers. Transition functions from one into the next poorer condition class, which were empirically derived from this sample, are used to forecast the condition of sewers. By the same procedure, transition functions were subsequently calibrated for sub-samples of different types of sewers. With these transition functions, the most probable date of entering a critical condition class can be forecast from sewer characteristics, such as material, period of construction, location, use for waste and/or storm water, profile, diameter and gradient. Results are shown for the estimates about the actual condition of the Dresden sewer network and its deterioration in case of doing nothing about it. A procedure is proposed for scheduling the inspection dates for sewers which have not yet been inspected and for those which have been inspected before.

  12. Bed structure (frond bleaching, density and biomass) of the red alga Gelidium corneum under different irradiance levels

    NASA Astrophysics Data System (ADS)

    Quintano, E.; Díez, I.; Muguerza, N.; Figueroa, F. L.; Gorostiaga, J. M.

    2017-12-01

    In recent decades a decline in the foundation species Gelidium corneum (Hudson) J. V. Lamouroux has been detected along the Basque coast (northern Spain). This decline has been attributed to several factors, but recent studies have found a relationship between high irradiance and the biochemical and physiological stress of G. corneum. Since physiological responses to changes in light occur well before variations in morphology, the present study seeks to use a size-class demographic approach to investigate whether shallow subtidal populations of G. corneum off the Basque coast show different frond bleaching, density and biomass under different irradiance conditions. The results revealed that the bleaching incidence and cover were positively related to irradiance, whereas biomass was negatively related. The effect of the irradiance level on frond density was found to vary with size-class, i.e. fronds up to 15 cm showed greater densities under high light conditions (126.6 to 262.2 W m- 2) whereas the number of larger fronds (> 20 cm) per unit area was lower. In conclusion, the results of the present study suggest that irradiance might be a key factor for controlling along-shore bleaching, frond density and biomass in G. corneum. Further research should be carried out on the physiology of this canopy species in relation to its bed structure and on the interaction of irradiance and other abiotic (nutrients, temperature, wave energy) and biotic factors (grazing pressure).

  13. Posterior consistency in conditional distribution estimation

    PubMed Central

    Pati, Debdeep; Dunson, David B.; Tokdar, Surya T.

    2014-01-01

    A wide variety of priors have been proposed for nonparametric Bayesian estimation of conditional distributions, and there is a clear need for theorems providing conditions on the prior for large support, as well as posterior consistency. Estimation of an uncountable collection of conditional distributions across different regions of the predictor space is a challenging problem, which differs in some important ways from density and mean regression estimation problems. Defining various topologies on the space of conditional distributions, we provide sufficient conditions for posterior consistency focusing on a broad class of priors formulated as predictor-dependent mixtures of Gaussian kernels. This theory is illustrated by showing that the conditions are satisfied for a class of generalized stick-breaking process mixtures in which the stick-breaking lengths are monotone, differentiable functions of a continuous stochastic process. We also provide a set of sufficient conditions for the case where stick-breaking lengths are predictor independent, such as those arising from a fixed Dirichlet process prior. PMID:25067858
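    As a concrete instance of the class of priors covered by this theory, the sketch below evaluates a (truncated) predictor-dependent probit stick-breaking mixture of Gaussian kernels. The probit link, the linear dependence on x, and all parameter values are illustrative assumptions, not the paper's construction.

        import numpy as np
        from scipy.stats import norm

        def conditional_density(y, x, alphas, betas, mus, slopes, sigma):
            """f(y | x) for a predictor-dependent stick-breaking mixture.

            Stick lengths v_h(x) = Phi(alpha_h + beta_h * x) give weights
            pi_h(x) = v_h(x) * prod_{l<h} (1 - v_l(x)); kernels are Gaussian
            with predictor-dependent means mu_h + slope_h * x.
            """
            v = norm.cdf(alphas + betas * x)
            v[-1] = 1.0                       # truncation: last stick takes the rest
            remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
            weights = v * remaining
            means = mus + slopes * x
            return np.sum(weights * norm.pdf(y, loc=means, scale=sigma))

        alphas = np.array([0.0, -0.5, 1.0])   # all values below are made up
        betas  = np.array([1.0,  0.3, 0.0])
        mus    = np.array([-1.0, 0.0, 2.0])
        slopes = np.array([0.5, -0.2, 0.0])
        print(conditional_density(0.3, 1.2, alphas, betas, mus, slopes, sigma=0.7))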

  14. Water quality analysis in rivers with non-parametric probability distributions and fuzzy inference systems: application to the Cauca River, Colombia.

    PubMed

    Ocampo-Duque, William; Osorio, Carolina; Piamba, Christian; Schuhmacher, Marta; Domingo, José L

    2013-02-01

    The integration of water quality monitoring variables is essential in environmental decision making. Nowadays, advanced techniques to manage subjectivity, imprecision, uncertainty, vagueness, and variability are required in such a complex evaluation process. We here propose a probabilistic fuzzy hybrid model to assess river water quality. Fuzzy logic reasoning has been used to compute a water quality integrative index. By applying a Monte Carlo technique, based on non-parametric probability distributions, the randomness of model inputs was estimated. Annual histograms of nine water quality variables were built with monitoring data systematically collected in the Colombian Cauca River, and probability density estimations using the kernel smoothing method were applied to fit data. Several years were assessed, and river sectors upstream and downstream of the city of Santiago de Cali, a big city with basic wastewater treatment and high industrial activity, were analyzed. The probabilistic fuzzy water quality index was able to explain the reduction in water quality, as the river receives a larger number of agricultural, domestic, and industrial effluents. The results of the hybrid model were compared to traditional water quality indexes. The main advantage of the proposed method is that it considers flexible boundaries between the linguistic qualifiers used to define the water status; the membership of water quality in the various output fuzzy sets or classes is reported with percentiles and histograms, which allows the actual water condition to be classified more accurately. The results of this study show that fuzzy inference systems integrated with stochastic non-parametric techniques may be used as complementary tools in water quality indexing methodologies. Copyright © 2012 Elsevier Ltd. All rights reserved.
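    A minimal sketch of the non-parametric Monte Carlo step described above: fit a kernel density to the monitoring record of one variable, resample it, and push the draws through the index. The fuzzy-inference step is replaced here by a trivial linear membership placeholder, and the dissolved-oxygen numbers are invented.

        import numpy as np
        from scipy.stats import gaussian_kde

        do_obs = np.array([6.1, 5.8, 4.9, 7.2, 6.5, 5.1, 4.4, 6.9, 5.6, 6.0])  # mg/L, made up

        kde = gaussian_kde(do_obs)              # kernel-smoothed density of the input
        samples = kde.resample(10_000).ravel()  # Monte Carlo draws of the input

        def quality_score(do):
            """Placeholder for the fuzzy index: maps DO to a 0-100 score."""
            return np.clip((do - 2.0) / (8.0 - 2.0), 0.0, 1.0) * 100.0

        draws = quality_score(samples)
        print(np.percentile(draws, [5, 50, 95]))  # percentile summary of the index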

  15. Quantum fluctuation theorems and generalized measurements during the force protocol.

    PubMed

    Watanabe, Gentaro; Venkatesh, B Prasanna; Talkner, Peter; Campisi, Michele; Hänggi, Peter

    2014-03-01

    Generalized measurements of an observable performed on a quantum system during a force protocol are investigated and conditions that guarantee the validity of the Jarzynski equality and the Crooks relation are formulated. In agreement with previous studies by M. Campisi, P. Talkner, and P. Hänggi [Phys. Rev. Lett. 105, 140601 (2010); Phys. Rev. E 83, 041114 (2011)], we find that these fluctuation relations are satisfied for projective measurements; however, for generalized measurements special conditions on the operators determining the measurements need to be met. For the Jarzynski equality to hold, the measurement operators of the forward protocol must be normalized in a particular way. The Crooks relation additionally entails that the backward and forward measurement operators depend on each other. Yet, quite some freedom is left as to how the two sets of operators are interrelated. This ambiguity is removed if one considers selective measurements, which are specified by a joint probability density function of work and measurement results of the considered observable. We find that the respective forward and backward joint probabilities satisfy the Crooks relation only if the measurement operators of the forward and backward protocols are the time-reversed adjoints of each other. In this case, the work probability density function conditioned on the measurement result satisfies a modified Crooks relation. The modification appears as a protocol-dependent factor that can be expressed by the information gained by the measurements during the forward and backward protocols. Finally, detailed fluctuation theorems with an arbitrary number of intervening measurements are obtained.
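    For reference, the two unmodified fluctuation relations discussed above are, in standard form,
    \[
    \left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}
    \quad \text{(Jarzynski)}, \qquad
    \frac{p_F(W)}{p_B(-W)} = e^{\beta (W - \Delta F)}
    \quad \text{(Crooks)},
    \]
    with \beta the inverse temperature and \Delta F the free-energy difference between the final and initial equilibrium states. The protocol-dependent factor obtained in the paper multiplies the right-hand side of the Crooks relation for the work density conditioned on the measurement outcomes; its explicit form is not reproduced here.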

  16. Automatic Sleep Stage Determination by Multi-Valued Decision Making Based on Conditional Probability with Optimal Parameters

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Wang, Xingyu; Nakamura, Masatoshi

    Data for human sleep studies may be affected by internal and external influences. Recorded sleep data contain complex and stochastic factors, which make it difficult to apply computerized sleep stage determination techniques in clinical practice. The aim of this study is to develop an automatic sleep stage determination system which is optimized for variable sleep data. The main methodology includes two modules: expert knowledge database construction and automatic sleep stage determination. Visual inspection by a qualified clinician is used to obtain the probability density functions of the parameters during the learning process of expert knowledge database construction. Parameter selection is introduced in order to make the algorithm flexible. Automatic sleep stage determination is then carried out based on conditional probability. The results showed close agreement with visual inspection by the clinician. The developed system can meet the customized requirements in hospitals and institutions.
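    A minimal sketch of a conditional-probability stage decision of the kind described, assuming for illustration that the extracted parameters are treated as conditionally independent given the stage; the paper's actual decision rule and learned densities may differ.

        import numpy as np

        def stage_posteriors(features, stage_priors, stage_densities):
            """Posterior probability of each sleep stage for one epoch.

            stage_densities[stage] is a list of per-parameter density functions
            (e.g. kernel estimates built from the expert-scored training data).
            """
            post = {}
            for stage, prior in stage_priors.items():
                lik = np.prod([pdf(x) for pdf, x in zip(stage_densities[stage], features)])
                post[stage] = prior * lik
            total = sum(post.values())
            return {s: p / total for s, p in post.items()}

        # The determined stage is simply the argmax of the returned posteriors.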

  17. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
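    For orientation, the fully parametric baseline behind this relation is the random (homogeneous Poisson) pattern: with plant density \lambda, the squared point-to-nearest-plant distance W = D^2 satisfies
    \[
    P(W > w) = e^{-\lambda \pi w},
    \]
    so W is exponential with mean 1/(\lambda\pi) and the maximum-likelihood estimate from n sample points is \hat{\lambda} = n / (\pi \sum_i w_i). The estimator developed in the paper is instead nonparametric, built from order statistics of the w_i, precisely so that it does not rely on this Poisson form.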

  18. Population structure, density and food sources of Terebralia palustris (Potamididae: Gastropoda) in a low intertidal Avicennia marina mangrove stand (Inhaca Island, Mozambique)

    NASA Astrophysics Data System (ADS)

    Penha-Lopes, Gil; Bouillon, Steven; Mangion, Perrine; Macia, Adriano; Paula, José

    2009-09-01

    Population structure and distribution of Terebralia palustris were compared with the environmental parameters within microhabitats in a monospecific stand of Avicennia marina in southern Mozambique. Stable carbon and nitrogen isotope analyses of T. palustris and potential food sources (leaves, pneumatophore epiphytes, and surface sediments) were examined to establish the feeding preferences of T. palustris. Stable isotope signatures of individuals of different size classes and from different microhabitats were compared with local food sources. Samples of surface sediments 2.5-10 m apart showed some variation (-21.2‰ to -23.0‰) in δ13C, probably due to different contributions from seagrasses, microalgae and mangrove leaves, while δ15N values varied between 8.7‰ and 15.8‰, indicating that there is a very high variability within a small-scale microcosm. Stable isotope signatures differed significantly between the T. palustris size classes and between individuals of the same size class, collected in different microhabitats. Results also suggested that smaller individuals feed on sediment, selecting mainly benthic microalgae, while larger individuals feed on sediment, epiphytes and mangrove leaves. Correlations were found between environmental parameters and gastropod population structure and distribution vs. the feeding preferences of individuals of different size classes and in different microhabitats. While organic content and the abundance of leaves were parameters that correlated best with the total density of gastropods (>85%), the abundance of pneumatophores and leaves, as well as grain size, correlated better with the gastropod size distribution (>65%). Young individuals (height < 3 cm) occur predominantly in microhabitats characterized by a low density of leaf litter and pneumatophores, reduced organic matter and larger grain size, these being characteristic of lower intertidal open areas that favour benthic microalgal growth. With increasing shell height, T. palustris individuals start occupying microhabitats nearer the mangrove trees characterized by large densities of pneumatophores and litter, as well as sediments of smaller grain size, leading to higher organic matter availability in the sediment.

  19. Monte Carlo based protocol for cell survival and tumour control probability in BNCT.

    PubMed

    Ye, S J

    1999-02-01

    A mathematical model to calculate the theoretical cell survival probability (nominally, the cell survival fraction) is developed to evaluate preclinical treatment conditions for boron neutron capture therapy (BNCT). A treatment condition is characterized by the neutron beam spectra, single or bilateral exposure, and the choice of boron carrier drug (boronophenylalanine (BPA) or boron sulfhydryl hydride (BSH)). The cell survival probability defined from Poisson statistics is expressed with the cell-killing yield, the 10B(n,alpha)7Li reaction density, and the tolerable neutron fluence. The radiation transport calculation from the neutron source to tumours is carried out using Monte Carlo methods: (i) reactor-based BNCT facility modelling to yield the neutron beam library at an irradiation port; (ii) dosimetry to limit the neutron fluence below a tolerance dose (10.5 Gy-Eq); (iii) calculation of the 10B(n,alpha)7Li reaction density in tumours. A shallow surface tumour could be effectively treated by single exposure producing an average cell survival probability of 10(-3)-10(-5) for probable ranges of the cell-killing yield for the two drugs, while a deep tumour will require bilateral exposure to achieve comparable cell kills at depth. With very pure epithermal beams eliminating thermal, low epithermal and fast neutrons, the cell survival can be decreased by factors of 2-10 compared with the unmodified neutron spectrum. A dominant effect of cell-killing yield on tumour cell survival demonstrates the importance of choice of boron carrier drug. However, these calculations do not indicate an unambiguous preference for one drug, due to the large overlap of tumour cell survival in the probable ranges of the cell-killing yield for the two drugs. The cell survival value averaged over a bulky tumour volume is used to predict the overall BNCT therapeutic efficacy, using a simple model of tumour control probability (TCP).
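    As a rough illustration of the Poisson argument (the paper's full expression, which also involves the tolerable neutron fluence, is not reproduced here), the survival probability is the probability of zero lethal events,
    \[
    S = e^{-\bar{m}}, \qquad \bar{m} \propto y \, N_{\text{reac}},
    \]
    where y is the cell-killing yield and N_reac the 10B(n,alpha)7Li reaction density per cell.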

  20. An Error-Entropy Minimization Algorithm for Tracking Control of Nonlinear Stochastic Systems with Non-Gaussian Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yunlong; Wang, Aiping; Guo, Lei

    This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are proposed to design the controller so that the tracking error is minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
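    A common way to implement this idea (the paper's recursive algorithm may differ in detail) is to estimate Renyi's quadratic entropy of the tracking errors through the Parzen "information potential" and then adjust the controller parameters along its gradient. A minimal sketch of the entropy estimate:

        import numpy as np

        def quadratic_error_entropy(errors, sigma=0.5):
            """Renyi quadratic entropy of the errors via Parzen windowing.

            H2 = -log V, where the information potential V is the average of a
            Gaussian kernel over all pairwise error differences.
            """
            e = np.asarray(errors, dtype=float)
            diff = e[:, None] - e[None, :]
            kernel = np.exp(-diff**2 / (4.0 * sigma**2)) / (2.0 * sigma * np.sqrt(np.pi))
            return -np.log(kernel.mean())

        # A controller update would move its parameters in the direction that
        # decreases this entropy, e.g. via numerical or analytic gradients.
        print(quadratic_error_entropy(np.random.default_rng(0).normal(size=200)))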

  1. Materials separation by dielectrophoresis

    NASA Technical Reports Server (NTRS)

    Sagar, A. D.; Rose, R. M.

    1988-01-01

    The feasibility of vacuum dielectrophoresis as a method for particulate materials separation in a microgravity environment was investigated. Particle separations were performed in a specially constructed miniature drop-tower with a residence time of about 0.3 sec. Particle motion in such a system is independent of size and based only on density and dielectric constant, for a given electric field. The observed separations and deflections exceeded the theoretical predictions, probably due to multiparticle effects. In any case, this approach should work well in microgravity for many classes of materials, with relatively simple apparatus and low weight and power requirements.

  2. Forming Super-Puffs Beyond 1 AU

    NASA Astrophysics Data System (ADS)

    Lee, Eve J.; Chiang, Eugene

    2017-06-01

    Super-puffs are an uncommon class of short-period planets seemingly too voluminous for their small masses (4-10 Rearth, 2-6 Mearth). Super-puffs most easily acquire their thick atmospheres as dust-free, rapidly cooling worlds outside ˜1AU where nebular gas is colder, less dense, and therefore less opaque. These puffy planets probably migrated in to their current orbits; they are expected to form the outer links of mean-motion resonant chains, and to exhibit atmospheric characteristics consistent with formation at large distances. I will also discuss, in general, how densities of planets can be used to infer their formation locations.

  3. PYFLOW 2.0. A new open-source software for quantifying the impact and depositional properties of dilute pyroclastic density currents

    NASA Astrophysics Data System (ADS)

    Dioguardi, Fabio; Dellino, Pierfrancesco

    2017-04-01

    Dilute pyroclastic density currents (DPDC) are ground-hugging turbulent gas-particle flows that move down volcano slopes under the combined action of density contrast and gravity. DPDCs are dangerous for human lives and infrastructures both because they exert a dynamic pressure in their direction of motion and transport volcanic ash particles, which remain in the atmosphere during the waning stage and after the passage of a DPDC. Deposits formed by the passage of a DPDC show peculiar characteristics that can be linked to flow field variables with sedimentological models. Here we present PYFLOW_2.0, a significantly improved version of the code of Dioguardi and Dellino (2014) that was already extensively used for the hazard assessment of DPDCs at Campi Flegrei and Vesuvius (Italy). In the latest new version the code structure, the computation times and the data input method have been updated and improved. A set of shape-dependent drag laws have been implemented as to better estimate the aerodynamic drag of particles transported and deposited by the flow. A depositional model for calculating the deposition time and rate of the ash and lapilli layer formed by the pyroclastic flow has also been included. This model links deposit (e.g. componentry, grainsize) to flow characteristics (e.g. flow average density and shear velocity), the latter either calculated by the code itself or given in input by the user. The deposition rate is calculated by summing the contributions of each grainsize class of all components constituting the deposit (e.g. juvenile particles, crystals, etc.), which are in turn computed as a function of particle density, terminal velocity, concentration and deposition probability. Here we apply the concept of deposition probability, previously introduced for estimating the deposition rates of turbidity currents (Stow and Bowen, 1980), to DPDCs, although with a different approach, i.e. starting from what is observed in the deposit (e.g. the weight fractions ratios between the different grainsize classes). In this way, more realistic estimates of the deposition rate can be obtained, as the deposition probability of different grainsize constituting the DPDC deposit could be different and not necessarily equal to unity. Calculations of the deposition rates of large-scale experiments, previously computed with different methods, have been performed as experimental validation and are presented. Results of model application to DPDCs and turbidity currents will also be presented. Dioguardi, F, and P. Dellino (2014), PYFLOW: A computer code for the calculation of the impact parameters of Dilute Pyroclastic Density Currents (DPDC) based on field data, Powder Technol., 66, 200-210, doi:10.1016/j.cageo.2014.01.013 Stow, D. A. V., and A. J. Bowen (1980), A physical model for the transport and sorting of fine-grained sediment by turbidity currents, Sedimentology, 27, 31-46
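    A minimal sketch of the per-class summation described above; the variable names and the exact flux expression are illustrative, and PYFLOW's internal formulation may differ.

        import numpy as np

        def deposition_rate(particle_density, settling_velocity, concentration,
                            deposition_probability):
            """Bulk sedimentation rate of a dilute current, summed over grain-size
            classes: each class contributes its particle density * near-bed
            volumetric concentration * terminal velocity * deposition probability.
            All arguments are per-class arrays."""
            return np.sum(np.asarray(particle_density) * np.asarray(concentration)
                          * np.asarray(settling_velocity) * np.asarray(deposition_probability))

        # Example with three grain-size classes (all numbers invented):
        print(deposition_rate([2500.0, 2400.0, 1000.0],   # kg/m^3
                              [0.8, 0.2, 0.05],           # m/s
                              [1e-4, 3e-4, 5e-4],         # volume fraction
                              [1.0, 0.8, 0.4]))           # deposition probability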

  4. A closer look at the probabilities of the notorious three prisoners.

    PubMed

    Falk, R

    1992-06-01

    The "problem of three prisoners", a counterintuitive teaser, is analyzed. It is representative of a class of probability puzzles where the correct solution depends on explication of underlying assumptions. Spontaneous beliefs concerning the problem and intuitive heuristics are reviewed. The psychological background of these beliefs is explored. Several attempts to find a simple criterion to predict whether and how the probability of the target event will change as a result of obtaining evidence are examined. However, despite the psychological appeal of these attempts, none proves to be valid in general. A necessary and sufficient condition for change in the probability of the target event, following observation of new data, is proposed. That criterion is an extension of the likelihood-ratio principle (which holds in the case of only two complementary alternatives) to any number of alternatives. Some didactic implications concerning the significance of the chance set-up and reliance on analogies are discussed.
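    The likelihood-ratio criterion is easy to check by direct enumeration. The sketch below uses the standard auxiliary assumption that, when both of the other prisoners are to be executed, the warden names either with probability 1/2; prisoner A's probability is then unchanged (likelihood ratio 1), while it is the remaining prisoner's that rises.

        from fractions import Fraction
        from collections import defaultdict

        # One of A, B, C is pardoned uniformly at random; A asks the warden to
        # name one of B or C who will be executed, and the warden answers truthfully.
        joint = defaultdict(Fraction)
        for pardoned in "ABC":
            prior = Fraction(1, 3)
            if pardoned == "A":                       # warden may name B or C
                joint[("A", "B")] += prior * Fraction(1, 2)
                joint[("A", "C")] += prior * Fraction(1, 2)
            else:                                     # warden must name the other one
                named = "C" if pardoned == "B" else "B"
                joint[(pardoned, named)] += prior

        evidence = sum(p for (_, named), p in joint.items() if named == "B")
        for who in "ABC":
            print(who, joint[(who, "B")] / evidence)  # A: 1/3, B: 0, C: 2/3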

  5. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection.

    PubMed

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.

  6. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection

    PubMed Central

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.

    2015-01-01

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request. PMID:25685112

  7. Spatial patterns and biodiversity in off-lattice simulations of a cyclic three-species Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Avelino, P. P.; Bazeia, D.; Losano, L.; Menezes, J.; de Oliveira, B. F.

    2018-02-01

    Stochastic simulations of cyclic three-species spatial predator-prey models are usually performed in square lattices with nearest-neighbour interactions starting from random initial conditions. In this letter we describe the results of off-lattice Lotka-Volterra stochastic simulations, showing that the emergence of spiral patterns does occur for sufficiently high values of the (conserved) total density of individuals. We also investigate the dynamics in our simulations, finding an empirical relation characterizing the dependence of the characteristic peak frequency and amplitude on the total density. Finally, we study the impact of the total density on the extinction probability, showing how a low population density may jeopardize biodiversity.

  8. Influence of anisotropic turbulence on the orbital angular momentum modes of Hermite-Gaussian vortex beam in the ocean.

    PubMed

    Li, Ye; Yu, Lin; Zhang, Yixin

    2017-05-29

    Applying the angular spectrum theory, we derive the expression of a new Hermite-Gaussian (HG) vortex beam. Based on the new Hermite-Gaussian (HG) vortex beam, we establish the model of the received probability density of orbital angular momentum (OAM) modes of this beam propagating through a turbulent ocean of anisotropy. By numerical simulation, we investigate the influence of oceanic turbulence and beam parameters on the received probability density of signal OAM modes and crosstalk OAM modes of the HG vortex beam. The results show that the influence of oceanic turbulence of anisotropy on the received probability of signal OAM modes is smaller than isotropic oceanic turbulence under the same condition, and the effect of salinity fluctuation on the received probability of the signal OAM modes is larger than the effect of temperature fluctuation. In the strong dissipation of kinetic energy per unit mass of fluid and the weak dissipation rate of temperature variance, we can decrease the effects of turbulence on the received probability of signal OAM modes by selecting a long wavelength and a larger transverse size of the HG vortex beam in the source's plane. In long distance propagation, the HG vortex beam is superior to the Laguerre-Gaussian beam for resisting the destruction of oceanic turbulence.

  9. Estimating trends in alligator populations from nightlight survey data

    USGS Publications Warehouse

    Fujisaki, Ikuko; Mazzotti, Frank J.; Dorazio, Robert M.; Rice, Kenneth G.; Cherkiss, Michael; Jeffery, Brian

    2011-01-01

    Nightlight surveys are commonly used to evaluate status and trends of crocodilian populations, but imperfect detection caused by survey- and location-specific factors makes it difficult to draw population inferences accurately from uncorrected data. We used a two-stage hierarchical model comprising population abundance and detection probability to examine recent abundance trends of American alligators (Alligator mississippiensis) in subareas of Everglades wetlands in Florida using nightlight survey data. During 2001–2008, there were declining trends in abundance of small and/or medium sized animals in a majority of subareas, whereas abundance of large sized animals had either demonstrated an increased or unclear trend. For small and large sized class animals, estimated detection probability declined as water depth increased. Detection probability of small animals was much lower than for larger size classes. The declining trend of smaller alligators may reflect a natural population response to the fluctuating environment of Everglades wetlands under modified hydrology. It may have negative implications for the future of alligator populations in this region, particularly if habitat conditions do not favor recruitment of offspring in the near term. Our study provides a foundation to improve inferences made from nightlight surveys of other crocodilian populations.

  10. Tiger in the fault tree jungle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, P.

    1976-01-01

    There is yet little evidence of serious efforts to apply formal reliability analysis methods to evaluate, or even to identify, potential common-mode failures (CMF) of reactor safeguard systems. The prospects for event logic modeling in this regard are examined by the primitive device of reviewing actual CMF experience in terms of what the analyst might have perceived a priori. Further insights into the probability and risk aspects of CMFs are sought through consideration of three key likelihood factors: (1) prior probability of the cause ever existing, (2) opportunities for removing the cause, and (3) probability that a CMF cause will be activated by conditions associated with a real system challenge. It was concluded that the principal needs for formal logical discipline in the endeavor to decrease CMF-related risks are to discover and to account for strong "energetic" dependency couplings that could arise in the major accidents usually classed as "hypothetical." This application would help focus research, design and quality assurance efforts to cope with major CMF causes. But without extraordinary challenges to the reactor safeguard systems, there must continue to be virtually no statistical evidence pertinent to that class of failure dependencies.

  11. Estimating trends in alligator populations from nightlight survey data

    USGS Publications Warehouse

    Fujisaki, Ikuko; Mazzotti, F.J.; Dorazio, R.M.; Rice, K.G.; Cherkiss, M.; Jeffery, B.

    2011-01-01

    Nightlight surveys are commonly used to evaluate status and trends of crocodilian populations, but imperfect detection caused by survey- and location-specific factors makes it difficult to draw population inferences accurately from uncorrected data. We used a two-stage hierarchical model comprising population abundance and detection probability to examine recent abundance trends of American alligators (Alligator mississippiensis) in subareas of Everglades wetlands in Florida using nightlight survey data. During 2001-2008, there were declining trends in abundance of small and/or medium sized animals in a majority of subareas, whereas abundance of large sized animals had either demonstrated an increased or unclear trend. For small and large sized class animals, estimated detection probability declined as water depth increased. Detection probability of small animals was much lower than for larger size classes. The declining trend of smaller alligators may reflect a natural population response to the fluctuating environment of Everglades wetlands under modified hydrology. It may have negative implications for the future of alligator populations in this region, particularly if habitat conditions do not favor recruitment of offspring in the near term. Our study provides a foundation to improve inferences made from nightlight surveys of other crocodilian populations. © 2011 US Government.

  12. Force Density Function Relationships in 2-D Granular Media

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.

    2004-01-01

    An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
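    The exponential/Bessel pairing mentioned above can be seen directly in the isotropic case (a sketch under the assumption of an isotropic force network; the paper treats the transform more generally). Writing the joint density of the force components as P(f)/(2\pi f) with f = \sqrt{f_x^2 + f_y^2}, the Cartesian marginal is
    \[
    p(f_x) = \frac{1}{\pi}\int_0^\infty \frac{P\!\left(\sqrt{f_x^2+u^2}\right)}{\sqrt{f_x^2+u^2}}\,du ,
    \]
    and for an exponential magnitude distribution P(f) = \beta e^{-\beta f} this evaluates to p(f_x) = (\beta/\pi)\,K_0(\beta|f_x|), the modified Bessel function of the second kind.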

  13. Proposal to evaluate the use of ERTS-A imagery in mapping and managing soil and range resources in the Sand Hills Region of Nebraska

    NASA Technical Reports Server (NTRS)

    Drew, J. V. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. There appears to be a direct relationship between densitometry values obtained with MSS band 5 imagery and forage density for those range sites measured on the imagery, provided site category identification is indicated by other forms of imagery or ground truth. Overlap of density values for different site categories with differing forage condition classes does not allow assigning a given forage density value for a given densitometer value unless the range site category is known.

  14. Parametric embedding for class visualization.

    PubMed

    Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B

    2007-09-01

    We propose a new method, parametric embedding (PE), that embeds objects with the class structure into a low-dimensional visualization space. PE takes as input a set of class conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
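    A minimal sketch of the PE objective, assuming (as stated) spherical equal covariances and uniform class priors in the embedding space; the optimizer and any regularization are left out.

        import numpy as np

        def pe_objective(Z, M, P, eps=1e-12):
            """Sum over points of KL(p(c|x_n) || q(c|z_n)).

            Z: (N, d) embedded points, M: (C, d) class means in the embedding,
            P: (N, C) class posteriors supplied as input to PE.
            q(c|z) is proportional to exp(-0.5 * ||z - m_c||^2).
            """
            sq = ((Z[:, None, :] - M[None, :, :]) ** 2).sum(-1)   # (N, C)
            logq = -0.5 * sq
            logq -= logq.max(axis=1, keepdims=True)               # numerical stability
            Q = np.exp(logq)
            Q /= Q.sum(axis=1, keepdims=True)
            return float(np.sum(P * (np.log(P + eps) - np.log(Q + eps))))

        # A fit would move Z and M (e.g. by gradient descent) to reduce this value;
        # each evaluation scales with N * C, as noted in the abstract.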

  15. Measurement error in earnings data: Using a mixture model approach to combine survey and register data.

    PubMed

    Meijer, Erik; Rohwedder, Susann; Wansbeek, Tom

    2012-01-01

    Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but they are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the methods to a Swedish data set. Our results show that register earnings data perform poorly if there is a (small) probability of a mismatch. Survey earnings data are more reliable, despite their measurement error. Predictors that combine both and take conditional class probabilities into account outperform all other predictors.

  16. Equilibrium problems for Raney densities

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Liu, Dang-Zheng; Zinn-Justin, Paul

    2015-07-01

    The Raney numbers are a class of combinatorial numbers generalising the Fuss-Catalan numbers. They are indexed by a pair of positive real numbers (p, r) with p > 1 and 0 < r ⩽ p, and form the moments of a probability density function. For certain (p, r) the latter has the interpretation as the density of squared singular values for certain random matrix ensembles, and in this context equilibrium problems characterising the Raney densities for (p, r) = (θ + 1, 1) and (θ/2 + 1, 1/2) have recently been proposed. Using two different techniques, one based on the Wiener-Hopf method for the solution of integral equations and the other on an analysis of the algebraic equation satisfied by the Green's function, we establish the validity of the equilibrium problems for general θ > 0 and similarly use both methods to identify the equilibrium problem for (p, r) = (θ/q + 1, 1/q), θ > 0 and q ∈ ℤ+. The Wiener-Hopf method is used to extend the latter to parameters (p, r) = (θ/q + 1, m + 1/q) for m a non-negative integer, and also to identify the equilibrium problem for a family of densities with moments given by certain binomial coefficients.
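    For integer parameters the Raney numbers have the standard closed form
    \[
    R_{p,r}(n) = \frac{r}{np+r}\binom{np+r}{n}, \qquad n = 0, 1, 2, \dots,
    \]
    with the Fuss-Catalan numbers recovered at r = 1; the Raney density referred to above is the compactly supported probability density whose n-th moment equals R_{p,r}(n). The paper works with the extension of the pair (p, r) to real values.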

  17. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
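    One simple statistic in the spirit of this idea (not one of the tests actually proposed in the paper) is the log-likelihood of the draws under the specified density, calibrated by Monte Carlo: an unusually low value signals that the draws sit where the density is small.

        import numpy as np
        from scipy import stats

        def low_density_test(draws, pdf, sampler, n_sim=2000, seed=0):
            """Monte Carlo p-value for T = sum_i log pdf(x_i) (small T is suspicious)."""
            rng = np.random.default_rng(seed)
            n = len(draws)
            t_obs = np.sum(np.log(pdf(draws)))
            t_null = np.array([np.sum(np.log(pdf(sampler(n, rng)))) for _ in range(n_sim)])
            return float(np.mean(t_null <= t_obs))

        # Example: test 100 draws against a claimed standard normal density.
        data = np.random.default_rng(1).normal(size=100)
        print(low_density_test(data, stats.norm.pdf, lambda n, rng: rng.normal(size=n)))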

  18. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling

    PubMed Central

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
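    Two standard inverse-S weighting families often used in this literature are shown below; the abstract does not commit to a particular parameterization, so the forms and parameter values here are illustrative only.

        import numpy as np

        def prelec_weight(p, gamma=0.65, delta=1.0):
            """Prelec weighting: w(p) = exp(-delta * (-ln p)**gamma)."""
            p = np.asarray(p, dtype=float)
            return np.exp(-delta * (-np.log(p)) ** gamma)

        def tk_weight(p, gamma=0.61):
            """Tversky-Kahneman one-parameter weighting function."""
            p = np.asarray(p, dtype=float)
            return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

        probs = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
        print(prelec_weight(probs))  # overweights small, underweights large probabilities
        print(tk_weight(probs))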

  19. Settlement, mortality and growth of the asari clam (Ruditapes philippinarum) for a collapsed population on a tidal flat in Nakatsu, Japan

    NASA Astrophysics Data System (ADS)

    Tezuka, Naoaki; Kamimura, Satomi; Hamaguchi, Masami; Saito, Hajime; Iwano, Hideki; Egashira, Junichi; Fukuda, Yuichi; Tawaratsumida, Takahiko; Nagamoto, Atsushi; Nakagawa, Koichi

    2012-04-01

    Although fluctuation and decline in bivalve populations have been reported worldwide, the underlying processes are not yet fully understood. This lack of understanding is partly due to an absence of demographic information for the early post-settlement period. This is the case particularly for annual production of the asari clam (also commonly known as the Manila clam, Ruditapes philippinarum) in Japan, which has greatly decreased in recent years. A remarkable decrease has been observed in the Nakatsu tidal flat, where current yields are less than 0.02% of the maximum yield. Possible explanations for this decline are: 1. limitation on recruitment due to overfishing; and 2. the demographic processes of growth and mortality have been altered by environmental changes, such as rise in seawater temperature or decrease in phytoplankton abundance. However, because of a lack of demographic information (e.g., the initial densities of larval settlement and mortality and growth rates post-settlement), the reasons for the decline, and the relative importance of each period in the life cycle in determining population abundance, remain unclear. Despite the decline, we observed high levels of recruitment of 0-year-class clams on the Nakatsu tidal flat in spring 2005, where more than 10,000 individuals m- 2 3-5 mm in shell length, estimated to have settled during the previous autumn, were observed. To obtain demographic information on the Nakatsu clams, we investigated two factors. First, we investigated the distribution of the 0-year-class clams and their rate of change in density as a combination of mortality, emigration and immigration on the whole tidal flat after a year. Second, we investigated the rate of change in the density and growth of clams after settlement in the center of the flat for 3 years. The rate of decrease in the density of the 0-year-class clams over the whole tidal flat after a year was greater at the stations where the initial density was higher. This suggests that density-dependent processes such as predation or competition may affect population levels. In the center of the flat, the initial density of settlement was more stable than the rate of decrease after settlement. These results suggest that the clam population on this tidal flat is probably suppressed by variable but high mortality rates after settlement, not by recruitment limitation.

  20. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
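    In generic form (the coefficients and covariates below are placeholders, not the fitted model), the probability that a stand reaches a specified regeneration density d given site variables x is modeled as
    \[
    \Pr(\text{regeneration density} \ge d \mid \mathbf{x})
      = \frac{1}{1 + \exp\!\big(-(\beta_0 + \beta_1 d + \boldsymbol{\beta}_2^{\top}\mathbf{x})\big)},
    \]
    with the response coded 1 when the observed regeneration meets the specified density and the parameters estimated by maximum likelihood, as described above.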

  1. Measures of dependence for multivariate Lévy distributions

    NASA Astrophysics Data System (ADS)

    Boland, J.; Hurd, T. R.; Pivato, M.; Seco, L.

    2001-02-01

    Recent statistical analysis of a number of financial databases is summarized. Increasing agreement is found that logarithmic equity returns show a certain type of asymptotic behavior of the largest events, namely that the probability density functions have power law tails with an exponent α≈3.0. This behavior does not vary much over different stock exchanges or over time, despite large variations in trading environments. The present paper proposes a class of multivariate distributions which generalizes the observed qualities of univariate time series. A new consequence of the proposed class is the "spectral measure" which completely characterizes the multivariate dependences of the extreme tails of the distribution. This measure on the unit sphere in M-dimensions, in principle completely general, can be determined empirically by looking at extreme events. If it can be observed and determined, it will prove to be of importance for scenario generation in portfolio risk management.

  2. UV, optical and infrared properties of star forming galaxies

    NASA Technical Reports Server (NTRS)

    Huchra, John P.

    1987-01-01

    The UVOIR properties of galaxies with extreme star formation rates are examined. These objects seem to fall into three distinct classes which can be called (1) extragalactic H II regions, (2) clumpy irregulars, and (3) starburst galaxies. Extragalactic H II regions are dominated by recently formed stars and may be considered 'young' galaxies if the definition of young is having the majority of total integrated star formation occurring in the last billion years. Clumpy irregulars are bursts of star formation superposed on an old population and are probably good examples of stochastic star formation. It is possible that star formation in these galaxies is triggered by the infall of gas clouds or dwarf companions. Starburst galaxies are much more luminous, dustier and more metal rich than the other classes. These objects show evidence for shock induced star formation where shocks may be caused by interaction with massive companions or are the result of an extremely strong density wave.

  3. Modeling take-over performance in level 3 conditionally automated vehicles.

    PubMed

    Gold, Christian; Happee, Riender; Bengler, Klaus

    2018-07-01

    Taking over vehicle control from a Level 3 conditionally automated vehicle can be a demanding task for a driver. The take-over determines the controllability of automated vehicle functions and thereby also traffic safety. This paper presents models predicting the main take-over performance variables take-over time, minimum time-to-collision, brake application and crash probability. These variables are considered in relation to the situational and driver-related factors time-budget, traffic density, non-driving-related task, repetition, the current lane and driver's age. Regression models were developed using 753 take-over situations recorded in a series of driving simulator experiments. The models were validated with data from five other driving simulator experiments of mostly unrelated authors with another 729 take-over situations. The models accurately captured take-over time, time-to-collision and crash probability, and moderately predicted the brake application. Especially the time-budget, traffic density and the repetition strongly influenced the take-over performance, while the non-driving-related tasks, the lane and drivers' age explained a minor portion of the variance in the take-over performances. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Inter-class competition in stage-structured populations: effects of adult density on life-history traits of adult and juvenile common lizards.

    PubMed

    San-Jose, Luis M; Peñalver-Alcázar, Miguel; Huyghe, Katleen; Breedveld, Merel C; Fitze, Patrick S

    2016-12-01

    Ecological and evolutionary processes in natural populations are largely influenced by the population's stage-structure. Commonly, different classes have different competitive abilities, e.g., due to differences in body size, suggesting that inter-class competition may be important and largely asymmetric. However, experimental evidence that inter-class competition is important is rare and restricted to marine fish. Here, we manipulated the adult density in six semi-natural populations of the European common lizard, Zootoca vivipara, while holding juvenile density constant. Adult density affected juveniles, but not adults, in line with inter-class competition. High adult density led to lower juvenile survival and growth before hibernation. In contrast, juvenile survival after hibernation was higher in populations with high adult density, pointing to relaxed inter-class competition. As a result, annual survival was not affected by adult density, showing that differences in pre- and post-hibernation survival balanced each other out. The intensity of inter-class competition affected reproduction, performance, and body size in juveniles. Path analyses revealed direct treatment effects on early growth (pre-hibernation) and no direct treatment effects on the parameters measured after hibernation. This points to allometry of treatment-induced differences in early growth, and it suggests that inter-class competition mainly affects the early growth of the competitively inferior class and thereby their future performance and reproduction. These results contrast with previous findings and, together with results in marine fish, suggest that the strength and direction of density dependence may depend on the degree of inter-class competition, and thus on the availability of resources used by the competing classes.

  5. The job content questionnaire in various occupational contexts: applying a latent class model

    PubMed Central

    Santos, Kionna Oliveira Bernardes; de Araújo, Tânia Maria; Karasek, Robert

    2017-01-01

    Objective To evaluate Job Content Questionnaire (JCQ) performance using the latent class model. Methods We analysed cross-sectional studies conducted in Brazil that examined three occupational categories, petroleum industry workers (n=489), teachers (n=4392) and primary healthcare workers (n=3078), plus 1552 urban workers from a representative sample of the city of Feira de Santana in Bahia, Brazil. An appropriate number of latent classes was extracted to describe each occupational category using latent class analysis, a multivariate method that evaluates constructs and takes into account the latent characteristics underlying the structure of measurement scales. The conditional probabilities of workers belonging to each class were then analysed graphically. Results Initially, the latent class analysis extracted four classes corresponding to the four job types (active, passive, low strain and high strain) proposed by the Job-Strain model (JSM) and operationalised by the JCQ. However, after taking into consideration the adequacy criteria to evaluate the number of extracted classes, three classes (active, low strain and high strain) were extracted from the studies of urban workers and teachers and four classes (active, passive, low strain and high strain) from the study of primary healthcare and petroleum industry workers. Conclusion The four job types proposed by the JSM were identified among primary healthcare and petroleum industry workers, groups with relatively high levels of skill discretion and decision authority. Three job types were identified for teachers and urban workers; however, passive job situations were not found within these groups. The latent class analysis enabled us to describe the conditional standard responses of the job types proposed by the model, particularly in relation to active jobs and high and low strain situations. PMID:28515185

  6. Cellular automata models for diffusion of information and highway traffic flow

    NASA Astrophysics Data System (ADS)

    Fuks, Henryk

    In the first part of this work we study a family of deterministic models for highway traffic flow which generalize cellular automaton rule 184. This family is parameterized by the speed limit m and another parameter k that represents degree of 'anticipatory driving'. We compare two driving strategies with identical maximum throughput: 'conservative' driving with high speed limit and 'anticipatory' driving with low speed limit. Those two strategies are evaluated in terms of accident probability. We also discuss fundamental diagrams of generalized traffic rules and examine limitations of maximum achievable throughput. Possible modifications of the model are considered. For rule 184, we present exact calculations of the order parameter in a transition from the moving phase to the jammed phase using the method of preimage counting, and use this result to construct a solution to the density classification problem. In the second part we propose a probabilistic cellular automaton model for the spread of innovations, rumors, news, etc., in a social system. We start from simple deterministic models, for which exact expressions for the density of adopters are derived. For a more realistic model, based on probabilistic cellular automata, we study the influence of a range of interaction R on the shape of the adoption curve. When the probability of adoption is proportional to the local density of adopters, and individuals can drop the innovation with some probability p, the system exhibits a second order phase transition. Critical line separating regions of parameter space in which asymptotic density of adopters is positive from the region where it is equal to zero converges toward the mean-field line when the range of the interaction increases. In a region between R=1 critical line and the mean-field line asymptotic density of adopters depends on R, becoming zero if R is too small (smaller than some critical value). This result demonstrates the importance of connectivity in diffusion of information. We also define a new class of automata networks which incorporates non-local interactions, and discuss its applicability in modeling of diffusion of innovations.
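    As a point of reference for the first part, the base model (speed limit m = 1, no anticipation) is cellular automaton rule 184, which can be simulated in a few lines; the generalized rules studied in the thesis add higher speed limits and the anticipation parameter k on top of this update.

        import numpy as np

        def rule184_step(cells):
            """One synchronous update of CA rule 184 on a ring: a car (1) moves
            right iff the next cell is empty.  Equivalently, a cell becomes 1 when
            (left, centre) == (1, 0) or (centre, right) == (1, 1)."""
            left = np.roll(cells, 1)
            right = np.roll(cells, -1)
            return ((left == 1) & (cells == 0)) | ((cells == 1) & (right == 1))

        rng = np.random.default_rng(0)
        road = rng.random(100) < 0.4           # density 0.4 < 1/2: free-flow phase
        for _ in range(50):
            road = rule184_step(road)
        flux = np.mean(road & ~np.roll(road, -1))   # fraction of sites with a car about to move
        print(int(road.sum()), "cars, flux per site =", flux)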

  7. Probabilistic-driven oriented Speckle reducing anisotropic diffusion with application to cardiac ultrasonic images.

    PubMed

    Vegas-Sanchez-Ferrero, G; Aja-Fernandez, S; Martin-Fernandez, M; Frangi, A F; Palencia, C

    2010-01-01

    A novel anisotropic diffusion filter is proposed in this work with application to cardiac ultrasonic images. It includes probabilistic models which describe the probability density function (PDF) of tissues and adapts the diffusion tensor to the image iteratively. For this purpose, a preliminary study is performed in order to select the probability models that best fit the statistical behavior of each tissue class in cardiac ultrasonic images. Then, the parameters of the diffusion tensor are defined taking into account the statistical properties of the image at each voxel. When the structure tensor of the probability of belonging to each tissue is included in the diffusion tensor definition, better boundary estimates can be obtained than by calculating the boundaries directly from the image. This is the main contribution of this work. Additionally, the proposed method follows the statistical properties of the image in each iteration. This is considered a second contribution, since state-of-the-art methods assume that the noise and statistical properties of the image do not change during the filtering process.

  8. Does probability of occurrence relate to population dynamics?

    USGS Publications Warehouse

    Thuiller, Wilfried; Münkemüller, Tamara; Schiffers, Katja H.; Georges, Damien; Dullinger, Stefan; Eckhart, Vincent M.; Edwards, Thomas C.; Gravel, Dominique; Kunstler, Georges; Merow, Cory; Moore, Kara; Piedallu, Christian; Vissault, Steve; Zimmermann, Niklaus E.; Zurell, Damaris; Schurr, Frank M.

    2014-01-01

    Hutchinson defined species' realized niche as the set of environmental conditions in which populations can persist in the presence of competitors. In terms of demography, the realized niche corresponds to the environments where the intrinsic growth rate (r) of populations is positive. Observed species occurrences should reflect the realized niche when additional processes like dispersal and local extinction lags do not have overwhelming effects. Despite the foundational nature of these ideas, quantitative assessments of the relationship between range-wide demographic performance and occurrence probability have not been made. This assessment is needed both to improve our conceptual understanding of species' niches and ranges and to develop reliable mechanistic models of species geographic distributions that incorporate demography and species interactions. The objective of this study is to analyse how demographic parameters (intrinsic growth rate r and carrying capacity K) and population density (N) relate to occurrence probability (Pocc). We hypothesized that these relationships vary with species' competitive ability. Demographic parameters, density, and occurrence probability were estimated for 108 tree species from four temperate forest inventory surveys (Québec, western USA, France and Switzerland). We used published information on shade tolerance as an indicator of light competition strategy, assuming that high tolerance denotes high competitive capacity in stable forest environments. Interestingly, relationships between demographic parameters and occurrence probability did not vary substantially across degrees of shade tolerance and regions. Although they were influenced by the uncertainty in the estimation of the demographic parameters, we found that r was generally negatively correlated with Pocc, while N, and for most regions K, were generally positively correlated with Pocc. Thus, in temperate forest trees the regions of highest occurrence probability are those with high densities but slow intrinsic population growth rates. The uncertain relationships between demography and occurrence probability suggest caution when linking species distribution and demographic models.

  9. Perceived risk associated with ecstasy use: a latent class analysis approach

    PubMed Central

    Martins, SS; Carlson, RG; Alexandre, PK; Falck, RS

    2011-01-01

    This study aims to define categories of perceived health problems among ecstasy users based on observed clustering of their perceptions of ecstasy-related health problems. Data from a community sample of ecstasy users (n=402) aged 18 to 30, in Ohio, was used in this study. Data were analyzed via Latent Class Analysis (LCA) and regression. This study identified five different subgroups of ecstasy users based on their perceptions of health problems they associated with their ecstasy use. Almost one third of the sample (28.9%) belonged to a class with a "low level of perceived problems" (Class 4). About one fourth (25.6%) of the sample (Class 2) had high probabilities of perceiving problems on sexual-related items, but generally low or moderate probabilities of perceiving problems in other areas. Roughly one-fifth of the sample (21.1%, Class 1) had moderate probabilities of perceiving ecstasy health-related problems in all areas. A small proportion of respondents (11.9%, Class 5) had high probabilities of reporting perceived memory and cognitive problems, while another small class (12.4%, Class 3) had high probabilities of perceiving ecstasy-related problems in all areas. A large proportion of ecstasy users perceive either low or moderate risk associated with their ecstasy use. It is important to further investigate whether lower levels of risk perception are associated with persistence of ecstasy use. PMID:21296504

  10. Series approximation to probability densities

    NASA Astrophysics Data System (ADS)

    Cohen, L.

    2018-04-01

    One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
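
    For reference, the classical Gram-Charlier A correction mentioned above can be written down in a few lines; the cumulant values in the sketch below are illustrative choices, not taken from the paper, and they make visible the failure mode the paper targets: the truncated series can dip below zero.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy.stats import norm

# Illustrative sketch (not the paper's positive-by-construction expansion):
# truncated Gram-Charlier A series correcting a standard Gaussian with
# third and fourth cumulants, using probabilists' Hermite polynomials.
def gram_charlier_pdf(x, skew, excess_kurtosis):
    he3 = hermeval(x, [0, 0, 0, 1])          # He_3(x)
    he4 = hermeval(x, [0, 0, 0, 0, 1])       # He_4(x)
    correction = 1.0 + skew / 6.0 * he3 + excess_kurtosis / 24.0 * he4
    return norm.pdf(x) * correction

x = np.linspace(-5, 5, 1001)
p = gram_charlier_pdf(x, skew=1.0, excess_kurtosis=0.5)
print("minimum of truncated series:", p.min())   # a negative value means this is not a valid density
```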

  11. Unstable density distribution associated with equatorial plasma bubble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kherani, E. A., E-mail: esfhan.kherani@inpe.br; Meneses, F. Carlos de; Bharuthram, R.

    2016-04-15

    In this work, we present a simulation study of an equatorial plasma bubble (EPB) in the evening-time ionosphere. The fluid simulation is performed with a high grid resolution, enabling us to probe the steepened updrafting density structures inside the EPB. Inside the density depletion that eventually evolves into an EPB, both density and updraft are functions of space, from which the density as an implicit function of updraft velocity, or the density distribution function, is constructed. In the present study, this distribution function and the corresponding probability distribution function are found to evolve from Maxwellian to non-Maxwellian as the initial small depletion grows into an EPB. This non-Maxwellian distribution is of a gentle-bump type, consistent with the recently reported distribution within EPBs from space-borne measurements, which offers favorable conditions for small-scale kinetic instabilities.

  12. Global warming precipitation accumulation increases above the current-climate cutoff scale

    PubMed Central

    Sahany, Sandeep; Stechmann, Samuel N.; Bernstein, Diana N.

    2017-01-01

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff. PMID:28115693
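
    The qualitative argument above (a power-law range terminated by a cutoff whose extension strongly amplifies the largest events) can be illustrated with a toy accumulation pdf; the functional form and parameter values below are assumptions for illustration, not the paper's fitted theory.

```python
import numpy as np

# Toy sketch: an accumulation pdf with a power-law range and an exponential
# cutoff, p(s) ~ s^(-tau) * exp(-s / s_c), normalized numerically on a grid.
def accumulation_pdf(s, tau, s_c):
    p = s**(-tau) * np.exp(-s / s_c)
    return p / np.trapz(p, s)

s = np.logspace(-1, 3, 2000)                        # accumulation sizes (arbitrary units)
p_now = accumulation_pdf(s, tau=1.5, s_c=50.0)      # current-climate cutoff (illustrative)
p_warm = accumulation_pdf(s, tau=1.5, s_c=70.0)     # modestly extended cutoff under warming

big = s > 200                                       # events well above the current cutoff
ratio = np.trapz(p_warm[big], s[big]) / np.trapz(p_now[big], s[big])
print("probability increase factor for the largest accumulations:", round(ratio, 1))
```

    Even a modest shift of the cutoff multiplies the probability of the rarest, largest accumulations by a large factor, which is the qualitative behavior described in the abstract.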

  13. Global warming precipitation accumulation increases above the current-climate cutoff scale

    NASA Astrophysics Data System (ADS)

    Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.; Bernstein, Diana N.

    2017-02-01

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.

  14. Global warming precipitation accumulation increases above the current-climate cutoff scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.

  15. Global warming precipitation accumulation increases above the current-climate cutoff scale.

    PubMed

    Neelin, J David; Sahany, Sandeep; Stechmann, Samuel N; Bernstein, Diana N

    2017-02-07

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.

  16. Global warming precipitation accumulation increases above the current-climate cutoff scale

    DOE PAGES

    Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.; ...

    2017-01-23

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.

  17. A simple method to calculate first-passage time densities with arbitrary initial conditions

    NASA Astrophysics Data System (ADS)

    Nyberg, Markus; Ambjörnsson, Tobias; Lizana, Ludvig

    2016-06-01

    Numerous applications, from biology and physics to economics, depend on the density of first crossings over a boundary. Motivated by the lack of general-purpose analytical tools for computing first-passage time densities (FPTDs) for complex problems, we propose a simple new method based on the independent interval approximation (IIA). We generalise previous formulations of the IIA to include arbitrary initial conditions as well as to deal with discrete-time and non-smooth continuous-time processes. We derive a closed-form expression, in z-transform and Laplace-transform space, for the FPTD to a boundary in one dimension. Two classes of problems are analysed in detail: discrete-time symmetric random walks (Markovian) and continuous-time Gaussian stationary processes (Markovian and non-Markovian). Our results are in good agreement with Langevin dynamics simulations.
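
    As a baseline against which an IIA-type approximation could be checked, the sketch below estimates the FPTD of a discrete-time symmetric random walk to an absorbing boundary by direct Monte Carlo; the starting point, boundary, and sample sizes are hypothetical choices, not taken from the paper.

```python
import numpy as np

# Monte Carlo reference: first-passage time density of a symmetric +/-1
# random walk started at x0, with an absorbing boundary at 0.
def fptd_random_walk(x0=5, n_walkers=100_000, t_max=500, seed=1):
    rng = np.random.default_rng(seed)
    pos = np.full(n_walkers, x0, dtype=np.int64)
    alive = np.ones(n_walkers, dtype=bool)
    density = np.zeros(t_max)
    for t in range(t_max):
        pos[alive] += rng.choice([-1, 1], size=alive.sum())
        crossed = alive & (pos <= 0)
        density[t] = crossed.sum() / n_walkers   # fraction first crossing at this step
        alive &= ~crossed
    return density

fptd = fptd_random_walk()
print("survival probability after 500 steps:", 1 - fptd.sum())
```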

  18. Testing the consistency of wildlife data types before combining them: the case of camera traps and telemetry.

    PubMed

    Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A

    2014-04-01

    Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.
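
    A minimal version of the first step described above, relating per-occasion camera detection to nearby telemetry relocations with a binomial GLM, might look like the sketch below; the data are simulated and the coefficient values are assumptions, not the fisher dataset of the study.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical, simulated data: relate a camera trap's detection outcome
# (0/1 per occasion) to the number of telemetry relocations recorded within
# a fixed radius of that camera.
rng = np.random.default_rng(7)
n = 300
relocations = rng.poisson(4, size=n)                       # relocations within, e.g., 250 m
true_p = 1 / (1 + np.exp(-(-2.0 + 0.35 * relocations)))    # assumed logit-linear relationship
detected = rng.binomial(1, true_p)

X = sm.add_constant(relocations.astype(float))
model = sm.GLM(detected, X, family=sm.families.Binomial()).fit()
print(model.summary())   # a positive slope indicates telemetry-based space use predicts camera detections
```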

  19. A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.

    PubMed

    Suk, Heung-Il; Lee, Seong-Whan

    2013-02-01

    As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered as a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of the probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method by extending a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing the results and its success on three public databases.

  20. Artificial neural networks for the diagnosis of aggressive periodontitis trained by immunologic parameters.

    PubMed

    Papantonopoulos, Georgios; Takahashi, Keiso; Bountis, Tasos; Loos, Bruno G

    2014-01-01

    There is neither a single clinical, microbiological, histopathological or genetic test, nor combinations of them, to discriminate aggressive periodontitis (AgP) from chronic periodontitis (CP) patients. We aimed to estimate probability density functions of clinical and immunologic datasets derived from periodontitis patients and construct artificial neural networks (ANNs) to correctly classify patients into the AgP or CP class. The fit of probability distributions on the datasets was tested by the Akaike information criterion (AIC). ANNs were trained by cross entropy (CE) values estimated between probabilities of showing certain levels of immunologic parameters and a reference mode probability proposed by kernel density estimation (KDE). The weight decay regularization parameter of the ANNs was determined by 10-fold cross-validation. Possible evidence for 2 clusters of patients on cross-sectional and longitudinal bone loss measurements was revealed by KDE. Two to 7 clusters were shown on datasets of CD4/CD8 ratio, CD3, monocyte, eosinophil, neutrophil and lymphocyte counts, IL-1, IL-2, IL-4, IFN-γ and TNF-α levels from monocytes, and antibody levels against A. actinomycetemcomitans (A.a.) and P. gingivalis (P.g.). ANNs gave 90%-98% accuracy in classifying patients into either AgP or CP. The best overall prediction was given by an ANN with CE of monocyte, eosinophil, neutrophil counts and CD4/CD8 ratio as inputs. ANNs can be powerful in classifying periodontitis patients into AgP or CP, when fed by CE values based on KDE. Therefore, ANNs can be employed for accurate diagnosis of AgP or CP by using relatively simple and conveniently obtained parameters, like leukocyte counts in peripheral blood. This will allow clinicians to better adapt specific treatment protocols for their AgP and CP patients.
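
    The kernel density estimation step that feeds the networks can be sketched as follows; the values are synthetic stand-ins for an immunologic variable such as the CD4/CD8 ratio, not patient data.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Illustrative KDE step only: estimate the probability density of a single
# parameter and locate its modes, which serve as reference points when
# building cross-entropy inputs for a classifier.
rng = np.random.default_rng(3)
values = np.concatenate([rng.normal(1.2, 0.2, 60),    # hypothetical patient cluster 1
                         rng.normal(2.1, 0.3, 40)])   # hypothetical patient cluster 2

kde = gaussian_kde(values)
grid = np.linspace(values.min(), values.max(), 500)
density = kde(grid)
interior = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
modes = grid[1:-1][interior]
print("estimated modes:", modes)   # two well-separated modes suggest two patient clusters
```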

  1. Prediction-error variance in Bayesian model updating: a comparative study

    NASA Astrophysics Data System (ADS)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of prediction error variances plays an important role and is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical to the robustness of structural model updating, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
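
    Treatment 3) above, updating the prediction-error variance as an uncertain parameter, can be illustrated on a toy model; the sketch uses a plain random-walk Metropolis sampler rather than Transitional MCMC, and all model and prior choices are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import norm

# Toy model y = theta * x with Gaussian prediction error of unknown scale
# sigma; both theta and log(sigma) are updated jointly.
def log_posterior(theta, log_sigma, x, y):
    sigma = np.exp(log_sigma)
    log_like = np.sum(norm.logpdf(y - theta * x, scale=sigma))
    log_prior = norm.logpdf(theta, scale=10.0) + norm.logpdf(log_sigma, scale=2.0)
    return log_like + log_prior

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + rng.normal(0, 0.1, size=x.size)      # synthetic data, true sigma = 0.1

sample = np.array([0.0, 0.0])                      # (theta, log_sigma)
lp = log_posterior(*sample, x, y)
chain = []
for _ in range(5000):                              # crude random-walk Metropolis
    prop = sample + rng.normal(0, 0.05, size=2)
    lp_prop = log_posterior(*prop, x, y)
    if np.log(rng.random()) < lp_prop - lp:
        sample, lp = prop, lp_prop
    chain.append(sample.copy())
chain = np.array(chain)[2000:]                     # discard burn-in
print("posterior mean theta, sigma:", chain[:, 0].mean(), np.exp(chain[:, 1]).mean())
```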

  2. Shear coaxial injector atomization phenomena for combusting and non-combusting conditions

    NASA Technical Reports Server (NTRS)

    Pal, S.; Moser, M. D.; Ryan, H. M.; Foust, M. J.; Santoro, R. J.

    1992-01-01

    Measurements of LOX drop size and velocity in a uni-element liquid propellant rocket chamber are presented. The use of the Phase Doppler Particle Analyzer in obtaining temporally-averaged probability density functions of drop size in a harsh rocket environment has been demonstrated. Complementary measurements of drop size/velocity for simulants under cold flow conditions are also presented. The drop size/velocity measurements made for combusting and cold flow conditions are compared, and the results indicate that there are significant differences in the two flowfields.

  3. Tailoring community-based wellness initiatives with latent class analysis--Massachusetts Community Transformation Grant projects.

    PubMed

    Arcaya, Mariana; Reardon, Timothy; Vogel, Joshua; Andrews, Bonnie K; Li, Wenjun; Land, Thomas

    2014-02-13

    Community-based approaches to preventing chronic diseases are attractive because of their broad reach and low costs, and as such, are integral components of health care reform efforts. Implementing community-based initiatives across Massachusetts' municipalities presents both programmatic and evaluation challenges. For effective delivery and evaluation of the interventions, establishing a community typology that groups similar municipalities provides a balanced and cost-effective approach. Through a series of key informant interviews and exploratory data analysis, we identified 55 municipal-level indicators across 6 domains for the typology analysis. The domains were health behaviors and health outcomes, housing and land use, transportation, retail environment, socioeconomics, and demographic composition. A latent class analysis was used to identify 10 groups of municipalities based on similar patterns of municipal-level indicators across the domains. Our model with 10 latent classes yielded excellent classification certainty (relative entropy = .995, minimum class probability for any class = .871), and differentiated distinct groups of municipalities based on health-relevant needs and resources. The classes differentiated healthy and racially and ethnically diverse urban areas from cities with similar population densities and diversity but worse health outcomes, affluent communities from lower-income rural communities, and mature suburban areas from rapidly suburbanizing communities with different healthy-living challenges. Latent class analysis is a tool that may aid in the planning, communication, and evaluation of community-based wellness initiatives such as Community Transformation Grants projects administered by the Centers for Disease Control and Prevention.

  4. A new exact anisotropic solution of embedding class one

    NASA Astrophysics Data System (ADS)

    Maurya, S. K.; Gupta, Y. K.; T. T., Smitha; Rahaman, Farook

    2016-07-01

    We have presented a new anisotropic solution of Einstein's field equations for compact-star models. Einstein's field equations are solved by using the class-one condition (S.N. Pandey, S.P. Sharma, Gen. Relativ. Gravit. 14, 113 (1982)). We constructed the expression for the anisotropy factor (Δ) by using the pressure anisotropy condition and thereafter obtained the physical parameters such as energy density and radial and transverse pressure. These model parameters are well-behaved inside the star and satisfy all the required physical conditions. We also observed the interesting result that all physical parameters depend upon the anisotropy factor (Δ). The mass and radius of the present compact-star models are quite compatible with observed astrophysical compact stellar objects such as Her X-1, RXJ 1856-37, SAX J1808.4-3658(SS1), and SAX J1808.4-3658(SS2).

  5. Quantum fluctuation theorems and generalized measurements during the force protocol

    NASA Astrophysics Data System (ADS)

    Watanabe, Gentaro; Venkatesh, B. Prasanna; Talkner, Peter; Campisi, Michele; Hänggi, Peter

    2014-03-01

    Generalized measurements of an observable performed on a quantum system during a force protocol are investigated and conditions that guarantee the validity of the Jarzynski equality and the Crooks relation are formulated. In agreement with previous studies by M. Campisi, P. Talkner, and P. Hänggi [Phys. Rev. Lett. 105, 140601 (2010), 10.1103/PhysRevLett.105.140601; Phys. Rev. E 83, 041114 (2011), 10.1103/PhysRevE.83.041114], we find that these fluctuation relations are satisfied for projective measurements; however, for generalized measurements special conditions on the operators determining the measurements need to be met. For the Jarzynski equality to hold, the measurement operators of the forward protocol must be normalized in a particular way. The Crooks relation additionally entails that the backward and forward measurement operators depend on each other. Yet, quite some freedom is left as to how the two sets of operators are interrelated. This ambiguity is removed if one considers selective measurements, which are specified by a joint probability density function of work and measurement results of the considered observable. We find that the respective forward and backward joint probabilities satisfy the Crooks relation only if the measurement operators of the forward and backward protocols are the time-reversed adjoints of each other. In this case, the work probability density function conditioned on the measurement result satisfies a modified Crooks relation. The modification appears as a protocol-dependent factor that can be expressed by the information gained by the measurements during the forward and backward protocols. Finally, detailed fluctuation theorems with an arbitrary number of intervening measurements are obtained.

  6. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
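
    A stripped-down version of the idea, fitting a parametric, nonstationary density by maximum likelihood and extrapolating it forward, is sketched below with a Gaussian whose mean drifts linearly in time; the model family and data are illustrative assumptions, not those of the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Nonstationary Gaussian density with linearly drifting mean, fitted by
# maximum likelihood and then extrapolated to a future time.
def neg_log_lik(params, t, x):
    mu0, trend, log_sigma = params
    return -np.sum(norm.logpdf(x, loc=mu0 + trend * t, scale=np.exp(log_sigma)))

rng = np.random.default_rng(2)
t = np.arange(200.0)
x = 1.0 - 0.01 * t + rng.normal(0, 0.3, size=t.size)   # synthetic drifting series

fit = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0], args=(t, x))
mu0, trend, log_sigma = fit.x
t_future = 300.0
print("forecast density at t=300: mean %.2f, sd %.2f"
      % (mu0 + trend * t_future, np.exp(log_sigma)))
```

    A full treatment would also propagate parameter uncertainty into the forecast density, as the abstract emphasizes; the sketch reports only the point estimate.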

  7. Inference of reaction rate parameters based on summary statistics from experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain-branching reaction H + O2 → OH + O. Available published data are in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders-of-magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.

  8. Inference of reaction rate parameters based on summary statistics from experiments

    DOE PAGES

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin; ...

    2016-10-15

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain-branching reaction H + O2 → OH + O. Available published data are in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders-of-magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.

  9. Molecular-Level Study of the Effect of Prior Axial Compression/Torsion on the Axial-Tensile Strength of PPTA Fibers

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Yavari, R.; Ramaswami, S.; Snipes, J. S.; Yen, C.-F.; Cheeseman, B. A.

    2013-11-01

    A comprehensive all-atom molecular-level computational investigation is carried out in order to identify and quantify: (i) the effect of prior longitudinal-compressive or axial-torsional loading on the longitudinal-tensile behavior of p-phenylene terephthalamide (PPTA) fibrils/fibers; and (ii) the role various microstructural/topological defects play in affecting this behavior. Experimental and computational results available in the relevant open literature were utilized to construct various defects within the molecular-level model and to assign the concentration to these defects consistent with the values generally encountered under "prototypical" PPTA-polymer synthesis and fiber fabrication conditions. When quantifying the effect of the prior longitudinal-compressive/axial-torsional loading on the longitudinal-tensile behavior of PPTA fibrils, the stochastic nature of the size/potency of these defects was taken into account. The results obtained revealed that: (a) due to the stochastic nature of the defect type, concentration/number density and size/potency, the PPTA fibril/fiber longitudinal-tensile strength is a statistical quantity possessing a characteristic probability density function; (b) application of the prior axial compression or axial torsion to the PPTA imperfect single-crystalline fibrils degrades their longitudinal-tensile strength and only slightly modifies the associated probability density function; and (c) introduction of the fibril/fiber interfaces into the computational analyses showed that prior axial torsion can induce major changes in the material microstructure, causing significant reductions in the PPTA-fiber longitudinal-tensile strength and appreciable changes in the associated probability density function.

  10. A unified framework for constructing, tuning and assessing photometric redshift density estimates in a selection bias setting

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Izbicki, R.; Lee, A. B.

    2017-07-01

    Photometric redshift estimation is an indispensable tool of precision cosmology. One problem that plagues the use of this tool in the era of large-scale sky surveys is that the bright galaxies that are selected for spectroscopic observation do not have properties that match those of (far more numerous) dimmer galaxies; thus, ill-designed empirical methods that produce accurate and precise redshift estimates for the former generally will not produce good estimates for the latter. In this paper, we provide a principled framework for generating conditional density estimates (i.e., photometric redshift PDFs) that takes into account selection bias and the covariate shift that this bias induces. We base our approach on the assumption that the probability that astronomers label a galaxy (i.e., determine its spectroscopic redshift) depends only on its measured (photometric and perhaps other) properties x and not on its true redshift. With this assumption, we can explicitly write down risk functions that allow us to both tune and compare methods for estimating importance weights (i.e., the ratio of densities of unlabelled and labelled galaxies for different values of x) and conditional densities. We also provide a method for combining multiple conditional density estimates for the same galaxy into a single estimate with better properties. We apply our risk functions to an analysis of ≈10^6 galaxies, mostly observed by the Sloan Digital Sky Survey, and demonstrate through multiple diagnostic tests that our method achieves good conditional density estimates for the unlabelled galaxies.
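
    One standard way to estimate the importance weights described above (the density ratio of unlabelled to labelled galaxies at covariate value x) is via a probabilistic classifier; the sketch below uses that device on synthetic two-dimensional covariates and is not necessarily the estimator tuned by the paper's risk functions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Density-ratio estimation by classification: train a classifier to separate
# unlabelled from labelled samples, then convert its predicted probabilities
# into importance weights beta(x) = f_unlabelled(x) / f_labelled(x).
rng = np.random.default_rng(5)
x_labelled = rng.normal(0.0, 1.0, size=(1000, 2))      # e.g. bright, spectroscopic galaxies
x_unlabelled = rng.normal(0.5, 1.3, size=(5000, 2))    # dimmer photometric-only galaxies (covariate shift)

X = np.vstack([x_labelled, x_unlabelled])
z = np.concatenate([np.zeros(len(x_labelled)), np.ones(len(x_unlabelled))])
clf = LogisticRegression().fit(X, z)

p_unlab = clf.predict_proba(x_labelled)[:, 1]
weights = (p_unlab / (1 - p_unlab)) * (len(x_labelled) / len(x_unlabelled))
print("mean importance weight on the labelled set:", weights.mean())
```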

  11. Density matrix approach to the hot-electron stimulated photodesorption

    NASA Astrophysics Data System (ADS)

    Kühn, Oliver; May, Volkhard

    1996-07-01

    The dissipative dynamics of the laser-induced nonthermal desorption of small molecules from a metal surface is investigated here. Based on the density matrix formalism a multi-state model is introduced which explicitly takes into account the continuum of electronic states in the metal. Various relaxation mechanisms for the electronic degrees of freedom are shown to govern the desorption dynamics and hence the desorption probability. Particular attention is paid to the modeling of the time dependence of the electron energy distribution in the metal which reflects different excitation conditions.

  12. The structure of western warbler assemblages: Analysis of foraging behavior and habitat selection in Oregon

    USGS Publications Warehouse

    Morrison, Michael L.

    1981-01-01

    This study examines the foraging behavior and habitat selection of a MacGillivray's (Oporornis tolmiei)-Orange-crowned (Vermivora celata)-Wilson's (Wilsonia pusilla) warbler assemblage that occurred on early-growth clearcuts in western Oregon during breeding. Sites were divided into two groups based on the presence or absence of deciduous trees. Density estimates for each species were nearly identical between site classes except for Wilson's, whose density declined on nondeciduous tree sites. Analysis of vegetation parameters within the territories of the species identified deciduous tree cover as the variable of primary importance in the separation of warblers on each site, so that the assemblage could be arranged on a continuum of increasing deciduous tree cover. MacGillivray's and Wilson's extensively used shrub cover and deciduous tree cover, respectively; Orange-crowns were associated with both vegetation types. When the deciduous tree cover was reduced, Orange-crowns concentrated foraging activities in shrub cover and maintained nondisturbance densities. Indices of foraging-height diversity showed a marked decrease after the removal of deciduous trees. All species except MacGillivray's foraged lower in the vegetative substrate on the nondeciduous tree sites; MacGillivray's concentrated foraging activities in the low shrub cover on both sites. Indices of foraging overlap revealed a general pattern of decreased segregation by habitat after removal of deciduous trees. I suggest that the basic patterns of foraging behavior and habitat selection evidenced today in western North America were initially developed by ancestral warblers before their invasion of the west. Species successfully colonizing western habitats were probably preadapted to the conditions they encountered, with new habitats occupied without obvious evolutionary modifications.

  13. On the Estimation of Disease Prevalence by Latent Class Models for Screening Studies Using Two Screening Tests with Categorical Disease Status Verified in Test Positives Only

    PubMed Central

    Chu, Haitao; Zhou, Yijie; Cole, Stephen R.; Ibrahim, Joseph G.

    2010-01-01

    To evaluate the probabilities of a disease state, ideally all subjects in a study should be diagnosed by a definitive diagnostic or gold standard test. However, since definitive diagnostic tests are often invasive and expensive, it is generally unethical to apply them to subjects whose screening tests are negative. In this article, we consider latent class models for screening studies with two imperfect binary diagnostic tests and a definitive categorical disease status measured only for those with at least one positive screening test. Specifically, we discuss a conditionally independent and three homogeneous conditionally dependent latent class models and assess the impact of misspecification of the dependence structure on the estimation of disease category probabilities using frequentist and Bayesian approaches. Interestingly, the three homogeneous dependent models can provide identical goodness-of-fit but substantively different estimates for a given study. However, the parametric form of the assumed dependence structure itself is not "testable" from the data, and thus the dependence structure modeling considered here can only be viewed as a sensitivity analysis concerning a more complicated non-identifiable model potentially involving a heterogeneous dependence structure. Furthermore, we discuss Bayesian model averaging together with its limitations as an alternative way to partially address this particularly challenging problem. The methods are applied to two cancer screening studies, and simulations are conducted to evaluate the performance of these methods. In summary, further research is needed to reduce the impact of model misspecification on the estimation of disease prevalence in such settings. PMID:20191614

  14. Analysis of high-resolution foreign exchange data of USD-JPY for 13 years

    NASA Astrophysics Data System (ADS)

    Mizuno, Takayuki; Kurihara, Shoko; Takayasu, Misako; Takayasu, Hideki

    2003-06-01

    We analyze high-resolution foreign exchange data consisting of 20 million data points of USD-JPY over 13 years to report firm statistical laws in the distributions and correlations of exchange-rate fluctuations. A conditional probability density analysis clearly shows the existence of trend-following movements at a time scale of 8 ticks, about 1 min.
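
    A schematic version of such a conditional analysis is sketched below on synthetic tick data: it conditions the next price change on the sign of the change over the preceding 8 ticks; the injected autocorrelation is an assumption standing in for the trend-following effect reported for USD-JPY.

```python
import numpy as np

# Synthetic "price" series with weak positive autocorrelation in its changes,
# used to illustrate a conditional mean / conditional density analysis.
rng = np.random.default_rng(11)
returns = rng.normal(0, 1, 50_000)
returns[1:] += 0.1 * returns[:-1]           # injected short-range correlation (assumption)
price = np.cumsum(returns)

lag = 8
past = price[lag:-1] - price[:-lag - 1]      # change over the previous 8 ticks
nxt = price[lag + 1:] - price[lag:-1]        # next one-tick change
print("E[next move | past up]   = %.3f" % nxt[past > 0].mean())
print("E[next move | past down] = %.3f" % nxt[past < 0].mean())
```

    A positive conditional mean after upward moves (and negative after downward moves) is the signature of trend following; estimating the full conditional histogram of nxt for each sign of past gives the conditional probability density itself.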

  15. Multivariate Epi-splines and Evolving Function Identification Problems

    DTIC Science & Technology

    2015-04-15

    such extrinsic information as well as observed function and subgradient values often evolve in applications, we establish conditions under which the ... previous study [30] dealt with compact intervals of IR. Splines are intimately tied to optimization problems through their variational theory pioneered ... approximation. Motivated by applications in curve fitting, regression, probability density estimation, variogram computation, financial curve construction

  16. A comprehensive risk analysis of coastal zones in China

    NASA Astrophysics Data System (ADS)

    Wang, Guanghui; Liu, Yijun; Wang, Hongbing; Wang, Xueying

    2014-03-01

    Although coastal zones occupy an important position in world development, they face high risks and vulnerability to natural disasters because of their special locations and their high population density. In order to estimate their capability for crisis response, various models have been established. However, those studies mainly focused on natural factors or conditions, which could not reflect the social vulnerability and regional disparities of coastal zones. Drawing lessons from the experiences of the United Nations Environment Programme (UNEP), this paper presents a comprehensive assessment strategy based on the mechanism of the Risk Matrix Approach (RMA), which includes two aspects that are further composed of five second-level indicators. The first aspect, the probability phase, consists of indicators of economic conditions, social development, and living standards, while the second one, the severity phase, comprises geographic exposure and natural disasters. After weighting all of the above indicators by applying the Analytic Hierarchy Process (AHP) and the Delphi method, the paper uses the comprehensive assessment strategy to analyze the risk indices of 50 coastal cities in China. The analytical results are presented in ESRI ArcGIS 10.1, which generates six different risk maps covering the aspects of economy, society, life, environment, disasters, and an overall assessment of the five areas. Furthermore, the study also investigates the spatial pattern of these risk maps, with detailed discussion and analysis of different risks in coastal cities.

  17. On the radiation mechanism of repeating fast radio bursts

    NASA Astrophysics Data System (ADS)

    Lu, Wenbin; Kumar, Pawan

    2018-06-01

    Recent observations show that fast radio bursts (FRBs) are energetic but probably non-catastrophic events occurring at cosmological distances. The properties of their progenitors are largely unknown in spite of many attempts to determine them using the event rate, duration, and energetics. Understanding the radiation mechanism for FRBs should provide the missing insights regarding their progenitors, which is investigated in this paper. The high brightness temperatures (≳10^35 K) of FRBs mean that the emission process must be coherent. Two general classes of coherent radiation mechanisms are considered - maser and the antenna mechanism. We use the observed properties of the repeater FRB 121102 to constrain the plasma conditions needed for these two mechanisms. We have looked into a wide variety of maser mechanisms operating in either vacuum or plasma and find that none of them can explain the high luminosity of FRBs without invoking unrealistic or fine-tuned plasma conditions. The most favourable mechanism is antenna curvature emission by coherent charge bunches where the burst is powered by magnetic reconnection near the surface of a magnetar (B ≳ 10^14 G). We show that the plasma in the twisted magnetosphere of a magnetar may be clumpy due to two-stream instability. When magnetic reconnection occurs, the pre-existing density clumps may provide charge bunches for the antenna mechanism to operate. This model should be applicable to all FRBs that have multiple outbursts like FRB 121102.

  18. Systematic review: Efficacy and safety of medical marijuana in selected neurologic disorders

    PubMed Central

    Koppel, Barbara S.; Brust, John C.M.; Fife, Terry; Bronstein, Jeff; Youssof, Sarah; Gronseth, Gary; Gloss, David

    2014-01-01

    Objective: To determine the efficacy of medical marijuana in several neurologic conditions. Methods: We performed a systematic review of medical marijuana (1948–November 2013) to address treatment of symptoms of multiple sclerosis (MS), epilepsy, and movement disorders. We graded the studies according to the American Academy of Neurology classification scheme for therapeutic articles. Results: Thirty-four studies met inclusion criteria; 8 were rated as Class I. Conclusions: The following were studied in patients with MS: (1) Spasticity: oral cannabis extract (OCE) is effective, and nabiximols and tetrahydrocannabinol (THC) are probably effective, for reducing patient-centered measures; it is possible both OCE and THC are effective for reducing both patient-centered and objective measures at 1 year. (2) Central pain or painful spasms (including spasticity-related pain, excluding neuropathic pain): OCE is effective; THC and nabiximols are probably effective. (3) Urinary dysfunction: nabiximols is probably effective for reducing bladder voids/day; THC and OCE are probably ineffective for reducing bladder complaints. (4) Tremor: THC and OCE are probably ineffective; nabiximols is possibly ineffective. (5) Other neurologic conditions: OCE is probably ineffective for treating levodopa-induced dyskinesias in patients with Parkinson disease. Oral cannabinoids are of unknown efficacy in non–chorea-related symptoms of Huntington disease, Tourette syndrome, cervical dystonia, and epilepsy. The risks and benefits of medical marijuana should be weighed carefully. Risk of serious adverse psychopathologic effects was nearly 1%. Comparative effectiveness of medical marijuana vs other therapies is unknown for these indications. PMID:24778283

  19. Systematic review: efficacy and safety of medical marijuana in selected neurologic disorders: report of the Guideline Development Subcommittee of the American Academy of Neurology.

    PubMed

    Koppel, Barbara S; Brust, John C M; Fife, Terry; Bronstein, Jeff; Youssof, Sarah; Gronseth, Gary; Gloss, David

    2014-04-29

    To determine the efficacy of medical marijuana in several neurologic conditions. We performed a systematic review of medical marijuana (1948-November 2013) to address treatment of symptoms of multiple sclerosis (MS), epilepsy, and movement disorders. We graded the studies according to the American Academy of Neurology classification scheme for therapeutic articles. Thirty-four studies met inclusion criteria; 8 were rated as Class I. The following were studied in patients with MS: (1) Spasticity: oral cannabis extract (OCE) is effective, and nabiximols and tetrahydrocannabinol (THC) are probably effective, for reducing patient-centered measures; it is possible both OCE and THC are effective for reducing both patient-centered and objective measures at 1 year. (2) Central pain or painful spasms (including spasticity-related pain, excluding neuropathic pain): OCE is effective; THC and nabiximols are probably effective. (3) Urinary dysfunction: nabiximols is probably effective for reducing bladder voids/day; THC and OCE are probably ineffective for reducing bladder complaints. (4) Tremor: THC and OCE are probably ineffective; nabiximols is possibly ineffective. (5) Other neurologic conditions: OCE is probably ineffective for treating levodopa-induced dyskinesias in patients with Parkinson disease. Oral cannabinoids are of unknown efficacy in non-chorea-related symptoms of Huntington disease, Tourette syndrome, cervical dystonia, and epilepsy. The risks and benefits of medical marijuana should be weighed carefully. Risk of serious adverse psychopathologic effects was nearly 1%. Comparative effectiveness of medical marijuana vs other therapies is unknown for these indications.

  20. An Experiment in Voice Data Entry for Imagery Interpretation Reporting.

    DTIC Science & Technology

    1981-03-01

    [OCR fragment of the report vocabulary and a sample report: vocabulary entries 219-223 pair spoken and reference ship-class terms (KOTLIN CLASS, KOTLIN SAM CLASS, SKORY CLASS, RIGA CLASS, GRISHA CLASS, INTERCEPTORS); the sample report for INSTALLATION 0362-V34273 lists 2 probable SKORY class destroyers, 3 confirmed KOTLIN ... class torpedo boats, and 4 confirmed KOTLIN SAM-class destroyers.]

  1. Self-Supervised Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2003-01-01

    Some progress has been made in a continuing effort to develop mathematical models of the behaviors of multi-agent systems known in biology, economics, and sociology (e.g., systems ranging from single or a few biomolecules to many interacting higher organisms). Living systems can be characterized by nonlinear evolution of probability distributions over different possible choices of the next steps in their motions. One of the main challenges in mathematical modeling of living systems is to distinguish between random walks of purely physical origin (for instance, Brownian motions) and those of biological origin. Following a line of reasoning from prior research, it has been assumed, in the present development, that a biological random walk can be represented by a nonlinear mathematical model that represents coupled mental and motor dynamics incorporating the psychological concept of reflection or self-image. The nonlinear dynamics impart the lifelike ability to behave in ways and to exhibit patterns that depart from thermodynamic equilibrium. Reflection or self-image has traditionally been recognized as a basic element of intelligence. The nonlinear mathematical models of the present development are denoted self-supervised dynamical systems. They include (1) equations of classical dynamics, including random components caused by uncertainties in initial conditions and by Langevin forces, coupled with (2) the corresponding Liouville or Fokker-Planck equations that describe the evolutions of probability densities that represent the uncertainties. The coupling is effected by fictitious information-based forces, denoted supervising forces, composed of probability densities and functionals thereof. The equations of classical mechanics represent motor dynamics, that is, dynamics in the traditional sense, signifying Newton's equations of motion. The evolution of the probability densities represents mental dynamics or self-image. Then the interaction between the physical and mental aspects of a monad is implemented by feedback from mental to motor dynamics, as represented by the aforementioned fictitious forces. This feedback is what makes the evolution of probability densities nonlinear. The deviation from linear evolution can be characterized, in a sense, as an expression of free will. It has been demonstrated that probability densities can approach prescribed attractors while exhibiting such patterns as shock waves, solitons, and chaos in probability space. The concept of self-supervised dynamical systems has been considered for application to diverse phenomena, including information-based neural networks, cooperation, competition, deception, games, and control of chaos. In addition, a formal similarity between the mathematical structures of self-supervised dynamical systems and of quantum-mechanical systems has been investigated.
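
    The coupling described above, trajectories driven partly by a force built from the ensemble's own probability density, can be caricatured in a few lines; the particular drift below is an assumed illustrative form, not the author's equations.

```python
import numpy as np

# Ensemble of Langevin particles whose drift includes a feedback term built
# from the current probability density of the ensemble itself (estimated by
# a histogram), so the "self-image" density influences the "motor" dynamics.
rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 10_000)
dt, noise = 0.01, 0.5

for _ in range(500):
    hist, edges = np.histogram(x, bins=100, density=True)
    idx = np.clip(np.searchsorted(edges, x) - 1, 0, 99)
    rho = hist[idx]                               # density value at each particle's position
    drift = -x + 0.5 * (rho - rho.mean())         # assumed form of the density-dependent feedback
    x = x + drift * dt + noise * np.sqrt(dt) * rng.normal(size=x.size)
```

    Because the drift depends on the evolving density, the effective Fokker-Planck equation for this ensemble is nonlinear, which is the qualitative point of the self-supervision feedback.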

  2. The precise time course of lexical activation: MEG measurements of the effects of frequency, probability, and density in lexical decision.

    PubMed

    Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec

    2004-01-01

    Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.

  3. The influence of alewife year-class strength on prey selection and abundance of age-1 Chinook salmon in Lake Michigan

    USGS Publications Warehouse

    Warner, D.M.; Kiley, C.S.; Claramunt, R.M.; Clapp, D.F.

    2008-01-01

    We used growth and diet data from a fishery-independent survey of Chinook salmon Oncorhynchus tshawytscha, acoustic estimates of prey density and biomass, and statistical catch-at-age modeling to study the influence of the year-class strength of alewife Alosa pseudoharengus on the prey selection and abundance of age-1 Chinook salmon in Lake Michigan during the years 1992-1996 and 2001-2005. Alewives age 2 or younger were a large part of age-1 Chinook salmon diets but were not selectively fed upon by age-1 Chinook salmon in most years. Feeding by age-1 Chinook salmon on alewives age 2 or younger became selective as the biomass of alewives in that young age bracket increased, and age-1 Chinook salmon also fed selectively on young bloaters Coregonus hoyi when bloater density was high. Selection of older alewives decreased at high densities of alewives age 2 or younger and, in some cases, high densities of bloater. The weight and condition of age-1 Chinook salmon were not related to age-1 Chinook salmon abundance or prey abundance, but the abundance of age-1 Chinook salmon in year t was positively related to the density of age-0 alewives in year t - 1. Our results suggest that alewife year-class strength exerts a positive bottom-up influence on age-1 Chinook salmon abundance, with prey switching behavior by young Chinook salmon contributing to the stability of the predator-prey relationship between Chinook salmon and alewives. © Copyright by the American Fisheries Society 2008.

  4. Wildfire risk in the wildland-urban interface: A simulation study in northwestern Wisconsin

    USGS Publications Warehouse

    Massada, Avi Bar; Radeloff, Volker C.; Stewart, Susan I.; Hawbaker, Todd J.

    2009-01-01

    The rapid growth of housing in and near the wildland–urban interface (WUI) increases wildfire risk to lives and structures. To reduce fire risk, it is necessary to identify WUI housing areas that are more susceptible to wildfire. This is challenging, because wildfire patterns depend on fire behavior and spread, which in turn depend on ignition locations, weather conditions, the spatial arrangement of fuels, and topography. The goal of our study was to assess wildfire risk to a 60,000 ha WUI area in northwestern Wisconsin while accounting for all of these factors. We conducted 6000 simulations with two dynamic fire models: Fire Area Simulator (FARSITE) and Minimum Travel Time (MTT) in order to map the spatial pattern of burn probabilities. Simulations were run under normal and extreme weather conditions to assess the effect of weather on fire spread, burn probability, and risk to structures. The resulting burn probability maps were intersected with maps of structure locations and land cover types. The simulations revealed clear hotspots of wildfire activity and a large range of wildfire risk to structures in the study area. As expected, the extreme weather conditions yielded higher burn probabilities over the entire landscape, as well as to different land cover classes and individual structures. Moreover, the spatial pattern of risk was significantly different between extreme and normal weather conditions. The results highlight the fact that extreme weather conditions not only produce higher fire risk than normal weather conditions, but also change the fine-scale locations of high risk areas in the landscape, which is of great importance for fire management in WUI areas. In addition, the choice of weather data may limit the potential for comparisons of risk maps for different areas and for extrapolating risk maps to future scenarios where weather conditions are unknown. Our approach to modeling wildfire risk to structures can aid fire risk reduction management activities by identifying areas with elevated wildfire risk and those most vulnerable under extreme weather conditions.

  5. Connectivity in an agricultural landscape as reflected by interpond movements of a freshwater turtle

    USGS Publications Warehouse

    Bowne, D.R.; Bowers, M.A.; Hines, J.E.

    2006-01-01

    Connectivity is a measure of how landscape features facilitate movement and thus is an important factor in species persistence in a fragmented landscape. The scarcity of empirical studies that directly quantify species movement and determine subsequent effects on population density has, however, limited the utility of connectivity measures in conservation planning. We undertook a 4-year study to calculate connectivity based on observed movement rates and movement probabilities for five age-sex classes of painted turtles (Chrysemys picta) inhabiting a pond complex in an agricultural landscape in northern Virginia (U.S.A.). We determined which variables influenced connectivity and the relationship between connectivity and subpopulation density. Interpatch distance and quality of habitat patches influenced connectivity but characteristics of the intervening matrix did not. Adult female turtles were more influenced by the habitat quality of recipient ponds than other age-sex classes. The importance of connectivity on spatial population dynamics was most apparent during a drought. Population density and connectivity were low for one pond in a wet year but dramatically increased as other ponds dried. Connectivity is an important component of species persistence in a heterogeneous landscape and is strongly dependent on the movement behavior of the species. Connectivity may reflect active selection or avoidance of particular habitat patches. The influence of habitat quality on connectivity has often been ignored, but our findings highlight its importance. Conservation planners seeking to incorporate connectivity measures into reserve design should not ignore behavior in favor of purely structural estimates of connectivity.

  6. Estimating loblolly pine size-density trajectories across a range of planting densities

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2013-01-01

    Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...

  7. The electron localization as the information content of the conditional pair density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urbina, Andres S.; Torres, F. Javier; Universidad San Francisco de Quito

    2016-06-28

    In the present work, the information gained by an electron for “knowing” about the position of another electron with the same spin is calculated using the Kullback-Leibler divergence (D_KL) between the same-spin conditional pair probability density and the marginal probability. D_KL is proposed as an electron localization measurement, based on the observation that regions of space with high information gain can be associated with strongly correlated, localized electrons. Taking into consideration the scaling of D_KL with the number of σ-spin electrons of a system (N^σ), the quantity χ = (N^σ − 1) D_KL f_cut is introduced as a general descriptor that allows the quantification of electron localization in space. The factor f_cut is defined such that it goes smoothly to zero for negligible densities. χ is computed for a selection of atomic and molecular systems in order to test its capability to determine the regions in space where electrons are localized. As a general conclusion, χ is able to explain the electron structure of molecules on chemical grounds with a high degree of success and to produce a clear differentiation of the localization of electrons that can be traced to the fluctuation in the average number of electrons in these regions.
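
    The information-gain idea can be illustrated with a small numerical sketch. The snippet below is only a caricature under stated assumptions: two invented one-dimensional densities stand in for the same-spin conditional pair density and the marginal density, D_KL is evaluated on a grid, and the descriptor χ is formed without the f_cut damping factor.

```python
import numpy as np

# Minimal sketch (not the published implementation): the information gain D_KL
# between a same-spin conditional pair density p_cond and the marginal density
# p_marg, both discretized on a 1-D grid and normalized to unit area.
def kl_divergence(p_cond, p_marg, dx):
    p_cond = np.maximum(p_cond, 1e-30)
    p_marg = np.maximum(p_marg, 1e-30)
    return np.sum(p_cond * np.log(p_cond / p_marg)) * dx

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

# Hypothetical densities: a broad marginal versus a conditional density pushed
# away from the reference electron (a crude same-spin "Fermi hole" caricature).
p_marg = np.exp(-0.5 * x**2)
p_marg /= p_marg.sum() * dx
p_cond = np.exp(-0.5 * (x - 1.5)**2 / 0.3)
p_cond /= p_cond.sum() * dx

d_kl = kl_divergence(p_cond, p_marg, dx)
n_sigma = 5                      # assumed number of sigma-spin electrons
chi = (n_sigma - 1) * d_kl       # descriptor chi, omitting the f_cut factor
print(f"D_KL = {d_kl:.3f} nats, chi = {chi:.3f}")
```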

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Versino, Daniele; Bronkhorst, Curt Allan

    The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second-order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution, and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce the accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. The results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.
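
    As a rough illustration of the collocation idea (not the paper's algorithm), the sketch below represents a hypothetical pore-nucleation-pressure PDF by a Chebyshev expansion fitted at collocation points, giving a compact representation that can be evaluated cheaply anywhere in the pressure range. The pressure range and PDF shape are invented.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Illustrative sketch only: approximate an arbitrary pore-nucleation-pressure PDF
# on [p_min, p_max] by a Chebyshev expansion sampled at collocation points.
p_min, p_max = 0.5e9, 5.0e9                        # hypothetical pressure range [Pa]

def nucleation_pdf(p):                             # assumed log-normal-like shape
    mu, sigma = np.log(1.5e9), 0.4
    return np.exp(-0.5 * ((np.log(p) - mu) / sigma) ** 2) / (p * sigma * np.sqrt(2 * np.pi))

deg = 24
k = np.arange(deg + 1)
nodes = np.cos(np.pi * (2 * k + 1) / (2 * (deg + 1)))      # Chebyshev-Gauss nodes on [-1, 1]
pressures = 0.5 * (p_max - p_min) * (nodes + 1.0) + p_min  # map nodes to physical range
coeffs = C.chebfit(nodes, nucleation_pdf(pressures), deg)  # collocation fit

# Evaluate the compact representation anywhere in the domain and compare to the PDF.
p_eval = np.linspace(p_min, p_max, 5)
t_eval = 2.0 * (p_eval - p_min) / (p_max - p_min) - 1.0
print(np.c_[p_eval, C.chebval(t_eval, coeffs), nucleation_pdf(p_eval)])
```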

  9. Statistics of velocity gradients in two-dimensional Navier-Stokes and ocean turbulence.

    PubMed

    Schorghofer, Norbert; Gille, Sarah T

    2002-02-01

    Probability density functions and conditional averages of velocity gradients derived from upper ocean observations are compared with results from forced simulations of the two-dimensional Navier-Stokes equations. Ocean data are derived from TOPEX satellite altimeter measurements. The simulations use rapid forcing on large scales, characteristic of surface winds. The probability distributions of transverse velocity derivatives from the ocean observations agree with the forced simulations, although they differ from unforced simulations reported elsewhere. The distribution and cross correlation of velocity derivatives provide clear evidence that large coherent eddies play only a minor role in generating the observed statistics.

  10. The Effect of Incremental Changes in Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…

  11. A climatology of polar stratospheric cloud composition between 2002 and 2012 based on MIPAS/Envisat observations

    NASA Astrophysics Data System (ADS)

    Spang, Reinhold; Hoffmann, Lars; Müller, Rolf; Grooß, Jens-Uwe; Tritscher, Ines; Höpfner, Michael; Pitts, Michael; Orr, Andrew; Riese, Martin

    2018-04-01

    The Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) instrument aboard the European Space Agency (ESA) Envisat satellite operated from July 2002 to April 2012. The infrared limb emission measurements provide a unique dataset of day and night observations of polar stratospheric clouds (PSCs) over both poles. A recent classification method for PSC types in infrared (IR) limb spectra using spectral measurements in different atmospheric window regions has been applied to the complete mission period of MIPAS. The method uses a simple probabilistic classifier based on Bayes' theorem with a strong independence assumption, applied to a combination of a well-established two-colour ratio method and multiple 2-D probability density functions of brightness temperature differences. The Bayesian classifier distinguishes between solid particles of ice, nitric acid trihydrate (NAT), and liquid droplets of supercooled ternary solution (STS), as well as mixed types. A climatology of MIPAS PSC occurrence and specific PSC classes has been compiled. Comparisons with results from the classification scheme of the spaceborne lidar Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on the Cloud-Aerosol-Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite show excellent correspondence in the spatial and temporal evolution of the area of PSC coverage (APSC), even for each PSC class. Probability density functions of the PSC temperature, retrieved for each class with respect to the equilibrium temperature of ice and based on coincident temperatures from meteorological reanalyses, are in accordance with the microphysical knowledge of the formation processes with respect to temperature for all three PSC types. This paper presents an unprecedented pole-covering day- and nighttime climatology of the PSC distributions and their composition of different particle types. The dataset allows analyses of the temporal and spatial development of the PSC formation process over multiple winters. At first view, a more general comparison of APSC and AICE retrieved from the observations and from the existence temperatures for NAT and ice particles based on the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis temperature data shows the high potential of the climatology for the validation and improvement of PSC schemes in chemical transport and chemistry-climate models.
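
    The classifier itself is conceptually simple. The sketch below is a hedged illustration of a Bayes classifier with a naive independence assumption applied to two brightness-temperature-difference features; the class names follow the abstract, but the feature values and class parameters are invented, not MIPAS statistics.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hedged sketch of the idea only: a Bayes classifier with a naive independence
# assumption applied to brightness-temperature-difference (BTD) features. The
# feature values and per-class parameters below are invented for illustration.
rng = np.random.default_rng(1)
classes = ["ice", "NAT", "STS"]
means = {"ice": (-4.0, 2.0), "NAT": (0.5, 4.0), "STS": (2.0, 0.5)}   # hypothetical BTDs [K]

X = np.vstack([rng.normal(means[c], 1.0, size=(300, 2)) for c in classes])
y = np.repeat(classes, 300)

clf = GaussianNB().fit(X, y)            # class-conditional Gaussians + Bayes' rule
spectrum = np.array([[-3.2, 2.4]])      # feature pair from one new limb spectrum
print(dict(zip(clf.classes_, clf.predict_proba(spectrum)[0])))
```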

  12. A comparison between univariate probabilistic and multivariate (logistic regression) methods for landslide susceptibility analysis: the example of the Febbraro valley (Northern Alps, Italy)

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Apuani, T.; Felletti, F.

    2009-04-01

    The aim of this paper is to compare the results of two statistical methods for landslide susceptibility analysis: 1) a univariate probabilistic method based on a landslide susceptibility index, and 2) a multivariate method (logistic regression). The study area is the Febbraro valley, located in the central Italian Alps, where different types of metamorphic rocks crop out. On the eastern part of the studied basin a Quaternary cover, represented by colluvial and, secondarily, by glacial deposits, is dominant. In this study 110 earth flows, mainly located in the NE portion of the catchment, were analyzed. They involve only the colluvial deposits, and their extension mainly ranges from 36 to 3173 m2. Both statistical methods require establishing a spatial database, constructed using a Geographical Information System (GIS), in which each landslide is described by several parameters corresponding to the values at the central point of its main scarp. Based on a bibliographic review, a total of 15 predisposing factors were utilized. The width of the intervals into which the maps of the predisposing factors were reclassified was defined assuming constant intervals for: elevation (100 m), slope (5°), solar radiation (0.1 MJ/cm2/year), profile curvature (1.2 1/m), tangential curvature (2.2 1/m), drainage density (0.5), and lineament density (0.00126). For the other parameters, the results of probability-probability plot analysis and the statistical indices of the landslide sites were used; in particular, slope length (0 ÷ 2, 2 ÷ 5, 5 ÷ 10, 10 ÷ 20, 20 ÷ 35, 35 ÷ 260), accumulation flow (0 ÷ 1, 1 ÷ 2, 2 ÷ 5, 5 ÷ 12, 12 ÷ 60, 60 ÷ 27265), Topographic Wetness Index (0 ÷ 0.74, 0.74 ÷ 1.94, 1.94 ÷ 2.62, 2.62 ÷ 3.48, 3.48 ÷ 6.00, 6.00 ÷ 9.44), and Stream Power Index (0 ÷ 0.64, 0.64 ÷ 1.28, 1.28 ÷ 1.81, 1.81 ÷ 4.20, 4.20 ÷ 9.40). Geological and land use maps were also used, considering geological and land use properties as categorical variables. Applying the univariate probabilistic method, the Landslide Susceptibility Index (LSI) is defined as the sum of the ratios Ra/Rb calculated for each predisposing factor, where Ra is the ratio between the number of pixels in a class and the total number of pixels in the study area, and Rb is the ratio between the number of landslides and the number of pixels in that class interval. From the analysis of the Ra/Rb ratios, the relationships between landslide occurrence and predisposing factors were defined. The LSI equation was then used in GIS to produce the landslide susceptibility maps. The multivariate method for landslide susceptibility analysis, based on logistic regression, was performed starting from the density maps of the predisposing factors, calculated with the intervals defined above using the expression Rb/Rbtot, where Rbtot is the sum of all Rb values. Using stepwise forward algorithms, the logistic regression was performed in two successive steps: first, a univariate logistic regression was used to choose the most significant predisposing factors; then the multivariate logistic regression was performed. The univariate regression highlighted the importance of the following factors: elevation, accumulation flow, drainage density, lineament density, geology, and land use. When the multivariate regression was applied, the number of controlling factors was reduced, with the geological properties dropped.
The resulting final susceptibility equation is P = 1 / (1 + exp(-(6.46 - 22.34*elevation - 5.33*accumulation flow - 7.99*drainage density - 4.47*lineament density - 17.31*land use))), and this equation was used to obtain the susceptibility maps. To easily compare the results of the two methodologies, the susceptibility maps were reclassified into five susceptibility intervals (very high, high, moderate, low and very low) using natural breaks. The maps were then validated using two cumulative distribution curves, one related to the landslides (number of landslides in each susceptibility class) and one to the basin (number of pixels covering each class). Comparing the curves for each method, it emerges that the two approaches (univariate and multivariate) are both appropriate, providing acceptable results. In both maps the distribution of high susceptibility conditions is mainly localized on the left slope of the catchment, in agreement with the field evidence. The comparison between the methods was obtained by subtraction of the two maps. This operation shows that about 40% of the basin is assigned the same susceptibility class by both methods. In general, the univariate probabilistic method tends to overestimate the areal extension of the high susceptibility class with respect to the maps obtained by the logistic regression method.
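
    For readers who want to reproduce the mapping step, the sketch below simply evaluates the reported logistic equation for hypothetical raster cells; the predictors are assumed to be the density-map values (Rb/Rbtot per class), and the example inputs are invented.

```python
import numpy as np

# Sketch of evaluating the fitted susceptibility equation reported above. The
# predictors are assumed to be density-map values (Rb / Rbtot per class), so each
# input lies roughly in [0, 1]; the example cell values are invented.
def landslide_susceptibility(elevation, accumulation_flow, drainage_density,
                             lineament_density, land_use):
    z = (6.46 - 22.34 * elevation - 5.33 * accumulation_flow
         - 7.99 * drainage_density - 4.47 * lineament_density - 17.31 * land_use)
    return 1.0 / (1.0 + np.exp(-z))

# One hypothetical raster cell per row; columns follow the argument order above.
cells = np.array([
    [0.05, 0.10, 0.08, 0.02, 0.04],
    [0.30, 0.25, 0.20, 0.15, 0.30],
])
print(landslide_susceptibility(*cells.T))
```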

  13. Site specific passive acoustic detection and densities of humpback whale calls off the coast of California

    NASA Astrophysics Data System (ADS)

    Helble, Tyler Adam

    Passive acoustic monitoring of marine mammal calls is an increasingly important method for assessing population numbers, distribution, and behavior. Automated methods are needed to aid in the analyses of the recorded data. When a mammal vocalizes in the marine environment, the received signal is a filtered version of the original waveform emitted by the marine mammal. The waveform is reduced in amplitude and distorted due to propagation effects that are influenced by the bathymetry and environment. It is important to account for these effects to determine a site-specific probability of detection for marine mammal calls in a given study area. Knowledge of that probability function over a range of environmental and ocean noise conditions allows vocalization statistics from recordings of single, fixed, omnidirectional sensors to be compared across sensors and at the same sensor over time with less bias and uncertainty than direct comparison of the raw statistics. This dissertation focuses both on the development of new tools needed to automatically detect humpback whale vocalizations from single fixed omnidirectional sensors and on the determination of the site-specific probability of detection for monitoring sites off the coast of California. Using these tools, detected humpback calls are "calibrated" for environmental properties using the site-specific probability of detection values and presented as call densities (calls per square kilometer per unit time). A two-year monitoring effort using these calibrated call densities reveals important biological and ecological information on migrating humpback whales off the coast of California. Call density trends are compared between the monitoring sites and at the same monitoring site over time. Call densities also are compared to several natural and human-influenced variables including season, time of day, lunar illumination, and ocean noise. The results reveal substantial differences in call densities between the two sites which were not noticeable using uncorrected (raw) call counts. Additionally, a Lombard effect was observed for humpback whale vocalizations in response to increasing ocean noise. The work presented in this thesis develops techniques to accurately measure marine mammal abundances from passive acoustic sensors.
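
    The calibration step reduces to a simple correction. The sketch below illustrates the idea with invented numbers: raw call counts at a fixed sensor are converted to call densities by dividing by the site-specific probability of detection, the monitored area, and the recording time; the dissertation derives the detection probability from propagation modelling, which is not reproduced here.

```python
# Minimal sketch of the calibration idea: convert raw call counts at a fixed sensor
# into call densities using a site-specific probability of detection. All values
# below are hypothetical placeholders.
def call_density(n_detected, p_detection, monitored_area_km2, hours):
    """Calls per km^2 per hour, corrected for imperfect detection."""
    return n_detected / (p_detection * monitored_area_km2 * hours)

# Two sites with similar raw counts but very different detection probabilities
# yield very different calibrated call densities.
site_a = call_density(n_detected=420, p_detection=0.35, monitored_area_km2=900, hours=24)
site_b = call_density(n_detected=410, p_detection=0.70, monitored_area_km2=900, hours=24)
print(f"site A: {site_a:.4f}, site B: {site_b:.4f} calls / km^2 / h")
```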

  14. Partitioning loss rates of early juvenile blue crabs from seagrass habitats into mortality and emigration

    USGS Publications Warehouse

    Etherington, L.L.; Eggleston, D.B.; Stockhausen, W.T.

    2003-01-01

    Determining how post-settlement processes modify patterns of settlement is vital in understanding the spatial and temporal patterns of recruitment variability of species with open populations. Generally, either single components of post-settlement loss (mortality or emigration) are examined at a time, or else the total loss is examined without discrimination of mortality and emigration components. The role of mortality in the loss of early juvenile blue crabs, Callinectes sapidus, has been addressed in a few studies; however, the relative contribution of emigration has received little attention. We conducted mark-recapture experiments to examine the relative contribution of mortality and emigration to total loss rates of early juvenile blue crabs from seagrass habitats. Loss was partitioned into emigration and mortality components using a modified version of Jackson's (1939) square-within-a-square method. The field experiments assessed the effects of two size classes of early instars (J1-J2, J3-J5), two densities of juveniles (low: 16 m-2, high: 64 m-2), and time of day (day, night) on loss rates. In general, total loss rates of experimental juveniles and colonization rates by unmarked juveniles were extremely high (range = 10-57 crabs m-2/6 h and 17-51 crabs m-2/6 h, for loss and colonization, respectively). Total loss rates were higher at night than during the day, suggesting that juveniles (or potentially their predators) exhibit increased nocturnal activity. While colonization rates did not differ by time of day, J3-J5 juveniles demonstrated higher rates of colonization than J1-J2 crabs. Overall, there was high variability in both mortality and emigration, particularly for emigration. Average probabilities of mortality across all treatment combinations ranged from 0.25-0.67/6 h, while probabilities of emigration ranged from 0.29-0.72/6 h. Although mean mortality rates were greater than emigration rates in most treatments, the proportion of experimental trials in which crab loss from seagrass due to mortality was greater than losses due to emigration was not significantly different from 50%. Thus, mortality and emigration appear to contribute equally to juvenile loss in seagrass habitats. The difference in magnitude (absolute amount of loss) between mean emigration and mean mortality varied between size classes, such that differences between emigration and mortality were relatively small for J1-J2 crabs, but much larger for J3-J5 crabs. Further, mortality rates were density-dependent for J3-J5 juvenile stages but not for J1-J2 crabs, whereas emigration was inversely density-dependent among J3-J5 stages but not for J1-J2 instars. The co-dependency of mortality and emigration suggests that the loss term (emigration or mortality) which has the relatively stronger contribution to total loss may dictate the patterns of loss under different conditions. For older juveniles (J3-J5), emigration may only have a large impact on juvenile loss where densities are low, since the contribution of mortality appears to be much greater than emigration at high densities. The size-specific pattern of density-dependent mortality supports the notion of an ontogenetic habitat shift by early juvenile blue crabs from seagrass to unvegetated habitats, since larger individuals may experience increased mortality at high densities within seagrass beds. 
Qualitative comparisons between this study and a concurrent study of planktonic emigration of J1-J5 blue crabs (Blackmon and Eggleston, 2001) suggest that benthic emigration among J1-J2 blue crabs was greater than planktonic emigration; for J3-J5 stages, benthic and planktonic emigration were nearly equal. This study demonstrates the potentially large role of emigration in recruitment processes and patterns of early juvenile blue crabs, and illustrates how juvenile size, juvenile density, and time of day can affect mortality and emigration rates as well as total loss and colonization. The components of po

  15. Generalized quantum theory of recollapsing homogeneous cosmologies

    NASA Astrophysics Data System (ADS)

    Craig, David; Hartle, James B.

    2004-06-01

    A sum-over-histories generalized quantum theory is developed for homogeneous minisuperspace type A Bianchi cosmological models, focusing on the particular example of the classically recollapsing Bianchi type-IX universe. The decoherence functional for such universes is exhibited. We show how the probabilities of decoherent sets of alternative, coarse-grained histories of these model universes can be calculated. We consider in particular the probabilities for classical evolution defined by a suitable coarse graining. For a restricted class of initial conditions and coarse grainings we exhibit the approximate decoherence of alternative histories in which the universe behaves classically and those in which it does not. For these situations we show that the probability is near unity for the universe to recontract classically if it expands classically. We also determine the relative probabilities of quasiclassical trajectories for initial states of WKB form, recovering for such states a precise form of the familiar heuristic "J·dΣ" rule of quantum cosmology, as well as a generalization of this rule to generic initial states.

  16. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
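
    A minimal sketch of the robust-location ingredient is given below (it is not the paper's full estimator): the L1-median is computed with Weiszfeld's algorithm and compared with the sample mean on contaminated two-dimensional data, showing why it is attractive as a building block for robust density estimation. The data and contamination level are invented.

```python
import numpy as np

# Simplified sketch only: compute the L1-median (geometric median) via Weiszfeld's
# algorithm and contrast it with the sample mean on data with gross outliers.
def l1_median(X, n_iter=200, tol=1e-9):
    m = np.median(X, axis=0)                       # robust starting point
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(X - m, axis=1), 1e-12)
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            break
        m = m_new
    return m

rng = np.random.default_rng(2)
clean = rng.normal(0.0, 1.0, size=(500, 2))
outliers = rng.normal(25.0, 1.0, size=(25, 2))     # ~5% gross contamination
X = np.vstack([clean, outliers])

print("sample mean:", X.mean(axis=0))              # dragged toward the outliers
print("L1-median:  ", l1_median(X))                # stays near the clean cluster
```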

  17. Stochastic dynamics in a two-dimensional oscillator near a saddle-node bifurcation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inchiosa, M. E.; In, V.; Bulsara, A. R.

    We study the oscillator equations describing a particular class of nonlinear amplifier, exemplified in this work by a two-junction superconducting quantum interference device. This class of dynamic system is described by a potential energy function that can admit minima (corresponding to stable solutions of the dynamic equations), or "running states" wherein the system is biased so that the potential minima disappear and the solutions display spontaneous oscillations. Just beyond the onset of the spontaneous oscillations, the system is known to show significantly enhanced sensitivity to very weak magnetic signals. The global phase space structure allows us to apply a center manifold technique to approximate analytically the oscillatory behavior just past the (saddle-node) bifurcation and compute the oscillation period, which obeys standard scaling laws. In this regime, the dynamics can be represented by an "integrate-fire" model drawn from the computational neuroscience repertoire; in fact, we obtain an "interspike interval" probability density function and an associated power spectral density (computed via renewal theory) that agree very well with the results obtained via numerical simulations. Notably, driving the system with one or more time sinusoids produces a noise-lowering injection locking effect and/or heterodyning.
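
    The integrate-fire reduction can be mimicked numerically. The sketch below, with invented parameters, integrates many independent noisy drift processes to a threshold and builds the interspike-interval probability density from the first-passage times; it illustrates the quantity discussed above rather than the SQUID dynamics themselves.

```python
import numpy as np

# Toy integrate-and-fire sketch (parameters invented): many independent units
# integrate a noisy drift until a threshold crossing, and the first-passage times
# form the interspike-interval (ISI) probability density.
rng = np.random.default_rng(3)
n_units, n_steps, dt = 5000, 4000, 1e-3
drift, noise, threshold = 1.0, 0.4, 1.0

v = np.zeros(n_units)
first_passage = np.full(n_units, np.nan)
for step in range(1, n_steps + 1):
    active = np.isnan(first_passage)
    v[active] += drift * dt + noise * np.sqrt(dt) * rng.normal(size=active.sum())
    crossed = active & (v >= threshold)
    first_passage[crossed] = step * dt

isi = first_passage[~np.isnan(first_passage)]
density, edges = np.histogram(isi, bins=80, density=True)
print(f"mean ISI = {isi.mean():.3f} s; ISI density peaks near "
      f"{edges[np.argmax(density)]:.3f} s")
```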

  18. The use of space and high altitude aerial photography to classify forest land and to detect forest disturbances

    NASA Technical Reports Server (NTRS)

    Aldrich, R. C.; Greentree, W. J.; Heller, R. C.; Norick, N. X.

    1970-01-01

    In October 1969, an investigation was begun near Atlanta, Georgia, to explore the possibilities of developing predictors for forest land and stand condition classifications using space photography. It has been found that forest area can be predicted with reasonable accuracy on space photographs using ocular techniques. Infrared color film is the best single multiband sensor for this purpose. Using the Apollo 9 infrared color photographs taken in March 1969 photointerpreters were able to predict forest area for small units consistently within 5 to 10 percent of ground truth. Approximately 5,000 density data points were recorded for 14 scan lines selected at random from five study blocks. The mean densities and standard deviations were computed for 13 separate land use classes. The results indicate that forest area cannot be separated from other land uses with a high degree of accuracy using optical film density alone. If, however, densities derived by introducing red, green, and blue cutoff filters in the optical system of the microdensitometer are combined with their differences and their ratios in regression analysis techniques, there is a good possibility of discriminating forest from all other classes.

  19. Wind-tunnel evaluation of an advanced main-rotor blade design for a utility-class helicopter

    NASA Technical Reports Server (NTRS)

    Yeager, William T., Jr.; Mantay, Wayne R.; Wilbur, Matthew L.; Cramer, Robert G., Jr.; Singleton, Jeffrey D.

    1987-01-01

    An investigation was conducted in the Langley Transonic Dynamics Tunnel to evaluate differences between an existing utility-class main-rotor blade and an advanced-design main-rotor blade. The two rotor blade designs were compared with regard to rotor performance, oscillatory pitch-link loads, and 4-per-rev vertical fixed-system loads. Tests were conducted in hover and over a range of simulated full-scale gross weights and density altitude conditions at advance ratios from 0.15 to 0.40. Results indicate that the advanced blade design offers performance improvements over the baseline blade in both hover and forward flight. Pitch-link oscillatory loads for the baseline rotor were more sensitive to the test conditions than those of the advanced rotor. The 4-per-rev vertical fixed-system load produced by the advanced blade was larger than that produced by the baseline blade at all test conditions.

  20. Short-term droughts forecast using Markov chain model in Victoria, Australia

    NASA Astrophysics Data System (ADS)

    Rahmat, Siti Nazahiyah; Jayasuriya, Niranjali; Bhuiyan, Muhammed A.

    2017-07-01

    A comprehensive risk management strategy for dealing with drought should include both short-term and long-term planning. The objective of this paper is to present an early warning method to forecast drought using the Standardised Precipitation Index (SPI) and a non-homogeneous Markov chain model. A model such as this is useful for short-term planning. The developed method has been used to forecast droughts at a number of meteorological monitoring stations that have been regionalised into six (6) homogeneous clusters with similar drought characteristics based on SPI. The non-homogeneous Markov chain model was used to estimate drought probabilities and drought predictions up to 3 months ahead. The drought severity classes defined using the SPI were computed at a 12-month time scale. The drought probabilities and the predictions were computed for six clusters that depict similar drought characteristics in Victoria, Australia. Overall, the drought severity class predicted was quite similar for all the clusters, with the non-drought class probabilities ranging from 49 to 57 %. For all clusters, the near normal class had a probability of occurrence varying from 27 to 38 %. For the more moderate and severe classes, the probabilities ranged from 2 to 13 % and 1 to 3 %, respectively. The developed model predicted drought situations 1 month ahead reasonably well. However, 2- and 3-month-ahead predictions should be used with caution until the models are developed further.
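
    The forecasting step of such a model is a matrix-vector recursion. The sketch below propagates a current drought class one to three months ahead using month-specific transition matrices; the class labels follow the SPI convention, but the matrices and resulting probabilities are hypothetical, not the fitted Victorian values.

```python
import numpy as np

# Sketch of the forecasting step only: given monthly transition matrices between
# SPI drought classes, propagate the current class 1-3 months ahead. The matrices
# are invented placeholders (rows sum to 1).
classes = ["non-drought", "near normal", "moderate", "severe"]

P_month = {                                           # one matrix per calendar month
    "Jan": np.array([[0.75, 0.18, 0.05, 0.02],
                     [0.30, 0.50, 0.15, 0.05],
                     [0.10, 0.30, 0.45, 0.15],
                     [0.05, 0.15, 0.30, 0.50]]),
    "Feb": np.array([[0.70, 0.20, 0.07, 0.03],
                     [0.25, 0.50, 0.18, 0.07],
                     [0.08, 0.27, 0.45, 0.20],
                     [0.04, 0.12, 0.29, 0.55]]),
    "Mar": np.array([[0.72, 0.19, 0.06, 0.03],
                     [0.28, 0.49, 0.16, 0.07],
                     [0.09, 0.28, 0.44, 0.19],
                     [0.05, 0.13, 0.28, 0.54]]),
}

state = np.array([0.0, 1.0, 0.0, 0.0])                # currently in the near-normal class
for month in ["Jan", "Feb", "Mar"]:
    state = state @ P_month[month]                    # non-homogeneous: month-specific matrix
    print(month, dict(zip(classes, np.round(state, 3))))
```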

  1. The statistics of peaks of Gaussian random fields. [cosmological density fluctuations

    NASA Technical Reports Server (NTRS)

    Bardeen, J. M.; Bond, J. R.; Kaiser, N.; Szalay, A. S.

    1986-01-01

    A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima is examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of the heights of the maxima, and the average density of 'upcrossing' points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima.

  2. A structured population model with diffusion in structure space.

    PubMed

    Pugliese, Andrea; Milner, Fabio

    2018-05-09

    A structured population model is described and analyzed, in which individual dynamics is stochastic. The model consists of a PDE of advection-diffusion type in the structure variable. The population may represent, for example, the density of infected individuals structured by pathogen density x, [Formula: see text]. The individuals with density [Formula: see text] are not infected, but rather susceptible or recovered. Their dynamics is described by an ODE with a source term that is the exact flux from the diffusion and advection as [Formula: see text]. Infection/reinfection is then modeled by moving a fraction of these individuals into the infected class, distributing them over the structure variable through a probability density function. Existence of a global-in-time solution is proven, as well as a classical bifurcation result about equilibrium solutions: a net reproduction number [Formula: see text] is defined that separates the case in which only the trivial equilibrium exists when [Formula: see text] from the existence of another, nontrivial, equilibrium when [Formula: see text]. Numerical simulation results are provided to show the stabilization towards the positive equilibrium when [Formula: see text] and towards the trivial one when [Formula: see text], a result that is not proven analytically. Simulations are also provided to show the Allee effect that helps boost population sizes at low densities.

  3. The Influence of Part-Word Phonotactic Probability/Neighborhood Density on Word Learning by Preschool Children Varying in Expressive Vocabulary

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Hoover, Jill R.

    2011-01-01

    The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (ages 2;11-6;0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…

  4. Generation of multivariate near shore extreme wave conditions based on an extreme value copula for offshore boundary conditions.

    NASA Astrophysics Data System (ADS)

    Leyssen, Gert; Mercelis, Peter; De Schoesitter, Philippe; Blanckaert, Joris

    2013-04-01

    Near shore extreme wave conditions, used as input for numerical wave agitation simulations and for the dimensioning of coastal defense structures, need to be determined at a harbour entrance situated at the French North Sea coast. To obtain significant wave heights, the numerical wave model SWAN has been used. A multivariate approach was used to account for the joint probabilities. The considered variables are wind velocity and direction, water level, and significant offshore wave height and wave period. In a first step a univariate extreme value distribution has been determined for the main variables. By means of a technique based on the mean excess function, an appropriate member of the GPD family is selected. An optimal threshold for peak-over-threshold selection is determined by maximum likelihood optimization. Next, the joint dependency structure of the primary random variables is modeled by an extreme value copula. Eventually the multivariate domain of variables was stratified into different classes, each representing a combination of variable quantiles with a joint probability, which are used for model simulation. The main variable is the wind velocity, as in the area of concern extreme wave conditions are wind driven. The analysis is repeated for 9 different wind directions. The secondary variable is water level. In shallow waters extreme waves are directly affected by water depth; hence the joint probability of occurrence for water level and wave height is of major importance for the design of coastal defense structures. Wind velocity and water levels are only dependent for some wind directions (wind-induced setup). Dependent directions are detected using Kendall and Spearman tests and appeared to be those with the longest fetch. For these directions, the wind velocity and water level extreme value distributions are multivariately linked through a Gumbel copula. These distributions are stratified into classes for which the frequency of occurrence can be calculated. For the remaining directions the univariate extreme wind velocity distribution is stratified, each class combined with 5 high water levels. The wave height at the model boundaries was taken into account by a regression with the extreme wind velocity at the offshore location. The regression line and the 95% confidence limits were combined with each class. Eventually the wave period is computed by a further regression with the significant wave height. In this way 1103 synthetic events were selected and simulated with the SWAN wave model, for each of which a frequency of occurrence is calculated. Hence near shore significant wave heights are obtained with corresponding frequencies. The statistical distribution of the near shore wave heights is determined by sorting the model results in descending order and accumulating the corresponding frequencies. This approach allows determination of conditional return periods. For example, for the imposed univariate design return periods of 100 years for significant wave height and 30 years for water level, the joint return period for a simultaneous exceedance of both conditions can be computed as 4000 years. Hence, this methodology allows for a probabilistic design of coastal defense structures.
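
    Two of the ingredients above are easy to sketch in code. The example below, on synthetic data and with invented parameters, (1) fits a generalized Pareto distribution to peaks over a threshold and (2) uses a Gumbel copula to turn marginal non-exceedance probabilities into a joint exceedance probability; it is illustrative only, not the study's calibration.

```python
import numpy as np
from scipy import stats

# Hedged sketch of two ingredients: (1) a peaks-over-threshold fit of a generalized
# Pareto distribution (GPD) to one variable, and (2) a Gumbel copula combining two
# marginal non-exceedance probabilities. All numbers are synthetic placeholders.
rng = np.random.default_rng(4)

wind = rng.gumbel(loc=12.0, scale=3.0, size=20 * 365)        # synthetic daily wind speeds
threshold = np.quantile(wind, 0.95)                          # assumed POT threshold choice
excesses = wind[wind > threshold] - threshold
shape, _, scale = stats.genpareto.fit(excesses, floc=0.0)    # ML fit of the GPD tail
print(f"GPD shape = {shape:.3f}, scale = {scale:.3f}, threshold = {threshold:.2f}")

def gumbel_copula(u, v, theta):
    """C(u, v) for the Gumbel (extreme-value) copula, theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

# Joint probability that both variables exceed their marginal design levels, given
# marginal non-exceedance probabilities u and v and an assumed dependence theta.
u, v, theta = 0.99, 0.967, 1.8
p_joint_exceed = 1.0 - u - v + gumbel_copula(u, v, theta)
print(f"joint exceedance probability per event = {p_joint_exceed:.5f}")
```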

  5. Latent typologies of posttraumatic stress disorder in World Trade Center responders.

    PubMed

    Horn, Sarah R; Pietrzak, Robert H; Schechter, Clyde; Bromet, Evelyn J; Katz, Craig L; Reissman, Dori B; Kotov, Roman; Crane, Michael; Harrison, Denise J; Herbert, Robin; Luft, Benjamin J; Moline, Jacqueline M; Stellman, Jeanne M; Udasin, Iris G; Landrigan, Philip J; Zvolensky, Michael J; Southwick, Steven M; Feder, Adriana

    2016-12-01

    Posttraumatic stress disorder (PTSD) is a debilitating and often chronic psychiatric disorder. Following the 9/11/2001 World Trade Center (WTC) attacks, thousands of individuals were involved in rescue, recovery and clean-up efforts. While a growing body of literature has documented the prevalence and correlates of PTSD in WTC responders, no study has evaluated predominant typologies of PTSD in this population. Participants were 4352 WTC responders with probable WTC-related DSM-IV PTSD. Latent class analyses were conducted to identify predominant typologies of PTSD symptoms and associated correlates. A 3-class solution provided the optimal representation of latent PTSD symptom typologies. The first class, labeled "High-Symptom (n = 1,973, 45.3%)," was characterized by high probabilities of all PTSD symptoms. The second class, "Dysphoric (n = 1,371, 31.5%)," exhibited relatively high probabilities of emotional numbing and dysphoric arousal (e.g., sleep disturbance). The third class, "Threat (n = 1,008, 23.2%)," was characterized by high probabilities of re-experiencing, avoidance and anxious arousal (e.g., hypervigilance). Compared to the Threat class, the Dysphoric class reported a greater number of life stressors after 9/11/2001 (OR = 1.06). The High-Symptom class was more likely than the Threat class to have a positive psychiatric history before 9/11/2001 (OR = 1.7) and reported a greater number of life stressors after 9/11/2001 (OR = 1.1). The High-Symptom class was more likely than the Dysphoric class, which was more likely than the Threat class, to screen positive for depression (83% > 74% > 53%, respectively), and to report greater functional impairment (High-Symptom > Dysphoric [Cohen d = 0.19], Dysphoric > Threat [Cohen d = 0.24]). These results may help inform assessment, risk stratification, and treatment approaches for PTSD in WTC and disaster responders. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Forcasting Shortleaf Pine Seed Crops in the Ouachita Mountains

    Treesearch

    Michael G. Shelton; Robert F. Wittwer

    2004-01-01

    We field tested a cone-rating system to forecast seed crops from 1993 to 1996 in 28 shortleaf pine (Pinus echinata Mill.) stands, which represented a wide range of stand conditions. Sample trees were visually assigned to one of three cone-density classes based on cone spacing, occurrence of cones in clusters, and distribution of cones within the...

  7. Reversing Period-Doubling Bifurcations in Models of Population Interactions Using Constant Stocking or Harvesting

    Treesearch

    James F. Selgrade; James H. Roberds

    1998-01-01

    This study considers a general class of two-dimensional, discrete population models where each per capita transition function (fitness) depends on a linear combination of the densities of the interacting populations. The fitness functions are either monotone decreasing functions (pioneer fitnesses) or one-humped functions (climax fitnesses). Conditions are derived...

  8. The Relationship between Housing and Children's Literacy Achievement: Implications for Supporting Vulnerable Children

    ERIC Educational Resources Information Center

    Pillay, Jace

    2017-01-01

    This article examines the relationship between housing, a critical aspect of socio-economic conditions, and literacy achievement of children from a school in a high density suburb in South Africa. Data was collected through a quantitative survey that was administered to learners (N = 160) from four Grade Five classes. The survey included five…

  9. Scale-invariant puddles in graphene: Geometric properties of electron-hole distribution at the Dirac point.

    PubMed

    Najafi, M N; Nezhadhaghighi, M Ghasemi

    2017-03-01

    We characterize the carrier density profile of the ground state of graphene in the presence of particle-particle interaction and random charged impurities at zero gate voltage. We provide a detailed analysis of the resulting spatially inhomogeneous electron gas, taking into account the particle-particle interaction and the remote Coulomb disorder on an equal footing within the Thomas-Fermi-Dirac theory. We present some general features of the carrier density probability measure of the graphene sheet. We also show that, when viewed as a random surface, the electron-hole puddles at zero chemical potential show peculiar self-similar statistical properties. Although the disorder potential is chosen to be Gaussian, we show that the charge field is non-Gaussian with unusual Kondev relations, which can be regarded as a new class of two-dimensional random-field surfaces. Using Schramm-Loewner evolution (SLE), we numerically demonstrate that the ungated graphene has conformal invariance and the random zero-charge density contours are SLE_{κ} with κ=1.8±0.2, consistent with c=-3 conformal field theory.

  10. Design rules for quasi-linear nonlinear optical structures

    NASA Astrophysics Data System (ADS)

    Lytel, Richard; Mossman, Sean M.; Kuzyk, Mark G.

    2015-09-01

    The maximization of the intrinsic optical nonlinearities of quantum structures for ultrafast applications requires a spectrum scaling as the square of the energy eigenstate number or faster. This is a necessary condition for an intrinsic response approaching the fundamental limits. A second condition is a design generating eigenstates whose ground and lowest excited state probability densities are spatially separated to produce large differences in dipole moments while maintaining a reasonable spatial overlap to produce large off-diagonal transition moments. A structure whose design meets both conditions will necessarily have large first or second hyperpolarizabilities. These two conditions are fundamental heuristics for the design of any nonlinear optical structure.

  11. Many-body calculations of low energy eigenstates in magnetic and periodic systems with self healing diffusion Monte Carlo: steps beyond the fixed-phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reboredo, Fernando A.

    The self-healing diffusion Monte Carlo algorithm (SHDMC) [Reboredo, Hood and Kent, Phys. Rev. B 79, 195117 (2009); Reboredo, ibid. 80, 125110 (2009)] is extended to study the ground and excited states of magnetic and periodic systems. A recursive optimization algorithm is derived from the time evolution of the mixed probability density. The mixed probability density is given by an ensemble of electronic configurations (walkers) with complex weights. These complex weights allow the amplitude of the fixed-node wave function to move away from the trial wave function phase. This novel approach is both a generalization of SHDMC and of the fixed-phase approximation [Ortiz, Ceperley and Martin, Phys. Rev. Lett. 71, 2777 (1993)]. When used recursively it improves simultaneously the node and the phase. The algorithm is demonstrated to converge to the nearly exact solutions of model systems with periodic boundary conditions or applied magnetic fields. The method is also applied to obtain low energy excitations with magnetic field or periodic boundary conditions. The potential applications of this new method to study periodic, magnetic, and complex Hamiltonians are discussed.

  12. Impulse Control and Callous-Unemotional Traits Distinguish Patterns of Delinquency and Substance Use in Justice Involved Adolescents: Examining the Moderating Role of Neighborhood Context.

    PubMed

    Ray, James V; Thornton, Laura C; Frick, Paul J; Steinberg, Laurence; Cauffman, Elizabeth

    2016-04-01

    Both callous-unemotional (CU) traits and impulse control are known risk factors associated with delinquency and substance use. However, research is limited on how contextual factors such as neighborhood conditions influence the associations between these two dispositional factors and these two externalizing behaviors. The current study utilized latent class analysis (LCA) to identify unique classes of delinquency and substance use within an ethnically diverse sample (n = 1216) of justice-involved adolescents (ages 13 to 17) from three different sites. Neighborhood disorder, CU traits, and impulse control were all independently associated with membership in classes with more extensive histories of delinquency and substance use. The effects of CU traits and impulse control in distinguishing delinquent classes were invariant across levels of neighborhood disorder, whereas neighborhood disorder moderated the association between impulse control and substance use. Specifically, the probability of being in more severe substance-using classes for those low in impulse control was stronger in neighborhoods with fewer indicators of social and physical disorder.

  13. Variation of fan tone steadiness for several inflow conditions

    NASA Technical Reports Server (NTRS)

    Balombin, J. R.

    1978-01-01

    An amplitude probability density function analysis technique for quantifying the degree of fan noise tone steadiness has been applied to data from a fan tested under a variety of inflow conditions. The test conditions included typical static operation, inflow control by a honeycomb/screen device and forward velocity in a wind tunnel simulating flight. The ratio of mean square sinusoidal-to-random signal content in the fundamental and second harmonic tones was found to vary by more than an order-of-magnitude. Some implications of these results concerning the nature of fan noise generation mechanisms are discussed.

  14. A wave function for stock market returns

    NASA Astrophysics Data System (ADS)

    Ataullah, Ali; Davidson, Ian; Tippett, Mark

    2009-02-01

    The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well but where there is a non-trivial probability of the particle tunneling into the well’s retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and of how the wave function presented by it leads to a probability density which exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent - in contrast to many of the probability density functions on which the received theory of finance is based.

  15. Probability interpretations of intraclass reliabilities.

    PubMed

    Ellis, Jules L

    2013-11-20

    Research where many organizations are rated by different samples of individuals such as clients, patients, or employees frequently uses reliabilities computed from intraclass correlations. Consumers of statistical information, such as patients and policy makers, may not have sufficient background for deciding which levels of reliability are acceptable. It is shown that the reliability is related to various probabilities that may be easier to understand, for example, the proportion of organizations that will be classed significantly above (or below) the mean and the probability that an organization is classed correctly given that it is classed significantly above (or below) the mean. One can view these probabilities as the amount of information of the classification and the correctness of the classification. These probabilities have an inverse relationship: given a reliability, one can 'buy' correctness at the cost of informativeness and conversely. This article discusses how this can be used to make judgments about the required level of reliabilities. Copyright © 2013 John Wiley & Sons, Ltd.

  16. Common Mental Disorders among Occupational Groups: Contributions of the Latent Class Model

    PubMed Central

    Martins Carvalho, Fernando; de Araújo, Tânia Maria

    2016-01-01

    Background. The Self-Reporting Questionnaire (SRQ-20) is widely used for evaluating common mental disorders. However, few studies have evaluated the SRQ-20 measurements performance in occupational groups. This study aimed to describe manifestation patterns of common mental disorders symptoms among workers populations, by using latent class analysis. Methods. Data derived from 9,959 Brazilian workers, obtained from four cross-sectional studies that used similar methodology, among groups of informal workers, teachers, healthcare workers, and urban workers. Common mental disorders were measured by using SRQ-20. Latent class analysis was performed on each database separately. Results. Three classes of symptoms were confirmed in the occupational categories investigated. In all studies, class I met better criteria for suspicion of common mental disorders. Class II discriminated workers with intermediate probability of answers to the items belonging to anxiety, sadness, and energy decrease that configure common mental disorders. Class III was composed of subgroups of workers with low probability to respond positively to questions for screening common mental disorders. Conclusions. Three patterns of symptoms of common mental disorders were identified in the occupational groups investigated, ranging from distinctive features to low probabilities of occurrence. The SRQ-20 measurements showed stability in capturing nonpsychotic symptoms. PMID:27630999
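
    For readers unfamiliar with the method, the sketch below fits a simple latent class model (independent Bernoulli items, estimated by EM) to simulated binary symptom data; the published studies used dedicated LCA software and real SRQ-20 responses, so everything here, including the three-class structure and all parameter values, is a stand-in.

```python
import numpy as np

# Minimal latent class analysis sketch: EM for a mixture of independent Bernoulli
# items (a standard LCA formulation). Data are simulated SRQ-20-style binary items.
rng = np.random.default_rng(5)

def simulate(n, class_weights, item_probs):
    z = rng.choice(len(class_weights), size=n, p=class_weights)
    return (rng.random((n, item_probs.shape[1])) < item_probs[z]).astype(float)

true_weights = np.array([0.2, 0.3, 0.5])                       # hypothetical class sizes
true_items = np.array([np.full(20, 0.8), np.full(20, 0.5), np.full(20, 0.1)])
X = simulate(3000, true_weights, true_items)

def lca_em(X, n_classes, n_iter=300, seed=0):
    r = np.random.default_rng(seed)
    n, m = X.shape
    weights = np.full(n_classes, 1.0 / n_classes)
    probs = r.uniform(0.25, 0.75, size=(n_classes, m))
    for _ in range(n_iter):
        # E-step: posterior class membership given the binary responses.
        log_lik = (X @ np.log(probs).T + (1 - X) @ np.log(1 - probs).T
                   + np.log(weights))
        log_lik -= log_lik.max(axis=1, keepdims=True)
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update class weights and conditional item probabilities.
        weights = resp.mean(axis=0)
        probs = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return weights, probs

w, p = lca_em(X, n_classes=3)
order = np.argsort(p.mean(axis=1))[::-1]          # sort classes by symptom endorsement
print("class weights:", np.round(w[order], 3))
print("mean item probabilities:", np.round(p[order].mean(axis=1), 3))
```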

  17. Polymeric CO: A new class of High Energy Density Material

    NASA Astrophysics Data System (ADS)

    Lipp, Magnus

    2005-03-01

    Covalently bonded extended phases of molecular solids made of first- and second-row elements at high pressures are a new class of material with advanced optical, mechanical and energetic properties. The existence of such extended solids has recently been demonstrated using diamond anvil cells in several systems, including N2, CO2, and CO. However, the microscopic quantities produced at the formidable high-pressure/temperature conditions have limited the characterization of their predicted novel properties including high-energy content. Here we present the first experimental evidence that these extended low-Z solids are indeed high energy density materials via milligram-scale high-pressure synthesis, recovery and characterization of polymeric CO (p-CO). This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.

  18. Leonid's Particle Analyses from Stratospheric Balloon Collection on Xerogel Surfaces

    NASA Technical Reports Server (NTRS)

    Noever, David; Phillips, Tony; Horack, John; Porter, Linda; Myszka, Ed

    1999-01-01

    Recovered from a stratospheric balloon above 20 km on 17-18 November 1998, at least eight candidate microparticles were collected and analyzed from low-density silica xerogel collection plates. Capture time at the Leonids' storm peak was validated locally along the balloon trajectory by direct video imaging of meteor fluence up to 24/hr above 98% of the Earth's atmosphere. At least one 30 micron particle agrees morphologically with a smooth, unmelted spherule and compares most closely in non-volatile elemental ratios (Mg/Si, Al/Si, and Fe/Si) to compositional data in surface/ocean meteorite collections. A Euclidean tree diagram based on composition makes a most probable identification as a non-porous stratospherically collected particle and a least probable identification as terrestrial matter or an ordinary chondrite. If of extraterrestrial origin, the mineralogical class would be consistent with a stony (S) type of silicate, olivine [(Mg,Fe)2SiO4] and pyroxene [(Mg,Fe)SiO3], or oxides, hercynite [(Fe,Mg)Al2O4].

  19. The job content questionnaire in various occupational contexts: applying a latent class model.

    PubMed

    Santos, Kionna Oliveira Bernardes; Araújo, Tânia Maria de; Carvalho, Fernando Martins; Karasek, Robert

    2017-05-17

    To evaluate Job Content Questionnaire (JCQ) performance using the latent class model. We analysed cross-sectional studies conducted in Brazil that examined three occupational categories: petroleum industry workers (n=489), teachers (n=4392) and primary healthcare workers (n=3078), plus 1552 urban workers from a representative sample of the city of Feira de Santana in Bahia, Brazil. An appropriate number of latent classes was extracted and described for each occupational category using latent class analysis, a multivariate method that evaluates constructs and takes into account the latent characteristics underlying the structure of measurement scales. The conditional probabilities of workers belonging to each class were then analysed graphically. Initially, the latent class analysis extracted four classes corresponding to the four job types (active, passive, low strain and high strain) proposed by the Job-Strain model (JSM) and operationalised by the JCQ. However, after taking into consideration the adequacy criteria for evaluating the number of extracted classes, three classes (active, low strain and high strain) were extracted from the studies of urban workers and teachers and four classes (active, passive, low strain and high strain) from the studies of primary healthcare and petroleum industry workers. The four job types proposed by the JSM were identified among primary healthcare and petroleum industry workers-groups with relatively high levels of skill discretion and decision authority. Three job types were identified for teachers and urban workers; however, passive job situations were not found within these groups. The latent class analysis enabled us to describe the conditional standard responses of the job types proposed by the model, particularly in relation to active jobs and high and low strain situations. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
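
    As a rough illustration of the latent class extraction described above, the following Python/NumPy sketch fits a latent class model to binary questionnaire items with a simple EM algorithm and returns the conditional item-endorsement probabilities per class. The simulated responses, item count, and function names are illustrative assumptions, not the authors' data or implementation.

      import numpy as np

      def fit_latent_classes(X, n_classes, n_iter=200, seed=0):
          """EM for a latent class model with binary items.

          X: (n_subjects, n_items) array of 0/1 responses.
          Returns class weights pi (K,) and conditional item-endorsement
          probabilities theta (K, n_items).
          """
          rng = np.random.default_rng(seed)
          n, m = X.shape
          pi = np.full(n_classes, 1.0 / n_classes)
          theta = rng.uniform(0.25, 0.75, size=(n_classes, m))

          for _ in range(n_iter):
              # E-step: responsibility of each class for each subject.
              log_lik = (X[:, None, :] * np.log(theta)[None]
                         + (1 - X[:, None, :]) * np.log(1 - theta)[None]).sum(axis=2)
              log_post = np.log(pi)[None] + log_lik
              log_post -= log_post.max(axis=1, keepdims=True)
              resp = np.exp(log_post)
              resp /= resp.sum(axis=1, keepdims=True)

              # M-step: update class weights and item probabilities.
              pi = resp.mean(axis=0)
              theta = (resp.T @ X) / resp.sum(axis=0)[:, None]
              theta = np.clip(theta, 1e-6, 1 - 1e-6)
          return pi, theta

      # Illustrative use with simulated 0/1 responses to 20 SRQ-like items.
      rng = np.random.default_rng(1)
      X = rng.integers(0, 2, size=(500, 20))
      pi, theta = fit_latent_classes(X, n_classes=3)
      print(pi)              # estimated class proportions
      print(theta.round(2))  # conditional endorsement probabilities per class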

  20. Nuclear binding of progesterone in hen oviduct. Binding to multiple sites in vitro.

    PubMed Central

    Pikler, G M; Webster, R A; Spelsberg, T C

    1976-01-01

    Steroid hormones, including progesterone, are known to bind with high affinity (Kd approximately 1x10(-10)M) to receptor proteins once they enter target cells. This complex (the progesterone-receptor) then undergoes a temperature- and/or salt-dependent activation which allows it to migrate to the cell nucleus and to bind to the deoxyribonucleoproteins. The present studies demonstrate that binding of the hormone-receptor complex in vitro to isolated nuclei from the oviducts of laying hens requires the same conditions as reported previously for other studies of binding in vitro, e.g. the hormone must be complexed to intact and activated receptor. The assay of the nuclear binding by using multiple concentrations of progesterone receptor reveals the presence of more than one class of binding site in the oviduct nuclei. The affinity of each of these classes of binding sites ranges from Kd approximately 1x10(-9)-1x10(-8)M. Assays using free steroid (not complexed with receptor) show no binding to these sites. The binding to each of the classes of sites displays a differential stability to increasing ionic concentrations, suggesting primarily an ionic-type interaction for all classes. Only the highest-affinity class of binding site is capable of binding progesterone receptor under physiological-saline conditions. This class represents 6000-10000 sites per cell nucleus and resembles the sites detected in vivo (Spelsberg, 1976, Biochem. J. 156, 391-398) which cause maximal transcriptional response when saturated with the progesterone receptor. The multiple binding sites for the progesterone receptor either are not present or are found in limited numbers in the nuclei of non-target organs. Differences in extent of binding to the nuclear material between a target tissue (oviduct) and other tissues (spleen or erythrocyte) are markedly dependent on the ionic conditions, and are probably due to binding to different classes of sites in the nuclei. PMID:182147

  1. Robust Estimation of Electron Density From Anatomic Magnetic Resonance Imaging of the Brain Using a Unifying Multi-Atlas Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Shangjie; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California; Hara, Wendy

    Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
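
    The Bayesian combination step described above (fusing an intensity-conditioned density with a location-conditioned density into one posterior) can be sketched per voxel as below. The Gaussian conditionals, grid, and parameter values are illustrative assumptions, not the study's calibrated models.

      import numpy as np
      from scipy.stats import norm

      # Hypothetical per-voxel conditionals on a discretized electron-density axis.
      rho = np.linspace(0.0, 2.0, 1001)                         # relative electron density grid
      p_given_intensity = norm.pdf(rho, loc=1.05, scale=0.15)   # from T1/T2 intensities
      p_given_location = norm.pdf(rho, loc=0.95, scale=0.25)    # from atlas registration

      # Fuse the two conditionals (flat prior assumed); normalize on the grid.
      posterior = p_given_intensity * p_given_location
      posterior /= np.trapz(posterior, rho)

      rho_map = rho[np.argmax(posterior)]         # MAP estimate
      rho_mean = np.trapz(rho * posterior, rho)   # posterior-mean estimate
      print(rho_map, rho_mean)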

  2. Hydrodynamic Model for Density Gradients Instability in Hall Plasmas Thrusters

    NASA Astrophysics Data System (ADS)

    Singh, Sukhmander

    2017-10-01

    There is increasing interest in a correct understanding of purely growing electromagnetic and electrostatic instabilities driven by plasma gradients in Hall thruster devices. In Hall thrusters, which are typically operated with xenon, the thrust is provided by the acceleration of ions in the plasma generated in a discharge chamber. The goal of this paper is to study the instabilities due to gradients of plasma density and the conditions for the growth rate and real part of the frequency in Hall thruster plasmas. Inhomogeneous plasmas are prone to a wide class of eigenmodes induced by inhomogeneities of plasma density, called drift waves and instabilities. The growth rate of the instability depends on the magnetic field, plasma density, ion temperature, wave numbers, and initial drift velocities of the plasma species.

  3. Probability function of breaking-limited surface elevation. [wind generated waves of ocean

    NASA Technical Reports Server (NTRS)

    Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.

    1989-01-01

    The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking zeta sub b(t) is first related to the original wave elevation zeta(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for zeta(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of zeta sub b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.

  4. Probability density function modeling of scalar mixing from concentrated sources in turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Bakosi, J.; Franzese, P.; Boybeyi, Z.

    2007-11-01

    Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth and Pope [Phys. Fluids 29, 387 (1986)] with Durbin's [J. Fluid Mech. 249, 465 (1993)] method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous transport with a nonlocal representation of the near-wall Reynolds stress anisotropy. The presence of walls is incorporated through the imposition of no-slip and impermeability conditions on particles without the use of damping or wall-functions. Information on the turbulent time scale is supplied by the gamma-distribution model of van Slooten et al. [Phys. Fluids 10, 246 (1998)]. Two different micromixing models are compared that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. Single-point velocity and concentration statistics are compared to direct numerical simulation and experimental data at Reτ=1080 based on the friction velocity and the channel half width. The joint model accurately reproduces a wide variety of conditional and unconditional statistics in both physical and composition space.

  5. An iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring

    NASA Astrophysics Data System (ADS)

    Li, J. Y.; Kitanidis, P. K.

    2013-12-01

    Reservoir forecasting and management are increasingly relying on an integrated reservoir monitoring approach, which involves data assimilation to calibrate the complex process of multi-phase flow and transport in the porous medium. The numbers of unknowns and measurements arising in such joint inversion problems are usually very large. The ensemble Kalman filter and other ensemble-based techniques are popular because they circumvent the computational barriers of computing Jacobian matrices and covariance matrices explicitly and allow nonlinear error propagation. These algorithms are very useful but their performance is not well understood and it is not clear how many realizations are needed for satisfactory results. In this presentation we introduce an iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring. It is intended for problems for which the posterior or conditional probability density function is not too different from a Gaussian, despite nonlinearity in the state transition and observation equations. The algorithm generates realizations that have the potential to adequately represent the conditional probability density function (pdf). Theoretical analysis sheds light on the conditions under which this algorithm should work well and explains why some applications require very few realizations while others require many. This algorithm is compared with the classical ensemble Kalman filter (Evensen, 2003) and with Gu and Oliver's (2007) iterative ensemble Kalman filter on a synthetic problem of monitoring a reservoir using wellbore pressure and flux data.
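
    For contrast with the classical filter mentioned above, a minimal stochastic ensemble Kalman filter analysis step can be written in a few lines of NumPy; this is a generic textbook update, not the authors' iterative quasi-linear algorithm, and the state size, observation operator, and data values are illustrative.

      import numpy as np

      def enkf_update(ensemble, obs, H, obs_err_std, rng):
          """One stochastic EnKF analysis step.

          ensemble: (n_state, n_ens) prior realizations
          obs:      (n_obs,) observed data (e.g., wellbore pressures)
          H:        (n_obs, n_state) linearized observation operator
          """
          n_obs, n_ens = obs.size, ensemble.shape[1]
          X = ensemble - ensemble.mean(axis=1, keepdims=True)
          Y = H @ X                                   # predicted-data anomalies
          R = np.eye(n_obs) * obs_err_std**2
          K = X @ Y.T @ np.linalg.inv(Y @ Y.T + (n_ens - 1) * R)  # Kalman gain
          perturbed = obs[:, None] + rng.normal(0, obs_err_std, (n_obs, n_ens))
          return ensemble + K @ (perturbed - H @ ensemble)

      rng = np.random.default_rng(0)
      prior = rng.normal(1.0, 0.3, size=(50, 100))    # 50 unknowns, 100 realizations
      H = np.zeros((5, 50)); H[np.arange(5), np.arange(5)] = 1.0
      obs = np.full(5, 1.2)
      posterior = enkf_update(prior, obs, H, obs_err_std=0.05, rng=rng)
      print(posterior.mean(axis=1)[:5])               # updated estimates at observed cells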

  6. Small and large wetland fragments are equally suited breeding sites for a ground-nesting passerine.

    PubMed

    Pasinelli, Gilberto; Mayer, Christian; Gouskov, Alexandre; Schiegg, Karin

    2008-06-01

    Large habitat fragments are generally thought to host more species and to offer more diverse and/or better quality habitats than small fragments. However, the importance of small fragments for population dynamics in general and for reproductive performance in particular is highly controversial. Using an information-theoretic approach, we examined reproductive performance and probability of local recruitment of color-banded reed buntings Emberiza schoeniclus in relation to the size of 18 wetland fragments in northeastern Switzerland over 4 years. We also investigated whether reproductive performance and recruitment probability were density-dependent. Neither the four measures of reproductive performance (laying date, nest failure probability, fledgling production per territory, fledgling condition) nor recruitment probability was related to wetland fragment size. In terms of fledgling production, however, fragment size interacted with year, indicating that small fragments were better reproductive grounds in some years than large fragments. Reproductive performance and recruitment probability were not density-dependent. Our results suggest that small fragments are equally suited as breeding grounds for the reed bunting as large fragments and should therefore be managed to provide a habitat for this and other specialists occurring in the same habitat. Moreover, large fragments may represent sinks in specific years because a substantial percentage of all breeding pairs in our study area breed in large fragments, and reproductive failure in these fragments due to the regularly occurring floods may have a much stronger impact on regional population dynamics than comparable events in small fragments.

  7. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  8. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  9. Evaluating factors driving population densities of mayfly nymphs in Western Lake Erie

    USGS Publications Warehouse

    Stapanian, Martin A.; Kocovsky, Patrick; Bodamer Scarbro, Betsy L.

    2017-01-01

    Mayfly (Hexagenia spp.) nymphs have been widely used as indicators of water and substrate quality in lakes. Thermal stratification and the subsequent formation of benthic hypoxia may result in nymph mortality. Our goal was to identify potential associations between recent increases in temperature and eutrophication, which exacerbate hypoxic events in lakes, and mayfly populations in Lake Erie. Nymphs were collected during April–May 1999–2014. We used wind and temperature data to calculate four measures of thermal stratification, which drives hypoxic events, during summers of 1998–2013. Bottom trawl data collected during August 1998–2013 were used to estimate annual biomass of fishes known to be predators of mayfly nymphs. We used Akaike's Information Criterion to identify the best one- and two-predictor regression models of annual population densities (N/m2) of age-1 and age-2 nymphs, in which candidate predictors included the four measures of stratification, predator fish biomass, competition, and population densities of age-2 (for age-1) and age-1 (for age-2) nymphs from the previous year. Densities of both age classes of nymphs declined over the time series. Population densities of age-1 and age-2 nymphs from the previous year best predicted annual population densities of nymphs of both age classes. However, hypoxic conditions (indicated by stratification) and predation both had negative effects on annual population density of mayflies. Compared with predation, hypoxia had an inconsistent effect on annual nymph density. The increases in temperature and eutrophication in Lake Erie, which exacerbate hypoxic events, may have drastic effects on the mayfly populations.
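
    The model-selection step (ranking candidate one- and two-predictor regression models by Akaike's Information Criterion) can be sketched as follows; the predictor names and simulated values are placeholders, not the Lake Erie data.

      import numpy as np
      from itertools import combinations

      def aic_ols(y, X):
          """AIC for an ordinary least-squares fit with Gaussian errors."""
          X1 = np.column_stack([np.ones(len(y)), X])
          beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
          rss = np.sum((y - X1 @ beta) ** 2)
          n, k = len(y), X1.shape[1] + 1        # +1 for the error variance
          return n * np.log(rss / n) + 2 * k

      rng = np.random.default_rng(0)
      n = 16                                     # e.g., 16 survey years
      preds = {"stratification": rng.normal(size=n),
               "predator_biomass": rng.normal(size=n),
               "lagged_density": rng.normal(size=n)}
      y = 0.8 * preds["lagged_density"] + rng.normal(scale=0.3, size=n)

      # Candidate one- and two-predictor models, scored by AIC.
      models = [(name,) for name in preds] + list(combinations(preds, 2))
      scores = {m: aic_ols(y, np.column_stack([preds[p] for p in m])) for m in models}
      best = min(scores, key=scores.get)
      print(best, round(scores[best], 2))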

  10. Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyż, W.; Zalewski, K.

    2005-10-01

    It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.

  11. Use of uninformative priors to initialize state estimation for dynamical systems

    NASA Astrophysics Data System (ADS)

    Worthy, Johnny L.; Holzinger, Marcus J.

    2017-10-01

    The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
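
    The key point that a uniform density does not remain uniform under a nonlinear change of state variables can be checked numerically; the reciprocal transformation below is an illustrative stand-in for the state-space mappings discussed in the paper.

      import numpy as np

      # Samples uniform in x on [1, 10]; transform to y = 1/x (an illustrative
      # nonlinear reparameterization, e.g. range -> reciprocal range).
      rng = np.random.default_rng(0)
      x = rng.uniform(1.0, 10.0, size=200_000)
      y = 1.0 / x

      # Empirical density of y via a histogram.
      hist, edges = np.histogram(y, bins=50, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])

      # Analytic density from the transformation theorem:
      # p_y(y) = p_x(x(y)) * |dx/dy| = (1/9) * 1/y^2 on [0.1, 1].
      analytic = (1.0 / 9.0) / centers**2
      print(np.max(np.abs(hist - analytic)))   # small relative to the density values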

  12. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

    Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum likelihood for a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
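
    A minimal version of the maximum-likelihood baseline (per-class Gaussian densities, assignment to the class with the highest likelihood) might look like the following; the simulated band values and class count are illustrative, not the Landsat training data.

      import numpy as np
      from scipy.stats import multivariate_normal

      def train_ml(X_train, y_train):
          """Estimate per-class mean and covariance for Gaussian ML classification."""
          classes = np.unique(y_train)
          return {c: (X_train[y_train == c].mean(axis=0),
                      np.cov(X_train[y_train == c], rowvar=False))
                  for c in classes}

      def classify_ml(X, params):
          """Assign each pixel vector to the class with the highest likelihood."""
          classes = sorted(params)
          ll = np.column_stack([multivariate_normal.logpdf(X, mean=mu, cov=S)
                                for mu, S in (params[c] for c in classes)])
          return np.array(classes)[np.argmax(ll, axis=1)]

      # Illustrative 4-band pixels from two spectral classes.
      rng = np.random.default_rng(0)
      X0 = rng.normal([40, 60, 50, 30], 5, size=(200, 4))
      X1 = rng.normal([70, 55, 80, 90], 5, size=(200, 4))
      X = np.vstack([X0, X1]); y = np.r_[np.zeros(200, int), np.ones(200, int)]
      params = train_ml(X, y)
      print((classify_ml(X, params) == y).mean())   # training accuracy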

  13. Semantic Labelling of Ultra Dense Mls Point Clouds in Urban Road Corridors Based on Fusing Crf with Shape Priors

    NASA Astrophysics Data System (ADS)

    Yao, W.; Polewski, P.; Krzystek, P.

    2017-09-01

    In this paper, a labelling method for the semantic analysis of ultra-high point density MLS data (up to 4000 points/m2) in urban road corridors is developed based on combining a conditional random field (CRF) for the context-based classification of 3D point clouds with shape priors. The CRF uses a Random Forest (RF) for generating the unary potentials of nodes and a variant of the contrast-sensitive Potts model for the pair-wise potentials of node edges. The foundations of the classification are various geometric features derived by means of co-variance matrices and a local accumulation map of spatial coordinates based on local neighbourhoods. Meanwhile, in order to cope with the ultra-high point density, a plane-based region growing method combined with a rule-based classifier is applied to first fix semantic labels for man-made objects. Once such points, which usually account for the majority of the data, are pre-labeled, the CRF classifier can be solved by optimizing the discriminative probability for nodes within a subgraph structure that excludes the pre-labeled nodes. The process can be viewed as an evidence fusion step inferring a degree of belief for point labelling from different sources. The MLS data used for this study were acquired by vehicle-borne Z+F phase-based laser scanner measurement, which permits the generation of a point cloud with an ultra-high sampling rate and accuracy. The test sites are parts of Munich City, which is assumed to consist of seven object classes including impervious surfaces, tree, building roof/facade, low vegetation, vehicle and pole. The competitive classification performance can be explained by diverse factors: e.g. the above-ground height highlights the vertical dimension of houses, trees and even cars, but the performance is also attributed to the decision-level fusion of the graph-based contextual classification approach with shape priors. The use of context-based classification methods mainly contributed to smoothing of the labelling by removing outliers and to the improvement in underrepresented object classes. In addition, the routine operation of context-based classification for such high-density MLS data becomes much more efficient, being comparable to non-contextual classification schemes.

  14. Stereo photo series for quantifying natural fuels. Volume XII: Post-hurricane fuels in forests of the Southeast United States.

    Treesearch

    Robert E. Vihnanek; Cameron S. Balog; Clinton S. Wright; Roger D. Ottmar; Jeffrey W. Kelly

    2009-01-01

    Two series of single and stereo photographs display a range of natural conditions and fuel loadings in post-hurricane forests in the southeastern United States. Each group of photos includes inventory information summarizing vegetation composition, structure and loading, woody material loading and density by size class, forest floor loading, and various site...

  15. Stereo photo series for quantifying natural fuels Volume IX: oak/juniper in southern Arizona and New Mexico.

    Treesearch

    Roger D. Ottmar; Robert E. Vihnanek; Clinton S. Wright; Geoffrey B. Seymour

    2007-01-01

    A series of single and stereo photographs display a range of natural conditions and fuel loadings in evergreen and deciduous oak/juniper woodland and savannah ecosystems in southern Arizona and New Mexico. This group of photos includes inventory data summarizing vegetation composition, structure, and loading; woody material loading and density by size class; forest...

  16. Stocking levels and underlying assumptions for uneven-aged Ponderosa Pine stands.

    Treesearch

    P.H. Cochran

    1992-01-01

    Potential Problems With Q-Values: Many ponderosa pine stands have a limited number of size classes, and it may be desirable to carry very large trees through several cutting cycles. Large numbers of trees below commercial size are not needed to provide adequate numbers of future replacement trees. Under these conditions, application of stand density index (SDI) can have...

  17. Diffusion Monte Carlo calculations of Xenon and Krypton at High Pressure

    NASA Astrophysics Data System (ADS)

    Shulenburger, Luke; Mattsson, Thomas R.

    2011-06-01

    Ab initio calculations based on density functional theory (DFT) have proven a valuable tool in understanding the properties of materials at extreme conditions. However, there are entire classes of materials where the current limitations of DFT cast doubt upon the predictive power of the method. These include so called strongly correlated systems and materials where van der Waals forces are important. Diffusion Monte Carlo (DMC) can treat materials with a different class of approximations that have generally proven to be more accurate. The use of DMC together with DFT may therefore improve the predictive capability of the ab initio calculation of materials at extreme conditions. We present two examples of this approach. In the first we use DMC total energies to address the discrepancy between DFT and diamond anvil cell melt curves of Xe. In the second, DMC is used to address the choice of density functional used in calculations of the Kr hugoniot. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000. Belonoshko et al. PRB 74, 054114 (2006).

  18. Influence of watershed topographic and socio-economic attributes on the climate sensitivity of global river water quality

    NASA Astrophysics Data System (ADS)

    Khan, Afed U.; Jiang, Jiping; Wang, Peng; Zheng, Yi

    2017-10-01

    Surface waters exhibit regionalization due to various climatic conditions and anthropogenic activities. Here we assess the impact of topographic and socio-economic factors on the climate sensitivity of surface water quality, estimated using an elasticity approach (climate elasticity of water quality (CEWQ)), and identify potential risks of instability in different regions and climatic conditions. Large global datasets were used for 12 main water quality parameters from 43 water quality monitoring stations located at large major rivers. The results demonstrated that precipitation elasticity shows higher sensitivity to topographic and socio-economic determinants as compared to temperature elasticity. In tropical climate class (A), gross domestic product (GDP) played an important role in stabilizing the CEWQ. In temperate climate class (C), GDP played the same stabilizing role, while the runoff coefficient, slope, and population density fuelled the risk of instability. The results implied that watersheds with a lower runoff coefficient, high population density, over-fertilization and manure application face a higher risk of instability. We discuss the socio-economic and topographic factors that cause instability of CEWQ parameters and conclude with some suggestions for watershed managers to bring sustainability to freshwater bodies.

  19. Stan : A Probabilistic Programming Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.

    Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.

  20. Stan : A Probabilistic Programming Language

    DOE PAGES

    Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.; ...

    2017-01-01

    Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
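
    As the abstract notes, Stan can be driven from Python through pystan. A minimal example, assuming the pystan 2.x interface (pystan 3 replaced StanModel with stan.build), is sketched below with a toy normal-mean model.

      import pystan

      # A Stan program defining a log posterior for a normal mean with known scale.
      model_code = """
      data {
        int<lower=0> N;
        vector[N] y;
      }
      parameters {
        real mu;
      }
      model {
        mu ~ normal(0, 10);        // prior
        y ~ normal(mu, 1);         // likelihood
      }
      """

      data = {"N": 5, "y": [1.2, 0.8, 1.5, 0.9, 1.1]}
      sm = pystan.StanModel(model_code=model_code)        # compiles the C++ model
      fit = sm.sampling(data=data, iter=2000, chains=4)   # NUTS/HMC sampling
      print(fit)                                          # posterior summary for mu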

  1. Quantitative assessment of mineral resources with an application to petroleum geology

    USGS Publications Warehouse

    Harff, Jan; Davis, J.C.; Olea, R.A.

    1992-01-01

    The probability of occurrence of natural resources, such as petroleum deposits, can be assessed by a combination of multivariate statistical and geostatistical techniques. The area of study is partitioned into regions that are as homogeneous as possible internally while simultaneously as distinct as possible. Fisher's discriminant criterion is used to select geological variables that best distinguish productive from nonproductive localities, based on a sample of previously drilled exploratory wells. On the basis of these geological variables, each wildcat well is assigned to the production class (dry or producer in the two-class case) for which the Mahalanobis' distance from the observation to the class centroid is a minimum. Universal kriging is used to interpolate values of the Mahalanobis' distances to all locations not yet drilled. The probability that an undrilled locality belongs to the productive class can be found, using the kriging estimation variances to assess the probability of misclassification. Finally, Bayes' relationship can be used to determine the probability that an undrilled location will be a discovery, regardless of the production class in which it is placed. The method is illustrated with a study of oil prospects in the Lansing/Kansas City interval of western Kansas, using geological variables derived from well logs. © 1992 Oxford University Press.
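
    The assignment step described above (placing a well in the production class whose centroid is nearest in Mahalanobis distance) can be sketched as follows; the three well-log variables, centroids, and covariance are illustrative placeholders, not the Kansas data.

      import numpy as np

      def mahalanobis_assign(x, centroids, cov):
          """Assign an observation to the class with minimum Mahalanobis distance."""
          inv = np.linalg.inv(cov)
          d2 = {c: float((x - mu) @ inv @ (x - mu)) for c, mu in centroids.items()}
          return min(d2, key=d2.get), d2

      # Illustrative class centroids in a 3-variable well-log feature space,
      # with a pooled covariance matrix estimated from drilled wells.
      centroids = {"producer": np.array([2.1, 0.8, 15.0]),
                   "dry": np.array([1.4, 0.3, 9.0])}
      cov = np.diag([0.25, 0.04, 4.0])

      wildcat = np.array([1.9, 0.7, 13.0])       # variables at an undrilled location
      label, d2 = mahalanobis_assign(wildcat, centroids, cov)
      print(label, d2)                           # nearest class and squared distances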

  2. Fokker-Planck description of conductance-based integrate-and-fire neuronal networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovacic, Gregor; Tao, Louis; Rangan, Aaditya V.

    2009-08-15

    Steady dynamics of coupled conductance-based integrate-and-fire neuronal networks in the limit of small fluctuations is studied via the equilibrium states of a Fokker-Planck equation. An asymptotic approximation for the membrane-potential probability density function is derived and the corresponding gain curves are found. Validity conditions are discussed for the Fokker-Planck description and verified via direct numerical simulations.

  3. Dynamics of Markets

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2009-09-01

    Preface; 1. Econophysics: why and what; 2. Neo-classical economic theory; 3. Probability and stochastic processes; 4. Introduction to financial economics; 5. Introduction to portfolio selection theory; 6. Scaling, pair correlations, and conditional densities; 7. Statistical ensembles: deducing dynamics from time series; 8. Martingale option pricing; 9. FX market globalization: evolution of the dollar to worldwide reserve currency; 10. Macroeconomics and econometrics: regression models vs. empirically based modeling; 11. Complexity; Index.

  4. A computationally efficient ductile damage model accounting for nucleation and micro-inertia at high triaxialities

    DOE PAGES

    Versino, Daniele; Bronkhorst, Curt Allan

    2018-01-31

    The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. Here, the results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.

  5. Spatial analysis of macro-level bicycle crashes using the class of conditional autoregressive models.

    PubMed

    Saha, Dibakar; Alluri, Priyanka; Gan, Albert; Wu, Wanyang

    2018-02-21

    The objective of this study was to investigate the relationship between bicycle crash frequency and its contributing factors at the census block group level in Florida, USA. Crashes aggregated over the census block groups tend to be clustered (i.e., spatially dependent) rather than randomly distributed. To account for the effect of spatial dependence across the census block groups, the class of conditional autoregressive (CAR) models was employed within the hierarchical Bayesian framework. Based on four years (2011-2014) of crash data, total and fatal-and-severe injury bicycle crash frequencies were modeled as a function of a large number of variables representing demographic and socio-economic characteristics, roadway infrastructure and traffic characteristics, and bicycle activity characteristics. This study explored and compared the performance of two CAR models, namely the Besag model and the Leroux model, in crash prediction. The Besag models, which differ from the Leroux models in how spatial autocorrelation is specified, were found to fit the data better. A 95% Bayesian credible interval was selected to identify the variables that had a credible impact on bicycle crashes. A total of 21 variables were found to be credible in the total crash model, while 18 variables were found to be credible in the fatal-and-severe injury crash model. Population, daily vehicle miles traveled, age cohorts, household automobile ownership, density of urban roads by functional class, bicycle trip miles, and bicycle trip intensity had positive effects in both the total and fatal-and-severe crash models. Educational attainment variables, truck percentage, and density of rural roads by functional class were found to be negatively associated with both total and fatal-and-severe bicycle crash frequencies. Published by Elsevier Ltd.

  6. Characterizing the performance of XOR games and the Shannon capacity of graphs.

    PubMed

    Ramanathan, Ravishankar; Kay, Alastair; Murta, Gláucia; Horodecki, Paweł

    2014-12-12

    In this Letter we give a set of necessary and sufficient conditions such that quantum players of a two-party XOR game cannot perform any better than classical players. With any such game, we associate a graph and examine its zero-error communication capacity. This allows us to specify a broad new class of graphs for which the Shannon capacity can be calculated. The conditions also enable the parametrization of new families of games that have no quantum advantage for arbitrary input probability distributions, up to certain symmetries. In the future, these might be used in information-theoretic studies on reproducing the set of quantum nonlocal correlations.

  7. Clustering the Orion B giant molecular cloud based on its molecular emission.

    PubMed

    Bron, Emeric; Daudon, Chloé; Pety, Jérôme; Levrier, François; Gerin, Maryvonne; Gratier, Pierre; Orkisz, Jan H; Guzman, Viviana; Bardeau, Sébastien; Goicoechea, Javier R; Liszt, Harvey; Öberg, Karin; Peretto, Nicolas; Sievers, Albrecht; Tremblin, Pascal

    2018-02-01

    Previous attempts at segmenting molecular line maps of molecular clouds have focused on using position-position-velocity data cubes of a single molecular line to separate the spatial components of the cloud. In contrast, wide field spectral imaging over a large spectral bandwidth in the (sub)mm domain now allows one to combine multiple molecular tracers to understand the different physical and chemical phases that constitute giant molecular clouds (GMCs). We aim at using multiple tracers (sensitive to different physical processes and conditions) to segment a molecular cloud into physically/chemically similar regions (rather than spatially connected components), thus disentangling the different physical/chemical phases present in the cloud. We use a machine learning clustering method, namely the Meanshift algorithm, to cluster pixels with similar molecular emission, ignoring spatial information. Clusters are defined around each maximum of the multidimensional Probability Density Function (PDF) of the line integrated intensities. Simple radiative transfer models were used to interpret the astrophysical information uncovered by the clustering analysis. A clustering analysis based only on the J = 1-0 lines of three isotopologues of CO proves sufficient to reveal distinct density/column density regimes (n_H ~ 100 cm^-3, ~500 cm^-3, and >1000 cm^-3), closely related to the usual definitions of diffuse, translucent and high-column-density regions. Adding two UV-sensitive tracers, the J = 1-0 line of HCO+ and the N = 1-0 line of CN, allows us to distinguish two clearly distinct chemical regimes, characteristic of UV-illuminated and UV-shielded gas. The UV-illuminated regime shows overbright HCO+ and CN emission, which we relate to a photochemical enrichment effect. We also find a tail of high CN/HCO+ intensity ratio in UV-illuminated regions. Finer distinctions in density classes (n_H ~ 7 × 10^3 cm^-3 and ~4 × 10^4 cm^-3) for the densest regions are also identified, likely related to the higher critical density of the CN and HCO+ (1-0) lines. These distinctions are only possible because the high-density regions are spatially resolved. Molecules are versatile tracers of GMCs because their line intensities bear the signature of the physics and chemistry at play in the gas. The association of simultaneous multi-line, wide-field mapping and powerful machine learning methods such as the Meanshift clustering algorithm reveals how to decode the complex information available in these molecular tracers.
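
    A stripped-down version of the clustering step (Meanshift on per-pixel line-intensity vectors, ignoring spatial coordinates) can be written with scikit-learn as below; the simulated tracer intensities and the two intensity regimes are illustrative, not the Orion B maps.

      import numpy as np
      from sklearn.cluster import MeanShift, estimate_bandwidth

      # Illustrative per-pixel integrated intensities for three CO-like tracers,
      # with two blended intensity "regimes" standing in for cloud phases.
      rng = np.random.default_rng(0)
      diffuse = rng.normal([5.0, 0.5, 0.1], [1.0, 0.2, 0.05], size=(2000, 3))
      dense = rng.normal([30.0, 8.0, 2.0], [4.0, 1.5, 0.5], size=(500, 3))
      X = np.vstack([diffuse, dense])

      # Cluster pixels around the local maxima of the multidimensional PDF
      # of intensities; spatial coordinates are deliberately ignored.
      bandwidth = estimate_bandwidth(X, quantile=0.2, n_samples=500)
      labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(X)
      print(np.unique(labels, return_counts=True))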

  8. Investigation of estimators of probability density functions

    NASA Technical Reports Server (NTRS)

    Speed, F. M.

    1972-01-01

    Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jennings, Jeff; Levesque, Emily M., E-mail: emsque@uw.edu

    We have identified the H α absorption feature as a new spectroscopic diagnostic of luminosity class in K- and M-type stars. From high-resolution spectra of 19 stars with well-determined physical properties (including effective temperatures and stellar radii), we measured equivalent widths for H α and the Ca ii triplet and examined their dependence on both luminosity class and stellar radius. H α shows a strong relation with both luminosity class and radius that extends down to late M spectral types. This behavior in H α has been predicted as a result of the density-dependent overpopulation of the metastable 2s level in hydrogen, an effect that should become dominant for Balmer line formation in non-LTE conditions. We conclude that this new metallicity-insensitive diagnostic of luminosity class in cool stars could serve as an effective means of discerning between populations such as Milky Way giants and supergiant members of background galaxies.

  10. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    Density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum... An essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for...

  11. Weak Measurement and Quantum Smoothing of a Superconducting Qubit

    NASA Astrophysics Data System (ADS)

    Tan, Dian

    In quantum mechanics, the measurement outcome of an observable in a quantum system is intrinsically random, yielding a probability distribution. The state of the quantum system can be described by a density matrix rho(t), which depends on the information accumulated until time t, and represents our knowledge about the system. The density matrix rho(t) gives probabilities for the outcomes of measurements at time t. Further probing of the quantum system allows us to refine our prediction in hindsight. In this thesis, we experimentally examine a quantum smoothing theory in a superconducting qubit by introducing an auxiliary matrix E(t) which is conditioned on information obtained from time t to a final time T. With the complete information before and after time t, the pair of matrices [rho(t), E(t)] can be used to make smoothed predictions for the measurement outcome at time t. We apply the quantum smoothing theory in the case of continuous weak measurement unveiling the retrodicted quantum trajectories and weak values. In the case of strong projective measurement, while the density matrix rho(t) with only diagonal elements in a given basis |n〉 may be treated as a classical mixture, we demonstrate a failure of this classical mixture description in determining the smoothed probabilities for the measurement outcome at time t with both diagonal rho(t) and diagonal E(t). We study the correlations between quantum states and weak measurement signals and examine aspects of the time symmetry of continuous quantum measurement. We also extend our study of quantum smoothing theory to the case of resonance fluorescence of a superconducting qubit with homodyne measurement and observe some interesting effects such as the modification of the excited state probabilities, weak values, and evolution of the predicted and retrodicted trajectories.

  12. The effectiveness of tape playbacks in estimating Black Rail densities

    USGS Publications Warehouse

    Legare, M.; Eddleman, W.R.; Buckley, P.A.; Kelly, C.

    1999-01-01

    Tape playback is often the only efficient technique to survey for secretive birds. We measured the vocal responses and movements of radio-tagged black rails (Laterallus jamaicensis; 26 M, 17 F) to playback of vocalizations at 2 sites in Florida during the breeding seasons of 1992-95. We used coefficients from logistic regression equations to model probability of a response conditional on the birds' sex, nesting status, distance to playback source, and time of survey. With a probability of 0.811, nonnesting male black rails were most likely to respond to playback, while nesting females were the least likely to respond (probability = 0.189). We used linear regression to determine daily, monthly and annual variation in response from weekly playback surveys along a fixed route during the breeding seasons of 1993-95. Significant sources of variation in the regression model were month (F(3,48) = 3.89, P = 0.014), year (F(2,48) = 9.37, P < 0.001), temperature (F(1,48) = 5.44, P = 0.024), and month × year (F(5,48) = 2.69, P = 0.031). The model was highly significant (P < 0.001) and explained 54% of the variation of mean response per survey period (r2 = 0.54). We combined response probability data from radio-tagged black rails with playback survey route data to provide a density estimate of 0.25 birds/ha for the St. Johns National Wildlife Refuge. The relation between the number of black rails heard during playback surveys and the actual number present was influenced by a number of variables. We recommend caution when making density estimates from tape playback surveys.

  13. Evidential analysis of difference images for change detection of multitemporal remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Yin; Peng, Lijuan; Cremers, Armin B.

    2018-03-01

    In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on Dempster-Shafer's theory of evidence (DST). In most unsupervised change detection methods, the probability distribution of the difference image is assumed to be characterized by mixture models, whose parameters are estimated by the expectation maximization (EM) method. However, the main drawback of the EM method is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we first develop an evidence theory based EM method (EEM) which incorporates spatial contextual information into EM by iteratively fusing the belief assignments of neighboring pixels into that of the central pixel. Secondly, an evidential labeling method in the sense of maximizing a posteriori probability (MAP) is proposed in order to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of a difference image. Then it iteratively fuses class conditional information and spatial contextual information, and updates labels and class parameters. Finally it converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.
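
    The evidence-fusion ingredient used by EEM (combining belief assignments from neighboring pixels) can be illustrated with Dempster's rule of combination over a two-hypothesis frame; this is a generic sketch of the rule with made-up mass values, not the authors' implementation.

      from itertools import product

      def dempster_combine(m1, m2):
          """Combine two mass functions over the same frame of discernment.

          Masses are dicts mapping frozenset focal elements to mass values.
          """
          combined, conflict = {}, 0.0
          for (a, wa), (b, wb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + wa * wb
              else:
                  conflict += wa * wb
          # Normalize by the non-conflicting mass.
          return {k: v / (1.0 - conflict) for k, v in combined.items()}

      C, U = frozenset({"change"}), frozenset({"no-change"})
      theta = C | U                               # total ignorance
      center = {C: 0.6, U: 0.3, theta: 0.1}       # belief from the central pixel
      neighbor = {C: 0.5, U: 0.2, theta: 0.3}     # belief from a neighboring pixel
      print(dempster_combine(center, neighbor))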

  14. Evaluating detection probabilities for American marten in the Black Hills, South Dakota

    USGS Publications Warehouse

    Smith, Joshua B.; Jenks, Jonathan A.; Klaver, Robert W.

    2007-01-01

    Assessing the effectiveness of monitoring techniques designed to determine presence of forest carnivores, such as American marten (Martes americana), is crucial for validation of survey results. Although comparisons between techniques have been made, little attention has been paid to the issue of detection probabilities (p). Thus, the underlying assumption has been that detection probabilities equal 1.0. We used presence-absence data obtained from a track-plate survey in conjunction with results from a saturation-trapping study to derive detection probabilities when marten occurred at high (>2 marten/10.2 km2) and low (???1 marten/10.2 km2) densities within 8 10.2-km2 quadrats. Estimated probability of detecting marten in high-density quadrats was p = 0.952 (SE = 0.047), whereas the detection probability for low-density quadrats was considerably lower (p = 0.333, SE = 0.136). Our results indicated that failure to account for imperfect detection could lead to an underestimation of marten presence in 15-52% of low-density quadrats in the Black Hills, South Dakota, USA. We recommend that repeated site-survey data be analyzed to assess detection probabilities when documenting carnivore survey results.

  15. Self-imposed timeouts under increasing response requirements.

    NASA Technical Reports Server (NTRS)

    Dardano, J. F.

    1973-01-01

    Three male White Carneaux pigeons were used in the investigation. None of the results obtained contradicts the interpretation of self-imposed timeouts as an escape response reinforced by the removal of unfavorable reinforcement conditions, although some details of the performances reflect either a weak control and/or operation of other controlling variables. Timeout key responding can be considered as one of several classes of behavior having a low probability of occurrence, all of which compete with the behavior maintained by positive reinforcement schedule.

  16. Ensemble learning and model averaging for material identification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Basener, William F.

    2017-05-01

    In this paper we present a method for identifying the material contained in a pixel or region of pixels in a hyperspectral image. An identification process can be performed on a spectrum from image pixels that have been pre-determined to be of interest, generally comparing the spectrum from the image to spectra in an identification library. The metric for comparison used in this paper is a Bayesian probability for each material. This probability can be computed either from Bayes' theorem applied to normal distributions for each library spectrum or using model averaging. Using probabilities has the advantage that the probabilities can be summed over spectra for any material class to obtain a class probability. For example, the probability that the spectrum of interest is a fabric is equal to the sum of all probabilities for fabric spectra in the library. We can do the same to determine the probability for a specific type of fabric, or any level of specificity contained in our library. Probabilities not only tell us which material is most likely, they tell us how confident we can be in the material's presence; a probability close to 1 indicates near certainty of the presence of a material in the given class, and a probability close to 0.5 indicates that we cannot know if the material is present at the given level of specificity. This is much more informative than a detection score from a target detection algorithm or a label from a classification algorithm. In this paper we present results in the form of a hierarchical tree with probabilities for each node. We use Forest Radiance imagery with 159 bands.
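
    The summation idea (per-spectrum posterior probabilities that can be added up over any material class in the library hierarchy) can be sketched as follows; the library spectra, noise level, and isotropic Gaussian likelihood are illustrative simplifications, not the Forest Radiance library or the paper's model-averaging variant.

      import numpy as np

      rng = np.random.default_rng(0)
      n_bands = 159

      # Hypothetical identification library: material name -> reference spectrum.
      library = {
          "fabric_a": rng.uniform(0.2, 0.4, n_bands),
          "fabric_b": rng.uniform(0.2, 0.4, n_bands),
          "paint_a": rng.uniform(0.5, 0.7, n_bands),
      }
      class_of = {"fabric_a": "fabric", "fabric_b": "fabric", "paint_a": "paint"}

      # Pixel spectrum of interest (here: fabric_a plus noise).
      pixel = library["fabric_a"] + rng.normal(0, 0.02, n_bands)

      # Per-spectrum posterior probabilities from isotropic Gaussian likelihoods
      # with equal priors (a simplification of the per-spectrum normal models).
      sigma = 0.02
      log_lik = {name: -0.5 * np.sum((pixel - ref) ** 2) / sigma**2
                 for name, ref in library.items()}
      m = max(log_lik.values())
      weights = {name: np.exp(v - m) for name, v in log_lik.items()}
      z = sum(weights.values())
      prob = {name: w / z for name, w in weights.items()}

      # Class probabilities are sums of member-spectrum probabilities.
      class_prob = {}
      for name, p in prob.items():
          class_prob[class_of[name]] = class_prob.get(class_of[name], 0.0) + p
      print(prob, class_prob)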

  17. Relation Between Firing Statistics of Spiking Neuron with Delayed Fast Inhibitory Feedback and Without Feedback

    NASA Astrophysics Data System (ADS)

    Vidybida, Alexander; Shchur, Olha

    We consider a class of spiking neuronal models, defined by a set of conditions typical for basic threshold-type models, such as the leaky integrate-and-fire or the binding neuron model and also for some artificial neurons. A neuron is fed with a Poisson process. Each output impulse is applied to the neuron itself after a finite delay Δ. This impulse acts as being delivered through a fast Cl-type inhibitory synapse. We derive a general relation which allows calculating exactly the probability density function (pdf) p(t) of output interspike intervals of a neuron with feedback based on known pdf p0(t) for the same neuron without feedback and on the properties of the feedback line (the Δ value). Similar relations between corresponding moments are derived. Furthermore, we prove that the initial segment of pdf p0(t) for a neuron with a fixed threshold level is the same for any neuron satisfying the imposed conditions and is completely determined by the input stream. For the Poisson input stream, we calculate that initial segment exactly and, based on it, obtain exactly the initial segment of pdf p(t) for a neuron with feedback. That is the initial segment of p(t) is model-independent as well. The obtained expressions are checked by means of Monte Carlo simulation. The course of p(t) has a pronounced peculiarity, which makes it impossible to approximate p(t) by Poisson or another simple stochastic process.

  18. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probabilities that each candidate is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the method can be updated to adaptively accommodate added data and added candidate probability models. The method is applied to uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
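
    A hedged sketch of the propagation idea under stated assumptions: samples are drawn once from a single importance density, a (notionally expensive) response is evaluated on them, and the same samples are then reweighted under each plausible candidate distribution before model averaging. The candidate distributions, multimodel probabilities and the response g(x) = x**2 are placeholders, not the paper's plate-buckling problem.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Plausible candidate models for an uncertain input (illustrative, not the paper's):
    candidates = [stats.norm(loc=1.0, scale=0.20),
                  stats.lognorm(s=0.18, scale=1.0),
                  stats.gamma(a=25.0, scale=0.04)]
    model_probs = np.array([0.5, 0.3, 0.2])        # multimodel probabilities (assumed)

    # One importance density broad enough to cover all candidates; sample and evaluate once.
    q = stats.norm(loc=1.0, scale=0.30)
    x = q.rvs(5000, random_state=rng)
    g = x**2                                        # stand-in for the expensive response

    # Reweight the same samples under each candidate model, then average over models.
    means = []
    for m in candidates:
        w = m.pdf(x) / q.pdf(x)
        means.append(np.sum(w * g) / np.sum(w))     # self-normalised importance sampling
    means = np.array(means)
    print("per-model E[g]:", means)
    print("model-averaged E[g]:", float(model_probs @ means))
    ```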

  19. Epidemiology of school accidents during a six school-year period in one region in Poland.

    PubMed

    Sosnowska, Stefania; Kostka, Tomasz

    2003-01-01

    The aim of the study was to analyse the incidence of school accidents in relation to school size, urban/rural environment and the conditions of physical education classes. 202 primary schools with nearly 50,000 students aged 7-15 years were studied during a 6-year period in the Włocławek region in Poland. There were in total 3274 school accidents per 293,000 student-years. Accidents during breaks (36.6%) and physical education (33.2%) were most common. Accidents most frequently took place in the schoolyard (29.7%), the gymnasium (20.2%), and in corridors and on stairs (25.2%). After adjustment for students' age and sex, student-staff ratio and duration of school hours, an urban environment increased the probability of accident (OR: 1.25; 95% CI: 1.14-1.38). Middle-size schools (8-23 classes) had an accident rate similar to that of small schools (OR: 0.93; 95% CI: 0.83-1.04), while schools with 24-32 classes (OR: 1.26; 95% CI: 1.10-1.43) and with > or = 33 classes (OR: 1.36; 95% CI: 1.17-1.58) had increased accident rates. The presence of a gymnasium was also associated with an increased probability of accident (OR: 1.49; 95% CI: 1.38-1.61). Urban environment, larger school size and equipment with a full-size gymnasium are important and independent risk factors for school accidents. These findings provide some new insights into the epidemiology of school-related accidents and may be useful information for the planning of strategies to reduce accident incidence in schools.

  20. Nonstationary envelope process and first excursion probability.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1972-01-01

    The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of a nonstationary random process possessing an evolutionary power spectral density. The density function, the joint density function, the moment function, and the level-crossing rate of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.

  1. Avian use of Sheyenne Lake and associated habitats in central North Dakota

    USGS Publications Warehouse

    Faanes, Craig A.

    1982-01-01

    A study of avian use of various habitats was conducted in the Sheyenne Lake region of central North Dakota during April-June 1980. Population counts of birds were made in wetlands of various classes, prairie thickets, upland native prairie, shelterbelts, and cropland. About 22,000 breeding bird pairs of 92 nesting species occupied the area. Population means for most species were equal to or greater than statewide means. Red-winged blackbird (Agelaius phoeniceus), yellow-headed blackbird (Xanthocephalus xanthocephalus), mourning dove (Zenaida macroura), and blue-winged teal (Anas discors) were the most numerous species, and made up 32.9% of the total population. Highest densities of breeding birds occurred in shelterbelts, semipermanent wetlands, and prairie thickets. Lowest densities occurred in upland native prairie and cropland. The study area was used by 49.6% of the total avifauna of the State, and 51% of the breeding avifauna of North Dakota probably nested in the study area. The diversity of birds using the area was unusual in that such a large number of species occupied a relatively small area. The close interspersion of many native habitats, several of which are unique in North Dakota, probably accounted for this diversity. Data on dates of occurrence, nesting records, and habitat use are presented for the 175 species recorded in 1980. Observations of significance by refuge staff are also provided.

  2. How likely are constituent quanta to initiate inflation?

    DOE PAGES

    Berezhiani, Lasha; Trodden, Mark

    2015-08-06

    In this study, we propose an intuitive framework for studying the problem of initial conditions in slow-roll inflation. In particular, we consider a universe at high, but sub-Planckian energy density and analyze the circumstances under which it is plausible for it to become dominated by inflated patches at late times, without appealing to the idea of self-reproduction. Our approach is based on defining a prior probability distribution for the constituent quanta of the pre-inflationary universe. To test the idea that inflation can begin under very generic circumstances, we make specific – yet quite general and well grounded – assumptions on the prior distribution. As a result, we are led to the conclusion that the probability for a given region to ignite inflation at sub-Planckian densities is extremely small. Furthermore, if one chooses to use the enormous volume factor that inflation yields as an appropriate measure, we find that the regions of the universe which started inflating at densities below the self-reproductive threshold nevertheless occupy a negligible physical volume in the present universe as compared to those domains that have never inflated.

  3. Mollusks of Manuel Antonio National Park, Pacific Costa Rica.

    PubMed

    Willis, S; Cortés, J

    2001-12-01

    The mollusks in Manuel Antonio National Park on the central section of the Pacific coast of Costa Rica were studied along thirty-six transects done perpendicular to the shore, and by random sampling of subtidal environments, beaches and mangrove forest. Seventy-four species of mollusks belonging to three classes and 40 families were found during this study in 1995: 63 gastropods, 9 bivalves and 2 chitons. Of these, 16 species were found only as empty shells (11) or inhabited by hermit crabs (5). Forty-eight species were found at only one locality. Half the species were found at one site, Puerto Escondido. The most diverse habitat was the low rocky intertidal zone. Nodilittorina modesta was present in 34 transects and Nerita scabricosta in 30. Nodilittorina aspera had the highest density of mollusks in the transects. Only four transects did not cluster into the four main groups. The species composition of one cluster of transects is associated with a boulder substrate, while another cluster is associated with a single site. Two clusters were not associated with any of the factors recorded. Some species were present in previous studies but absent in 1995, while others were absent in the previous studies but found in 1995. For example, Siphonaria gigas was present in 1995 in many transects with a relatively high density, but absent in 1962, probably due to human predation before the establishment of the park. Including this study, a total of 97 species of mollusks in three classes and 45 families have been reported from Manuel Antonio National Park. Sixty-nine species are new reports for the area: 53 gastropods, 14 bivalves and 2 chitons. There are probably more species of mollusks at Manuel Antonio National Park than the 97 reported here, because some areas have not been adequately sampled (e.g., deep environments) and many micro-mollusks could not be identified.

  4. The influence of the energy density and other clinical parameters on bond strength of Er:YAG-conditioned dentin compared to conventional dentin adhesion.

    PubMed

    Gisler, Gottfried; Gutknecht, Norbert

    2014-01-01

    The aim of this in vitro study was to optimise clinical parameters and the energy density of Er:YAG laser-conditioned dentin for class V fillings. Shear tests in three test series were conducted, with 24 freshly extracted human third molars as samples for each series. For every sample, two orofacial and two approximal dentin surfaces were prepared. The study design included different laser energies, a thin vs a thick bond layer, the influence of adhesives, as well as one-time vs two-time treatment. The best results with Er:YAG-conditioned dentin were obtained with fluences just above the ablation threshold (5.3 J/cm2) in combination with a self-etch adhesive, a thin bond layer, and two-time curing of bond and composite. Dentin conditioned this way reached an average bond strength of 23.32 MPa (SD 5.3) and 24.37 MPa (SD 6.06) for two independent test surfaces, showing no statistically significant difference from conventional dentin adhesion with two-time treatment, which averaged 24.93 MPa (SD 11.51). A significant reduction of bond strength with Er:YAG-conditioned dentin was obtained when using either a thick bond layer, twice the laser energy (fluence 10.6 J/cm2) or no dentin adhesive. The discussion showed clearly that in altered (sclerotic) dentin, e.g. for class V fillings of elderly patients, bond strengths in conventional dentin adhesion are consistently reduced owing to changes in the bond-giving dentin structures responsible for adhesion, whereas for Er:YAG-conditioned dentin the only way to obtain an optimal microretentive bond pattern is a laser fluence just above the ablation threshold of sclerotic dentin.

  5. Complete Defluorination of Perfluorinated Compounds by Hydrated Electrons Generated from 3-Indole-acetic-acid in Organomodified Montmorillonite

    PubMed Central

    Tian, Haoting; Gao, Juan; Li, Hui; Boyd, Stephen A.; Gu, Cheng

    2016-01-01

    Here we describe a unique process that achieves complete defluorination and decomposition of perfluorinated compounds (PFCs) which comprise one of the most recalcitrant and widely distributed classes of toxic pollutant chemicals found in natural environments. Photogenerated hydrated electrons derived from 3-indole-acetic-acid within an organomodified clay induce the reductive defluorination of co-sorbed PFCs. The process proceeds to completion within a few hours under mild reaction conditions. The organomontmorillonite clay promotes the formation of highly reactive hydrated electrons by stabilizing indole radical cations formed upon photolysis, and prevents their deactivation by reaction with protons or oxygen. In the constrained interlayer regions of the clay, hydrated electrons and co-sorbed PFCs are brought in close proximity thereby increasing the probability of reaction. This novel green chemistry provides the basis for in situ and ex situ technologies to treat one of the most troublesome, recalcitrant and ubiquitous classes of environmental contaminants, i.e., PFCs, utilizing innocuous reagents, naturally occurring materials and mild reaction conditions. PMID:27608658

  6. Wildfire risk in the wildland-urban interface: A simulation study in northwestern Wisconsin

    USGS Publications Warehouse

    Bar-Massada, A.; Radeloff, V.C.; Stewart, S.I.; Hawbaker, T.J.

    2009-01-01

    The rapid growth of housing in and near the wildland-urban interface (WUI) increases wildfire risk to lives and structures. To reduce fire risk, it is necessary to identify WUI housing areas that are more susceptible to wildfire. This is challenging, because wildfire patterns depend on fire behavior and spread, which in turn depend on ignition locations, weather conditions, the spatial arrangement of fuels, and topography. The goal of our study was to assess wildfire risk to a 60,000 ha WUI area in northwestern Wisconsin while accounting for all of these factors. We conducted 6000 simulations with two dynamic fire models: Fire Area Simulator (FARSITE) and Minimum Travel Time (MTT) in order to map the spatial pattern of burn probabilities. Simulations were run under normal and extreme weather conditions to assess the effect of weather on fire spread, burn probability, and risk to structures. The resulting burn probability maps were intersected with maps of structure locations and land cover types. The simulations revealed clear hotspots of wildfire activity and a large range of wildfire risk to structures in the study area. As expected, the extreme weather conditions yielded higher burn probabilities over the entire landscape, as well as to different land cover classes and individual structures. Moreover, the spatial pattern of risk was significantly different between extreme and normal weather conditions. The results highlight the fact that extreme weather conditions not only produce higher fire risk than normal weather conditions, but also change the fine-scale locations of high risk areas in the landscape, which is of great importance for fire management in WUI areas. In addition, the choice of weather data may limit the potential for comparisons of risk maps for different areas and for extrapolating risk maps to future scenarios where weather conditions are unknown. Our approach to modeling wildfire risk to structures can aid fire risk reduction management activities by identifying areas with elevated wildfire risk and those most vulnerable under extreme weather conditions. © 2009 Elsevier B.V.

  7. A hidden Markov model approach to neuron firing patterns.

    PubMed

    Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G

    1996-11-01

    Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing.
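
    A minimal sketch of this kind of analysis, assuming simulated data and the hmmlearn package: interspike intervals are generated from a hypothetical two-state process and a Gaussian hidden Markov model is fitted to the log-intervals by maximum likelihood. The two-state simulation and the log transform are illustrative choices, not the paper's locus coeruleus recordings.

    ```python
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(1)

    # Simulate ISIs from a hypothetical two-state neuron: a bursting state (short ISIs)
    # and a tonic state (long ISIs), with sticky state transitions.
    P = np.array([[0.95, 0.05], [0.10, 0.90]])
    states, isis, s = [], [], 0
    for _ in range(2000):
        s = rng.choice(2, p=P[s])
        isis.append(rng.gamma(shape=5.0, scale=0.004 if s == 0 else 0.030))
        states.append(s)

    # Fit a Gaussian HMM to log-ISIs (the log transform makes per-state pdfs roughly Gaussian).
    X = np.log(np.array(isis)).reshape(-1, 1)
    model = hmm.GaussianHMM(n_components=2, covariance_type="full", n_iter=200, random_state=0)
    model.fit(X)
    print("estimated transition matrix:\n", model.transmat_)
    print("estimated per-state mean log-ISI:", model.means_.ravel())

    # Decode states and check agreement with the simulation (state labels may be swapped).
    decoded = model.predict(X)
    agree = max(np.mean(decoded == states), np.mean(decoded != states))
    print("state recovery agreement:", round(float(agree), 3))
    # Model selection over the number of states could compare likelihoods/AIC across n_components.
    ```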

  8. Ecological distribution and crude density of breeding birds on prairie wetlands

    USGS Publications Warehouse

    Kantrud, H.A.; Stewart, R.E.

    1984-01-01

    Breeding populations of 28 species of wetland-dwelling birds other than waterfowl (Anatidae) were censused on 1,321 wetlands lying within the prairie pothole region of North Dakota. Ecological distribution and two crude measures of relative density were calculated for the 22 commonest species using eight wetland classes. Semipermanent wetlands supported nearly two-thirds of the population and were used by all 22 species, whereas seasonal wetlands contained about one-third of the population and were used by 20 species. Semipermanent, fen, and temporary wetlands contained the highest bird densities on the basis of wetland area; on the basis of wetland unit, densities were highest on semipermanent, permanent, alkali, and fen wetlands. The highest ranking of semipermanent wetlands by all three measures of use was probably because these wetlands, as well as being relatively numerous and large, were vegetatively diverse. The fairly large proportion of the bird population supported by seasonal wetlands was a result of wetland abundance and moderate vegetative diversity. Increased vegetative diversity results from the development of characteristic zones of hydrophytes at sites where water persists longer during the growing season. Frequent cultivation of prairie wetlands results in the replacement of tall, robust perennials by bare soil or stands of short, weak-stemmed annuals that are likely unattractive to nesting birds.

  9. Initial Results from SQUID Sensor: Analysis and Modeling for the ELF/VLF Atmospheric Noise.

    PubMed

    Hao, Huan; Wang, Huali; Chen, Liang; Wu, Jun; Qiu, Longqing; Rong, Liangliang

    2017-02-14

    In this paper, the amplitude probability density (APD) of wideband extremely low frequency (ELF) and very low frequency (VLF) atmospheric noise is studied. The electromagnetic signals from the atmosphere, referred to herein as atmospheric noise, were recorded by a mobile low-temperature superconducting quantum interference device (SQUID) receiver under magnetically unshielded conditions. In order to eliminate the adverse effects of geomagnetic activity and the power line, the measured field data were first preprocessed to suppress baseline wandering and harmonics using a symmetric wavelet transform and least-squares methods. Statistical analysis was then performed for the atmospheric noise on different time and frequency scales. Finally, the wideband ELF/VLF atmospheric noise was analyzed and modeled separately. Experimental results show that a Gaussian model is appropriate to describe the preprocessed ELF atmospheric noise after a hole-puncher operator, while for VLF atmospheric noise a symmetric α-stable (SαS) distribution is more accurate in fitting the heavy tail of the envelope probability density function (pdf).

  10. Initial Results from SQUID Sensor: Analysis and Modeling for the ELF/VLF Atmospheric Noise

    PubMed Central

    Hao, Huan; Wang, Huali; Chen, Liang; Wu, Jun; Qiu, Longqing; Rong, Liangliang

    2017-01-01

    In this paper, the amplitude probability density (APD) of wideband extremely low frequency (ELF) and very low frequency (VLF) atmospheric noise is studied. The electromagnetic signals from the atmosphere, referred to herein as atmospheric noise, were recorded by a mobile low-temperature superconducting quantum interference device (SQUID) receiver under magnetically unshielded conditions. In order to eliminate the adverse effects of geomagnetic activity and the power line, the measured field data were first preprocessed to suppress baseline wandering and harmonics using a symmetric wavelet transform and least-squares methods. Statistical analysis was then performed for the atmospheric noise on different time and frequency scales. Finally, the wideband ELF/VLF atmospheric noise was analyzed and modeled separately. Experimental results show that a Gaussian model is appropriate to describe the preprocessed ELF atmospheric noise after a hole-puncher operator, while for VLF atmospheric noise a symmetric α-stable (SαS) distribution is more accurate in fitting the heavy tail of the envelope probability density function (pdf). PMID:28216590

  11. Probability distribution of haplotype frequencies under the two-locus Wright-Fisher model by diffusion approximation.

    PubMed

    Boitard, Simon; Loisel, Patrice

    2007-05-01

    The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetic forces such as recombination, selection, and random drift, is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process, and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and to models including selection or mutations. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also prove that it is far less time consuming than other methods such as Monte Carlo simulations.

  12. The force distribution probability function for simple fluids by density functional theory.

    PubMed

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and the probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft-sphere fluids at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low) for the two potentials, although with a smaller value of the constant A than that predicted by the DFT theory.

  13. Postfragmentation density function for bacterial aggregates in laminar flow

    PubMed Central

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John

    2014-01-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205

  14. Spectral decompositions of multiple time series: a Bayesian non-parametric approach.

    PubMed

    Macaro, Christian; Prado, Raquel

    2014-01-01

    We consider spectral decompositions of multiple time series that arise in studies where the interest lies in assessing the influence of two or more factors. We write the spectral density of each time series as a sum of the spectral densities associated to the different levels of the factors. We then use Whittle's approximation to the likelihood function and follow a Bayesian non-parametric approach to obtain posterior inference on the spectral densities based on Bernstein-Dirichlet prior distributions. The prior is strategically important as it carries identifiability conditions for the models and allows us to quantify our degree of confidence in such conditions. A Markov chain Monte Carlo (MCMC) algorithm for posterior inference within this class of frequency-domain models is presented. We illustrate the approach by analyzing simulated and real data via spectral one-way and two-way models. In particular, we present an analysis of functional magnetic resonance imaging (fMRI) brain responses measured in individuals who participated in a designed experiment to study pain perception in humans.

  15. Bayesian isotonic density regression

    PubMed Central

    Wang, Lianming; Dunson, David B.

    2011-01-01

    Density regression models allow the conditional distribution of the response given predictors to change flexibly over the predictor space. Such models are much more flexible than nonparametric mean regression models with nonparametric residual distributions, and are well supported in many applications. A rich variety of Bayesian methods have been proposed for density regression, but it is not clear whether such priors have full support so that any true data-generating model can be accurately approximated. This article develops a new class of density regression models that incorporate stochastic-ordering constraints, which are natural when a response tends to increase or decrease monotonically with a predictor. Theory is developed showing large support. Methods are developed for hypothesis testing, with posterior computation relying on a simple Gibbs sampler. Frequentist properties are illustrated in a simulation study, and an epidemiology application is considered. PMID:22822259

  16. Comparative study of probability distribution distances to define a metric for the stability of multi-source biomedical research data.

    PubMed

    Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan Miguel

    2013-01-01

    Research biobanks are often composed of data from multiple sources. In some cases, these different subsets of data may present dissimilarities among their probability density functions (PDFs) due to spatial shifts. This may lead to wrong hypotheses when treating the data as a whole, and the overall quality of the data is diminished. With the purpose of developing a generic and comparable metric to assess the stability of multi-source datasets, we have studied the applicability and behaviour of several PDF distances over shifts under different conditions (such as uni- and multivariate data, different types of variable, and multi-modality) which may appear in real biomedical data. From the studied distances, we found the information-theoretic distances and the Earth Mover's Distance to be the most practical for most conditions. We discuss the properties and usefulness of each distance according to the possible requirements of a general stability metric.
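
    As a concrete illustration of the distances discussed above, the sketch below compares two hypothetical single-source subsets with the Earth Mover's Distance and the information-theoretic Jensen-Shannon distance; the simulated shift and the histogram binning are assumptions, not the paper's biobank data.

    ```python
    import numpy as np
    from scipy.stats import wasserstein_distance
    from scipy.spatial.distance import jensenshannon

    rng = np.random.default_rng(2)

    # Two hypothetical single-source subsets of one biobank variable, with a shift between them.
    a = rng.normal(loc=0.0, scale=1.0, size=3000)
    b = rng.normal(loc=0.4, scale=1.1, size=3000)

    # Earth Mover's Distance works directly on the samples.
    emd = wasserstein_distance(a, b)

    # Information-theoretic distance (Jensen-Shannon) on histogram estimates of the PDFs.
    bins = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), 50)
    pa, _ = np.histogram(a, bins=bins, density=True)
    pb, _ = np.histogram(b, bins=bins, density=True)
    jsd = jensenshannon(pa + 1e-12, pb + 1e-12)   # small offset avoids empty-bin divisions

    print(f"EMD = {emd:.3f}, Jensen-Shannon distance = {jsd:.3f}")
    ```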

  17. Laser-induced damage of intrinsic and extrinsic defects by picosecond pulses on multilayer dielectric coatings for petawatt-class lasers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Negres, Raluca A.; Carr, Christopher W.; Laurence, Ted A.

    2016-08-01

    Here, we describe a damage testing system and its use in investigating laser-induced optical damage initiated by both intrinsic and extrinsic precursors on multilayer dielectric coatings suitable for use in high-energy, large-aperture petawatt-class lasers. We employ small-area damage test methodologies to evaluate the intrinsic damage resistance of various coatings as a function of deposition methods and coating materials under simulated use conditions. In addition, we demonstrate that damage initiation by raster scanning at lower fluences and growth threshold testing are required to probe the density of extrinsic defects, which will limit large-aperture optics performance.

  18. Evaluation of Bone Thickness and Density in the Lower Incisors' Region in Adults with Different Types of Skeletal Malocclusion using Cone-beam Computed Tomography.

    PubMed

    Al-Masri, Maram M N; Ajaj, Mowaffak A; Hajeer, Mohammad Y; Al-Eed, Muataz S

    2015-08-01

    To evaluate the bone thickness and density in the lower incisors' region in orthodontically untreated adults, and to examine any possible relationship between thickness and density in different skeletal patterns using cone-beam computed tomography (CBCT). The CBCT records of 48 patients were obtained from the archive of the orthodontic department, comprising three groups of malocclusion (class I, II and III) with 16 patients in each group. Using OnDemand 3D software, sagittal sections were made for each lower incisor. Thicknesses and densities were measured at three levels of the root (cervical, middle and apical regions) from the labial and lingual sides. Accuracy and reliability tests were undertaken to assess the intraobserver reliability and to detect systematic error. Pearson correlation coefficients were calculated and one-way analysis of variance (ANOVA) was employed to detect significant differences among the three groups of skeletal malocclusion. Apical buccal thickness (ABT) in the four incisors was higher in class II and I patients than in class III patients (p < 0.05). There were significant differences between buccal and lingual surfaces at the apical and middle regions only in class II and III patients. Statistical differences were found between class I and II patients for the cervical buccal density (CBD) and between class II and III patients for apical buccal density (ABD). The relationship between bone thickness and density ranged from strong at the cervical regions to weak at the apical regions. Sagittal skeletal patterns affect apical bone thickness and density at the buccal surfaces of the four lower incisors' roots. Alveolar bone thickness and density increased from the cervical to the apical regions.

  19. A well-behaved class of charged analogue of Durgapal solution

    NASA Astrophysics Data System (ADS)

    Mehta, R. N.; Pant, Neeraj; Mahto, Dipo; Jha, J. S.

    2013-02-01

    We present a well-behaved class of charged analogues of the M.C. Durgapal (J. Phys. A, Math. Gen. 15:2637, 1982) solution. This solution describes charged fluid balls with positively finite central pressure and positively finite central density; their ratio is less than one and the causality condition is obeyed at the centre. The outmarch of pressure, density, pressure-density ratio and the adiabatic speed of sound is monotonically decreasing, while the electric intensity is monotonically increasing in nature. This solution gives a wide range of parameters for every positive value of n for which the solution is well behaved and hence suitable for modeling of superdense stars. In view of the well-behaved nature of this solution, one new class of solutions is studied extensively. Moreover, this class of solutions gives a wide range of the constant K (0 ≤ K ≤ 2.2) for which the solution is well behaved and hence suitable for modeling of superdense stars such as strange quark stars, neutron stars and pulsars. For this class of solutions the mass of a star is maximized with all degrees of suitability, compatible with quark stars, neutron stars and pulsars. Assuming the surface density ρ_b = 2×10^14 g/cm^3 (as in Brecher and Caporaso, Nature 259:377, 1976), for K = 0 with X = 0.235 the resulting well-behaved model has mass M = 4.03 M⊙, radius r_b = 19.53 km and moment of inertia I = 1.213×10^46 g cm^2; for K = 1.5 with X = 0.235 the model has mass M = 4.43 M⊙, radius r_b = 18.04 km and moment of inertia I = 1.136×10^46 g cm^2; and for K = 2.2 with X = 0.235 the model has mass M = 4.56 M⊙, radius r_b = 17.30 km and moment of inertia I = 1.076×10^46 g cm^2. These values of mass and moment of inertia are consistent with the Crab pulsar.

  20. Applications of remote sensing, volume 1

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. ECHO successfully exploits the redundancy of states characteristic of sampled imagery of ground scenes to achieve better classification accuracy, reduce the number of classifications required, and reduce the variability of classification results. The information required to produce ECHO classifications is cell size, cell homogeneity, cell-to-field annexation parameters, input data, and a class-conditional marginal density statistics deck.

  1. Stereo photo series for quantifying natural fuels Volume X: sagebrush with grass and ponderosa pine-juniper types in central Montana.

    Treesearch

    Roger D. Ottmar; Robert E. Vihnanek; Clinton S. Wright

    2007-01-01

    Two series of single and stereo photographs display a range of natural conditions and fuel loadings in sagebrush with grass and ponderosa pine-juniper types in central Montana. Each group of photos includes inventory information summarizing vegetation composition, structure, and loading; woody material loading and density by size class; forest floor depth and loading;...

  2. Computer models of social processes: the case of migration.

    PubMed

    Beshers, J M

    1967-06-01

    The demographic model is a program for representing births, deaths, migration, and social mobility as social processes in a non-stationary stochastic process (Markovian). Transition probabilities for each age group are stored and then retrieved at the next appearance of that age cohort. In this way new transition probabilities can be calculated as a function of the old transition probabilities and of two successive distribution vectors. Transition probabilities can be calculated to represent effects of the whole age-by-state distribution at any given time period, too. Such effects as saturation or queuing may be represented by a market mechanism; for example, migration between metropolitan areas can be represented as depending upon job supplies and labor markets. Within metropolitan areas, migration can be represented as invasion and succession processes with tipping points (acceleration curves), and the market device has been extended to represent this phenomenon. Thus, the demographic model makes possible the representation of alternative classes of models of demographic processes. With each class of model one can deduce implied time series (varying parameters within the class) and the output of the several classes can be compared to each other and to outside criteria, such as empirical time series.

  3. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function

    NASA Astrophysics Data System (ADS)

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2017-02-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. Although they overestimated seasonal mean concentration, they did not simulate all of the peak concentrations. This issue would be resolved by adding more variables that affect the prevalence and internal maturity of pollens.

  4. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function.

    PubMed

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2017-02-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. Although they overestimated seasonal mean concentration, they did not simulate all of the peak concentrations. This issue would be resolved by adding more variables that affect the prevalence and internal maturity of pollens.
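
    A minimal sketch of the first stage of the scheme described in the two records above, assuming hypothetical parameters: a Weibull probability density spreads a seasonal pollen total over the days of the season to give a daily maximum potential, which a weather-driven regression would then scale (the regression itself is only indicated in a comment).

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    # Day-of-season axis; the Weibull parameters and season total below are illustrative,
    # not the fitted Korean oak or Japanese hop values.
    days = np.arange(0, 60)
    shape, scale, season_total = 2.2, 25.0, 8000.0

    # Maximum potential daily pollen concentration: the seasonal total spread over the
    # Weibull probability density as a function of day within the season.
    potential = season_total * weibull_min.pdf(days, c=shape, scale=scale)

    # A daily forecast would then scale this potential with weather-driven regression terms,
    # e.g. forecast = potential * f(temperature, humidity, wind); f is not reproduced here.
    print("peak potential day:", int(days[np.argmax(potential)]))
    print("peak potential concentration:", round(float(potential.max()), 1))
    ```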

  5. Dental panoramic image analysis for enhancement biomarker of mandibular condyle for osteoporosis early detection

    NASA Astrophysics Data System (ADS)

    Suprijanto; Azhari; Juliastuti, E.; Septyvergy, A.; Setyagar, N. P. P.

    2016-03-01

    Osteoporosis is a degenerative disease characterized by low Bone Mineral Density (BMD). Currently, a BMD level is determined by Dual Energy X-ray Absorptiometry (DXA) at the lumbar vertebrae and femur. Previous studies reported that the dental panoramic radiography image has potential information for early osteoporosis detection. This work reports an alternative scheme that consists of the determination of the Region of Interest (ROI) of the mandibular condyle in the image as a biomarker, feature extraction from the ROI, and classification of bone condition. The minimum intensity value in the cavity area is used to compensate an offset on the ROI. For feature extraction, the fraction of intensity values in the ROI that represent high bone density and the total ROI area are used. The classification was evaluated from the ability of each feature and their combinations to detect BMD in 2 classes (normal and abnormal), using the artificial neural network method. The evaluation used 105 panoramic images from menopausal women, consisting of 36 training data and 69 test data divided into 2 classes. The 2-class classification obtained an 88.0% accuracy rate and an 88.0% sensitivity rate.

  6. Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments

    NASA Astrophysics Data System (ADS)

    Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry

    2012-05-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to posterior probabilities of the models generating the forecasts and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described with a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
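
    For orientation, a small sketch of the default (Gaussian) BMA predictive mixture for a single forecast time; the member forecasts, weights and spreads are invented, and the paper's actual contribution, replacing each member's Gaussian conditional pdf with a particle-filter/Gaussian-mixture estimate, is not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Hypothetical ensemble of three (bias-corrected) discharge forecasts for one time step,
    # with BMA weights from a training period and per-member predictive spreads.
    forecasts = np.array([12.0, 14.5, 13.2])   # member forecasts (e.g. m3/s)
    weights   = np.array([0.5, 0.2, 0.3])      # posterior model probabilities (sum to 1)
    sigmas    = np.array([1.0, 1.5, 1.2])      # per-member conditional-pdf spreads

    def bma_pdf(y):
        """BMA predictive density: weighted average of member conditional pdfs (Gaussian here)."""
        return np.sum(weights * norm.pdf(y, loc=forecasts, scale=sigmas))

    y_grid = np.linspace(8, 20, 601)
    pdf = np.array([bma_pdf(y) for y in y_grid])
    cdf = np.cumsum(pdf) * (y_grid[1] - y_grid[0])
    lo, hi = y_grid[np.searchsorted(cdf, 0.05)], y_grid[np.searchsorted(cdf, 0.95)]
    print(f"BMA mean = {np.sum(weights * forecasts):.2f}, 90% interval = [{lo:.2f}, {hi:.2f}]")
    ```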

  7. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupšys, P.

    A system of stochastic differential equations (SDE) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
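
    A hedged sketch of the bivariate normal copula construction mentioned above: the joint density of diameter and height is the Gaussian copula density evaluated at the marginal CDFs, multiplied by the marginal pdfs. The lognormal marginals and the correlation value are placeholders, not the fitted Scots pine model.

    ```python
    import numpy as np
    from scipy.stats import norm, lognorm, multivariate_normal

    rho = 0.85                                   # assumed copula correlation
    cop_cov = np.array([[1.0, rho], [rho, 1.0]])

    def copula_density(u, v):
        """Bivariate Gaussian copula density c(u, v) on the unit square."""
        z = np.array([norm.ppf(u), norm.ppf(v)])
        return multivariate_normal.pdf(z, mean=[0.0, 0.0], cov=cop_cov) / (norm.pdf(z[0]) * norm.pdf(z[1]))

    def joint_density(d, h, d_marg, h_marg):
        """Joint pdf of (diameter, height) = copula density times the product of marginal pdfs."""
        return copula_density(d_marg.cdf(d), h_marg.cdf(h)) * d_marg.pdf(d) * h_marg.pdf(h)

    d_marg = lognorm(s=0.4, scale=25.0)          # diameter at breast height marginal, cm (assumed)
    h_marg = lognorm(s=0.2, scale=22.0)          # total height marginal, m (assumed)
    print(joint_density(25.0, 22.0, d_marg, h_marg))
    ```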

  8. Fidelity and breeding probability related to population density and individual quality in black brent geese Branta bernicla nigricans

    USGS Publications Warehouse

    Sedinger, J.S.; Chelgren, N.D.; Ward, D.H.; Lindberg, M.S.

    2008-01-01

    1. Patterns of temporary emigration (associated with non-breeding) are important components of variation in individual quality. Permanent emigration from the natal area has important implications for both individual fitness and local population dynamics. 2. We estimated both permanent and temporary emigration of black brent geese (Branta bernicla nigricans Lawrence) from the Tutakoke River colony, using observations of marked brent geese on breeding and wintering areas, and recoveries of ringed individuals by hunters. We used the likelihood developed by Lindberg, Kendall, Hines & Anderson 2001 (Combining band recovery data and Pollock's robust design to model temporary and permanent emigration. Biometrics, 57, 273-281) to assess hypotheses and estimate parameters. 3. Temporary emigration (the converse of breeding) varied among age classes up to age 5, and differed between individuals that bred in the previous years vs. those that did not. Consistent with the hypothesis of variation in individual quality, individuals with a higher probability of breeding in one year also had a higher probability of breeding the next year. 4. Natal fidelity of females ranged from 0.70 ± 0.07 to 0.96 ± 0.18 and averaged 0.83. In contrast to Lindberg et al. (1998), we did not detect a relationship between fidelity and local population density. Natal fidelity was negatively correlated with first-year survival, suggesting that competition among individuals of the same age for breeding territories influenced dispersal. Once females nested at the Tutakoke River colony, breeding fidelity was 1.0. 5. Our analyses show substantial variation in individual quality associated with fitness, which other analyses suggest is strongly influenced by early environment. Our analyses also suggest substantial interchange among breeding colonies of brent geese, as first shown by Lindberg et al. (1998).

  9. A Simple Probabilistic Combat Model

    DTIC Science & Technology

    2016-06-13

    The Lanchester combat model is a simple way to assess the effects of quantity and quality ... case model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons ... since the initial condition is very close to the break-even line. What is more interesting is that the probability density tends to concentrate at

  10. Site-to-Source Finite Fault Distance Probability Distribution in Probabilistic Seismic Hazard and the Relationship Between Minimum Distances

    NASA Astrophysics Data System (ADS)

    Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.

    2017-12-01

    We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while in the case of areas the geometrical probability of a fault, based on the strike, dip, and fault segment vertices, is obtained using the projection of spheres onto a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r in an effective area using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce a finite fault distance metric: the distance at which the maximum stress release occurs within the fault plane and generates the peak ground motion. The appropriate ground motion prediction equations (GMPEs) for PSHA can then be applied at this distance. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model with the centroid fixed at the geometrical mean is discussed; in this model hazard is reduced at the edges because the effective size is reduced. Nowadays there is a trend of using extended-source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models, separating geometrical and propagation effects.
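
    The paper obtains the finite-fault distance distribution geometrically; the sketch below is only a Monte Carlo cross-check of the same quantity for a single rectangular fault, sampling candidate points of maximum stress release uniformly over the plane and histogramming their distance to a site. The fault geometry and site location are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Illustrative rectangular fault: along-strike length L, down-dip width W, dip angle,
    # top edge at depth z_top; site on the surface. Units: km.
    L, W, dip, z_top = 40.0, 15.0, np.deg2rad(45.0), 2.0
    site = np.array([10.0, 25.0, 0.0])

    # Sample uniformly over the fault plane and build the empirical distance distribution.
    u = rng.uniform(0.0, L, 200_000)            # along-strike coordinate (x)
    w = rng.uniform(0.0, W, 200_000)            # down-dip coordinate
    pts = np.column_stack([u,
                           w * np.cos(dip),     # horizontal projection of down-dip offset (y)
                           z_top + w * np.sin(dip)])
    dist = np.linalg.norm(pts - site, axis=1)

    # Empirical PDF (histogram) of the finite-fault distance plus two summary values.
    hist, edges = np.histogram(dist, bins=60, density=True)
    print("first PDF bins:", np.round(hist[:5], 4))
    print("R_min =", round(dist.min(), 2), "km;  median =", round(np.median(dist), 2), "km")
    ```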

  11. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.

  12. Conditional net survival: Relevant prognostic information for colorectal cancer survivors. A French population-based study.

    PubMed

    Drouillard, Antoine; Bouvier, Anne-Marie; Rollot, Fabien; Faivre, Jean; Jooste, Valérie; Lepage, Côme

    2015-07-01

    Traditionally, survival estimates have been reported as survival from the time of diagnosis. A patient's probability of survival changes according to time elapsed since the diagnosis and this is known as conditional survival. The aim was to estimate 5-year net conditional survival in patients with colorectal cancer in a well-defined French population at yearly intervals up to 5 years. Our study included 18,300 colorectal cancers diagnosed between 1976 and 2008 and registered in the population-based digestive cancer registry of Burgundy (France). We calculated conditional 5-year net survival, using the Pohar Perme estimator, for every additional year survived after diagnosis from 1 to 5 years. The initial 5-year net survival estimates varied between 89% for stage I and 9% for advanced stage cancer. The corresponding 5-year net survival for patients alive after 5 years was 95% and 75%. Stage II and III patients who survived 5 years had a similar probability of surviving 5 more years, respectively 87% and 84%. For survivors after the first year following diagnosis, five-year conditional net survival was similar regardless of age class and period of diagnosis. For colorectal cancer survivors, conditional net survival provides relevant and complementary prognostic information for patients and clinicians. Copyright © 2015 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
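
    A minimal numerical illustration of the conditional-survival arithmetic used above: if S(t) is the net survival function, the 5-year net survival conditional on having already survived t years is S(t+5)/S(t). The survival values in the sketch are made up for illustration and are not the Burgundy registry estimates.

    ```python
    # Illustrative net survival curve S(t) at yearly intervals (assumed values).
    surv = {0: 1.00, 1: 0.78, 2: 0.66, 3: 0.58, 4: 0.53, 5: 0.50,
            6: 0.47, 7: 0.45, 8: 0.44, 9: 0.43, 10: 0.42}

    def conditional_5yr(t):
        """5-year net survival conditional on having survived t years: S(t+5)/S(t)."""
        return surv[t + 5] / surv[t]

    for t in range(0, 6):
        print(f"5-year net survival conditional on surviving {t} year(s): {conditional_5yr(t):.2f}")
    ```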

  13. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhang; Chen, Wei

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.

  14. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE PAGES

    Jiang, Zhang; Chen, Wei

    2017-11-03

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
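
    A small sketch, under stated assumptions, of how a skew-symmetric (here skew-normal) cumulative distribution can parameterize asymmetric interfaces in an effective-density-style profile and then be discretized into thin constant-density slices; the layer densities, interface positions, widths and skewnesses are illustrative, and the Parratt/onion-model scattering step is not reproduced.

    ```python
    import numpy as np
    from scipy.stats import skewnorm

    # One film on a substrate, described by a continuous density profile in which each
    # interface follows a skewed cumulative distribution (all parameters assumed).
    z = np.arange(0.0, 200.0, 0.5)              # depth grid, Angstrom; 0.5 A slice thickness
    rho_layers = [0.0, 2.1, 4.7]                # ambient, layer, substrate densities (arb. units)
    interfaces = [(60.0, 4.0, 6.0),             # (position, width, skewness a) of interface 1
                  (140.0, 8.0, -4.0)]           # negative a skews penetration the other way

    profile = np.full_like(z, rho_layers[0])
    for (z0, sigma, a), rho_lo, rho_hi in zip(interfaces, rho_layers[:-1], rho_layers[1:]):
        step = skewnorm.cdf(z, a, loc=z0, scale=sigma)   # asymmetric 0 -> 1 transition
        profile = profile + (rho_hi - rho_lo) * step

    # 'profile' is now a continuous density profile discretized into thin constant-density
    # slices, ready to feed a Parratt recursion or an onion-model scattering calculation.
    print(profile[:5], profile[-5:])
    ```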

  15. Testing critical point universality along the λ-line

    NASA Astrophysics Data System (ADS)

    Nissen, J. A.; Swanson, D. R.; Geng, Z. K.; Dohm, V.; Israelsson, U. E.; DiPirro, M. J.; Lipa, J. A.

    1998-02-01

    We are currently building a prototype for a new test of critical-point universality at the lambda transition in 4He, which is to be performed in microgravity conditions. The flight experiment will measure the second-sound velocity as a function of temperature at pressures from 1 to 30 bars in the region close to the lambda line. The critical exponents and other parameters characterizing the behavior of the superfluid density will be determined from the measurements. The microgravity measurements will be quite extensive, probably taking 30 days to complete. In addition to the superfluid density, some measurements of the specific heat will be made using the low-g simulator at the Jet Propulsion Laboratory. The results of the superfluid density and specific heat measurements will be used to compare the asymptotic exponents and other universal aspects of the superfluid density with the theoretical predictions currently established by renormalization group techniques.

  16. Study of discharge cleaning process in JIPP T-2 Torus by residual gas analyzer

    NASA Astrophysics Data System (ADS)

    Noda, N.; Hirokura, S.; Taniguchi, Y.; Tanahashi, S.

    1982-12-01

    During discharge cleaning, the decay time of the water vapor pressure changes when the pressure reaches a certain level. The long decay time observed in the later phase can be interpreted as the result of a slow deoxidization rate of chromium oxide, which may dominate the cleaning process in this phase. Optimization of the plasma density for the cleaning is discussed by comparing the experimental results on the density dependence of the water vapor pressure with a result based on a zero-dimensional particle-balance calculation. One of the essential points for effective cleaning is raising the electron density of the plasma high enough that the dissociation loss rate of H2O is as large as the sticking loss rate. A density as high as 10^11 cm^-3 is required for a clean surface condition, where the sticking probability is presumed to be around 0.5.

  17. Intermittent turbulence and turbulent structures in LAPD and ET

    NASA Astrophysics Data System (ADS)

    Carter, T. A.; Pace, D. C.; White, A. E.; Gauvreau, J.-L.; Gourdain, P.-A.; Schmitz, L.; Taylor, R. J.

    2006-12-01

    Strongly intermittent turbulence is observed in the shadow of a limiter in the Large Plasma Device (LAPD) and in both the inboard and outboard scrape-off-layer (SOL) in the Electric Tokamak (ET) at UCLA. In LAPD, the amplitude probability distribution function (PDF) of the turbulence is strongly skewed, with density depletion events (or "holes") dominant in the high density region and density enhancement events (or "blobs") dominant in the low density region. Two-dimensional cross-conditional averaging shows that the blobs are detached, outward-propagating filamentary structures with a clear dipolar potential while the holes appear to be part of a more extended turbulent structure. A statistical study of the blobs reveals a typical size of ten times the ion sound gyroradius and a typical velocity of one tenth the sound speed. In ET, intermittent turbulence is observed on both the inboard and outboard midplane.

  18. SUGGEL: A Program Suggesting the Orbital Angular Momentum of a Neutron Resonance from the Magnitude of its Neutron Width

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oh, S.Y.

    2001-02-02

    The SUGGEL computer code has been developed to suggest a value for the orbital angular momentum of a neutron resonance that is consistent with the magnitude of its neutron width. The suggestion is based on the probability that a resonance having a certain value of gΓn is an l-wave resonance. The probability is calculated by using Bayes' theorem on the conditional probability. The probability density functions (pdfs) of gΓn for up to d-wave (l=2) have been derived from the χ² distribution of Porter and Thomas. The pdfs take two possible channel spins into account. This code is a tool which evaluators will use to construct resonance parameters and help to assign resonance spin. The use of this tool is expected to reduce time and effort in the evaluation procedure, since the number of repeated runs of the fitting code (e.g., SAMMY) may be reduced.
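
    A heavily simplified sketch of the Bayes step described above, assuming Porter-Thomas (χ² with one degree of freedom) densities of gΓn scaled by an l-dependent average width; the average widths and priors are invented, and the real code additionally accounts for penetrabilities and the two possible channel spins.

    ```python
    from scipy.stats import chi2

    # Assumed average widths <g*Gamma_n> (meV) and prior probabilities for s-, p-, d-waves.
    avg_width = {0: 50.0, 1: 5.0, 2: 0.5}
    prior     = {0: 0.2, 1: 0.4, 2: 0.4}

    def l_posterior(g_gamma_n):
        """P(l | g*Gamma_n) ∝ P(l) * pdf_l(g*Gamma_n), pdf_l = scaled chi-squared (1 dof)."""
        dens = {l: chi2.pdf(g_gamma_n / avg_width[l], df=1) / avg_width[l] for l in avg_width}
        post = {l: prior[l] * dens[l] for l in avg_width}
        total = sum(post.values())
        return {l: p / total for l, p in post.items()}

    print(l_posterior(100.0))   # a large width  -> s-wave favoured
    print(l_posterior(0.2))     # a small width  -> higher-l more plausible
    ```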

  19. Optimal nonlinear filtering using the finite-volume method

    NASA Astrophysics Data System (ADS)

    Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.

    2018-01-01

    Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, which can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.
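
    To make the density-propagation step concrete, here is a minimal sketch of a conservative upwind finite-volume update for the pendulum's continuity equation; the grid, the g/L value, the initial blob, the first-order scheme and the periodic wrap in omega are illustrative assumptions, not the authors' implementation, which also includes the measurement-update stage of the filter.

```python
import numpy as np

# Minimal sketch (not the authors' code): conservative upwind finite-volume
# update of d(p)/dt + div(v p) = 0 for a frictionless pendulum, with
# phase-space velocity v = (omega, -(g/L) sin(theta)).

g_over_L = 9.81
n_th, n_om = 128, 128
theta = np.linspace(-np.pi, np.pi, n_th, endpoint=False)
omega = np.linspace(-6.0, 6.0, n_om, endpoint=False)
d_th, d_om = theta[1] - theta[0], omega[1] - omega[0]
TH, OM = np.meshgrid(theta, omega, indexing="ij")

v_th = OM                       # d(theta)/dt
v_om = -g_over_L * np.sin(TH)   # d(omega)/dt

# Initial density: a Gaussian blob, normalised so the cell sums integrate to 1.
p = np.exp(-((TH - 1.0) ** 2 + OM ** 2) / 0.1)
p /= p.sum() * d_th * d_om

def step(p, dt):
    """One conservative upwind update; total probability mass is preserved."""
    # Flux through each cell's "right" face, using the upwind cell value;
    # periodic wrap in both directions (density at the omega edges stays negligible).
    f_th = np.where(v_th > 0, v_th * p, v_th * np.roll(p, -1, axis=0))
    f_om = np.where(v_om > 0, v_om * p, v_om * np.roll(p, -1, axis=1))
    div = (f_th - np.roll(f_th, 1, axis=0)) / d_th \
        + (f_om - np.roll(f_om, 1, axis=1)) / d_om
    return p - dt * div

# CFL-type restriction keeps the discretised density non-negative.
dt = 0.4 * min(d_th / np.abs(v_th).max(), d_om / np.abs(v_om).max())
for _ in range(200):
    p = step(p, dt)

print("total probability:", p.sum() * d_th * d_om)   # stays ~1
```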

  20. Profit intensity and cases of non-compliance with the law of demand/supply

    NASA Astrophysics Data System (ADS)

    Makowski, Marcin; Piotrowski, Edward W.; Sładkowski, Jan; Syska, Jacek

    2017-05-01

    We consider properties of the measurement intensity ρ of a random variable whose probability density function, represented by the corresponding Wigner function, attains negative values on part of the domain. We consider a simple economic interpretation of this problem. The model is used to demonstrate the applicability of the method to the analysis of negative probability on markets with anomalies in the law of supply and demand (e.g. Giffen's goods). It turns out that the new conditions to optimize the intensity ρ require a new strategy. We propose such a strategy (the so-called à rebours strategy) based on the fixed point method and explore its effectiveness.

  1. Extracting the distribution of laser damage precursors on fused silica surfaces for 351 nm, 3 ns laser pulses at high fluences (20-150 J/cm2).

    PubMed

    Laurence, Ted A; Bude, Jeff D; Ly, Sonny; Shen, Nan; Feit, Michael D

    2012-05-07

    Surface laser damage limits the lifetime of optics for systems guiding high fluence pulses, particularly damage in silica optics used for inertial confinement fusion-class lasers (nanosecond-scale high energy pulses at 355 nm/3.5 eV). The density of damage precursors at low fluence has been measured using large beams (1-3 cm); higher fluences cannot be measured easily since the high density of resulting damage initiation sites results in clustering. We developed automated experiments and analysis that allow us to damage test thousands of sites with small beams (10-30 µm), and automatically image the test sites to determine if laser damage occurred. We developed an analysis method that provides a rigorous connection between these small beam damage test results of damage probability versus laser pulse energy and the large beam damage results of damage precursor densities versus fluence. We find that for uncoated and coated fused silica samples, the distribution of precursors nearly flattens at very high fluences, up to 150 J/cm2, providing important constraints on the physical distribution and nature of these precursors.
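
    A generic way to relate small-beam damage probabilities to an areal precursor density is through Poisson statistics: if precursors initiate independently, a shot with local fluence profile φ(x, y) produces at least one damage site with probability

    $$ P_{\mathrm{damage}} \;=\; 1 - \exp\!\left(-\int \rho\big(\phi(x,y)\big)\,dx\,dy\right), $$

    where ρ(φ) is the cumulative areal density of precursors that initiate at or below fluence φ. This is offered only as the standard Poisson link between the two kinds of measurement; the abstract does not spell out the exact analysis the authors use.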

  2. Probability and Quantum Paradigms: the Interplay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kracklauer, A. F.

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  3. Probability and Quantum Paradigms: the Interplay

    NASA Astrophysics Data System (ADS)

    Kracklauer, A. F.

    2007-12-01

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  4. Identifying sources of heterogeneity in capture probabilities: An example using the Great Tit Parus major

    USGS Publications Warehouse

    Senar, J.C.; Conroy, M.J.; Carrascal, L.M.; Domenech, J.; Mozetich, I.; Uribe, F.

    1999-01-01

    Heterogeneous capture probabilities are a common problem in many capture-recapture studies. Several methods of detecting the presence of such heterogeneity are currently available, and stratification of data has been suggested as the standard method to avoid its effects. However, few studies have tried to identify sources of heterogeneity, or whether there are interactions among sources. The aim of this paper is to suggest an analytical procedure to identify sources of capture heterogeneity. We use data on the sex and age of Great Tits captured in baited funnel traps, at two localities differing in average temperature. We additionally use 'recapture' data obtained by videotaping at a feeder (with no associated trap), where tits ringed with different colours were recorded. This allowed us to test whether individuals in different classes (age, sex and condition) are not trapped because of trap shyness or because of a reduced use of the bait. We used logistic regression analysis of the capture probabilities to test for the effects of age, sex, condition, location and 'recapture' method. The results showed a higher recapture probability in the colder locality. Yearling birds (either males or females) had the highest recapture probabilities, followed by adult males, while adult females had the lowest recapture probabilities. There was no effect of the method of 'recapture' (trap or videotape), which suggests that adult females are less often captured in traps not because of trap-shyness but because of less dependence on supplementary food. The potential use of this methodological approach in other studies is discussed.

  5. An estimation method of the direct benefit of a waterlogging control project applicable to the changing environment

    NASA Astrophysics Data System (ADS)

    Zengmei, L.; Guanghua, Q.; Zishen, C.

    2015-05-01

    The direct benefit of a waterlogging control project is reflected by the reduction or avoidance of waterlogging loss. Before and after the construction of a waterlogging control project, the disaster-inducing environment in the waterlogging-prone zone is generally different. In addition, the category, quantity and spatial distribution of the disaster-bearing bodies also change to some extent. Therefore, under the changing environment, the direct benefit of a waterlogging control project should be the reduction of waterlogging losses compared with conditions in which no control project exists. Moreover, the waterlogging losses with and without the project should be the mathematical expectations of the losses when rainstorms of all frequencies meet the various water levels in the drainage-accepting zone. An estimation model of the direct benefit of waterlogging control is therefore proposed. First, on the basis of a copula function, the joint distribution of rainstorms and water levels is established, giving their joint probability density function. Second, according to this two-dimensional joint probability density, the domain of integration is determined and divided into small domains; for each small domain the probability and the difference between the average waterlogging losses with and without the control project (the regional benefit of the project) are calculated, under the condition that rainstorms in the waterlogging-prone zone meet the water level in the drainage-accepting zone. Finally, the weighted mean of these regional benefits over all small domains, with probability as the weight, gives the benefit of the waterlogging control project. Taking the benefit estimation of a waterlogging control project in Yangshan County, Guangdong Province, as an example, the paper briefly explains the procedure. The results show that the constructed model is applicable to changes in both the disaster-inducing environment of the waterlogging-prone zone and the disaster-bearing bodies, because it considers all conditions in which rainstorms of all frequencies meet different water levels in the drainage-accepting zone. The estimation method can therefore reflect the actual situation more objectively and offer a scientific basis for rational decision-making on waterlogging control projects.
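
    As a rough illustration of the expectation being computed, the sketch below discretises the (rainstorm, water-level) domain, weights the loss reduction in each small cell by its probability under a joint density, and sums; the joint density, both loss surfaces and the integration bounds are placeholder assumptions, not the fitted copula or loss data of the Yangshan County study.

```python
import numpy as np

# Minimal sketch of the probability-weighted benefit estimate described above.

def joint_pdf(r, w):
    # placeholder stand-in for the copula-based joint density f(r, w)
    fr = np.exp(-(np.log(r) - 4.0) ** 2 / 0.5) / r
    fw = np.exp(-(w - 2.0) ** 2 / 0.8)
    return fr * fw / 2.5

def loss_without_project(r, w):
    return 0.8 * r * (1.0 + 0.3 * w)       # hypothetical loss surface

def loss_with_project(r, w):
    return 0.3 * r * (1.0 + 0.2 * w)       # hypothetical, after the project

r_grid = np.linspace(1.0, 300.0, 400)      # rainstorm magnitude
w_grid = np.linspace(0.0, 6.0, 200)        # receiving-water level
dr, dw = r_grid[1] - r_grid[0], w_grid[1] - w_grid[0]
R, W = np.meshgrid(r_grid, w_grid, indexing="ij")

cell_prob = joint_pdf(R, W) * dr * dw      # probability of each small domain
benefit = np.sum(cell_prob * (loss_without_project(R, W) - loss_with_project(R, W)))
print("expected benefit of the project:", benefit)
```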

  6. [Influence of individual characteristics and working conditions on the severity of occupational injuries registered in Andalusia, Spain, in 2003].

    PubMed

    Muñoz, Julia Bolívar; Codina, Antonio Daponte; Cruz, Laura López; Rodríguez, Inmaculada Mateo

    2009-01-01

    The study of the severity of occupational injuries is very important for the establishment of prevention plans. The aim of this paper is to analyze the distribution of occupational injuries by (a) individual factors, (b) workplace characteristics and (c) working conditions, and to analyze the severity of occupational injuries by these characteristics in men and women in Andalusia. Injury data came from the accident registry of the Ministry of Labor and Social Affairs for 2003. The dependent variable was the severity of the injury (slight, serious, very serious or fatal); the independent variables were the characteristics of the worker, company data, and the accident itself. Bivariate and multivariate analyses were performed to estimate the probability of serious, very serious and fatal injury in relation to the other variables, using odds ratios (OR) with 95% confidence intervals (95% CI). Of the records, 82.4% were men and 17.6% were women; 78.1% of the women were unskilled manual workers, compared with 44.9% of the men. Men belonging to class I had a higher probability of more severe lesions (OR = 1.67, 95% CI = 1.17-2.38). The severity of the injury is associated with sex, age and type of injury. In men it is also related to the professional situation, the place where the accident happened, an unusual job, the size and characteristics of the company, and social class; in women it is also related to the sector.

  7. Velocity selection in a Doppler-broadened ensemble of atoms interacting with a monochromatic laser beam

    NASA Astrophysics Data System (ADS)

    Hughes, Ifan G.

    2018-03-01

    There is extensive use of monochromatic lasers to select atoms with a narrow range of velocities in many atomic physics experiments. For the commonplace situation of the inhomogeneous Doppler-broadened (Gaussian) linewidth exceeding the homogeneous (Lorentzian) natural linewidth by typically two orders of magnitude, a substantial narrowing of the velocity class of atoms interacting with the light can be achieved. However, this is not always the case, and here we show that for a certain parameter regime there is essentially no selection - all of the atoms interact with the light in accordance with the velocity probability density. An explanation of this effect is provided, emphasizing the importance of the long tail of the constituent Lorentzian distribution in a Voigt profile.

  8. Radar polarimetry - Analysis tools and applications

    NASA Technical Reports Server (NTRS)

    Evans, Diane L.; Farr, Tom G.; Van Zyl, Jakob J.; Zebker, Howard A.

    1988-01-01

    The authors have developed several techniques to analyze polarimetric radar data from the NASA/JPL airborne SAR for earth science applications. The techniques determine the heterogeneity of scatterers within subregions, optimize the return power from these areas, and identify probable scattering mechanisms for each pixel in a radar image. These techniques are applied to the discrimination and characterization of geologic surfaces and vegetation cover, and it is found that their utility varies depending on the terrain type. It is concluded that there are several classes of problems amenable to single-frequency polarimetric data analysis, including characterization of surface roughness and vegetation structure, and estimation of vegetation density. Polarimetric radar remote sensing can thus be a useful tool for monitoring a set of earth science parameters.

  9. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wu, Keyi; Li, Jinglai

    2016-09-01

    In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithm, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitude of speedup over standard Monte Carlo methods.

  10. Local and neighboring patch conditions alter sex-specific movement in banana weevils.

    PubMed

    Carval, Dominique; Perrin, Benjamin; Duyck, Pierre-François; Tixier, Philippe

    2015-12-01

    Understanding the mechanisms underlying the movements and spread of a species over time and space is a major concern of ecology. Here, we assessed the effects of an individual's sex and the density and sex ratio of conspecifics in the local and neighboring environment on the movement probability of the banana weevil, Cosmopolites sordidus. In a "two patches" experiment, we used radiofrequency identification tags to study the C. sordidus movement response to patch conditions. We showed that local and neighboring densities of conspecifics affect the movement rates of individuals but that the density-dependent effect can be either positive or negative depending on the relative densities of conspecifics in local and neighboring patches. We demonstrated that sex ratio also influences the movement of C. sordidus, that is, the weevil exhibits nonfixed sex-biased movement strategies. Sex-biased movement may be the consequence of intrasexual competition for resources (i.e., oviposition sites) in females and for mates in males. We also detected a high individual variability in the propensity to move. Finally, we discuss the role of demographic stochasticity, sex-biased movement, and individual heterogeneity in movement on the colonization process.

  11. AUTOCLASS III - AUTOMATIC CLASS DISCOVERY FROM DATA

    NASA Technical Reports Server (NTRS)

    Cheeseman, P. C.

    1994-01-01

    The program AUTOCLASS III, Automatic Class Discovery from Data, uses Bayesian probability theory to provide a simple and extensible approach to problems such as classification and general mixture separation. Its theoretical basis is free from ad hoc quantities, and in particular free of any measures which alter the data to suit the needs of the program. As a result, the elementary classification model used lends itself easily to extensions. The standard approach to classification in much of artificial intelligence and statistical pattern recognition research involves partitioning of the data into separate subsets, known as classes. AUTOCLASS III uses the Bayesian approach in which classes are described by probability distributions over the attributes of the objects, specified by a model function and its parameters. The calculation of the probability of each object's membership in each class provides a more intuitive classification than absolute partitioning techniques. AUTOCLASS III is applicable to most data sets consisting of independent instances, each described by a fixed length vector of attribute values. An attribute value may be a number, one of a set of attribute specific symbols, or omitted. The user specifies a class probability distribution function by associating attribute sets with supplied likelihood function terms. AUTOCLASS then searches in the space of class numbers and parameters for the maximally probable combination. It returns the set of class probability function parameters, and the class membership probabilities for each data instance. AUTOCLASS III is written in Common Lisp, and is designed to be platform independent. This program has been successfully run on Symbolics and Explorer Lisp machines. It has been successfully used with the following implementations of Common LISP on the Sun: Franz Allegro CL, Lucid Common Lisp, and Austin Kyoto Common Lisp and similar UNIX platforms; under the Lucid Common Lisp implementations on VAX/VMS v5.4, VAX/Ultrix v4.1, and MIPS/Ultrix v4, rev. 179; and on the Macintosh personal computer. The minimum Macintosh required is the IIci. This program will not run under CMU Common Lisp or VAX/VMS DEC Common Lisp. A minimum of 8Mb of RAM is required for Macintosh platforms and 16Mb for workstations. The standard distribution medium for this program is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format and a 3.5 inch diskette in Macintosh format. An electronic copy of the documentation is included on the distribution medium. AUTOCLASS was developed between March 1988 and March 1992. It was initially released in May 1991. Sun is a trademark of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. DEC, VAX, VMS, and ULTRIX are trademarks of Digital Equipment Corporation. Macintosh is a trademark of Apple Computer, Inc. Allegro CL is a registered trademark of Franz, Inc.
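
    For readers who want to experiment with the kind of soft class membership AUTOCLASS returns, the sketch below uses scikit-learn's Gaussian mixture as a stand-in; it is not AUTOCLASS itself, whose Bayesian search over the number of classes and per-attribute model functions is considerably richer, but it illustrates membership probabilities rather than a hard partition.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustration (with scikit-learn, not AUTOCLASS) of probabilistic class
# membership: each object gets a probability of belonging to each class.
# The two-cluster synthetic data and the fixed number of components are
# assumptions made purely for the example.

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
               rng.normal(4.0, 1.5, (200, 2))])

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
membership = gm.predict_proba(X)            # P(class | object)

print("first five membership vectors:")
print(np.round(membership[:5], 3))
```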

  12. Harvesting, predation and competition effects on a red coral population

    NASA Astrophysics Data System (ADS)

    Abbiati, M.; Buffoni, G.; Caforio, G.; Di Cola, G.; Santangelo, G.

    A Corallium rubrum population, dwelling in the Ligurian Sea, has been under observation since 1987. Biometric descriptors of colonies (base diameter, weight, number of polyps, number of growth rings) have been recorded and correlated. The population size structure was obtained by distributing the colonies into diameter classes, each size class representing the average annual increment of diameter growth. The population was divided into ten classes, including a recruitment class. This size structure showed a fairly regular trend in the first four classes. The irregularity of survival in the older classes agreed with field observations on harvesting and predation. Demographic parameters such as survival, growth plasticity and natality coefficients were estimated from the experimental data. On this basis a discrete nonlinear model was implemented. The model is based on a kind of density-dependent Leslie matrix, where the feedback term only occurs in survival of the first class; the recruitment function is assumed to be dependent on the total biomass and related to inhibiting effects due to competitive interactions. Stability analysis was applied to steady-state solutions. Numerical simulations of population evolution were carried out under different conditions. The dynamics of settlement and the effects of disturbances such as harvesting, predation and environmental variability were studied.
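
    A minimal sketch of a density-dependent Leslie-type projection of the kind described, with feedback through total biomass acting on recruitment and on survival of the first class; all coefficients are illustrative assumptions, not the estimates obtained for the Ligurian population.

```python
import numpy as np

# Minimal sketch of the projection described above: ten size classes
# (class 0 = recruits), biomass feedback on recruitment and first-class survival.

n_classes = 10
survival = np.full(n_classes - 1, 0.8)            # class-to-class survival
biomass_per_colony = np.linspace(0.1, 3.0, n_classes)
alpha, beta = 50.0, 0.02                          # assumed recruitment parameters

def project(n):
    """One annual step: biomass-limited recruitment, then survival growth."""
    biomass = np.dot(biomass_per_colony, n)
    new = np.empty_like(n)
    new[0] = alpha * np.exp(-beta * biomass)                 # recruitment class
    new[1] = survival[0] / (1.0 + 0.001 * biomass) * n[0]    # density-dependent survival
    new[2:] = survival[1:] * n[1:-1]                         # older classes, no feedback
    return new

n = np.full(n_classes, 20.0)
for _ in range(200):
    n = project(n)
print("equilibrium size structure:", np.round(n, 1))
```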

  13. Can the dissociative PTSD subtype be identified across two distinct trauma samples meeting caseness for PTSD?

    PubMed

    Hansen, Maj; Műllerová, Jana; Elklit, Ask; Armour, Cherie

    2016-08-01

    For over a century, the occurrence of dissociative symptoms in connection to traumatic exposure has been acknowledged in the scientific literature. Recently, the importance of dissociation has also been recognized in the long-term traumatic response within the DSM-5 nomenclature. Several studies have confirmed the existence of the dissociative posttraumatic stress disorder (PTSD) subtype. However, there is a lack of studies investigating latent profiles of PTSD solely in victims with PTSD. This study investigates the possible presence of PTSD subtypes using latent class analysis (LCA) across two distinct trauma samples meeting caseness for DSM-5 PTSD based on self-reports (N = 787). Moreover, we assessed if a number of risk factors resulted in an increased probability of membership in a dissociative compared with a non-dissociative PTSD class. The results of LCA revealed a two-class solution with two highly symptomatic classes: a dissociative class and a non-dissociative class across both samples. Increased emotion-focused coping increased the probability of individuals being grouped into the dissociative class across both samples. Social support reduced the probability of individuals being grouped into the dissociative class but only in the victims of motor vehicle accidents (MVAs) suffering from whiplash. The results are discussed in light of their clinical implications and suggest that the dissociative subtype can be identified in victims of incest and victims of MVA suffering from whiplash meeting caseness for DSM-5 PTSD.

  14. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    ERIC Educational Resources Information Center

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…

  15. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
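
    The "classic MaxEnt" step referred to above can be written as a small constrained optimisation; the sketch below finds point probabilities on six states that maximise entropy subject to an empirically estimated mean, using an illustrative die example. The paper's generalisation would additionally place a density (e.g., a Gaussian) on that constraint value and propagate the uncertainty through this map.

```python
import numpy as np
from scipy.optimize import minimize

# Classic MaxEnt: point probabilities that maximise entropy subject to an
# empirically estimated constraint. The six-state die and the mean of 4.5
# are illustrative choices, not from the paper.

states = np.arange(1, 7)          # faces of a die
target_mean = 4.5                 # empirically observed constraint value

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
    {"type": "eq", "fun": lambda p: np.dot(p, states) - target_mean},
]
result = minimize(neg_entropy, np.full(6, 1 / 6), method="SLSQP",
                  bounds=[(0.0, 1.0)] * 6, constraints=constraints)
print("classic MaxEnt point probabilities:", np.round(result.x, 4))
```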

  16. DETERMINING TYPE Ia SUPERNOVA HOST GALAXY EXTINCTION PROBABILITIES AND A STATISTICAL APPROACH TO ESTIMATING THE ABSORPTION-TO-REDDENING RATIO R{sub V}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cikota, Aleksandar; Deustua, Susana; Marleau, Francine, E-mail: acikota@eso.org

    We investigate limits on the extinction values of Type Ia supernovae (SNe Ia) to statistically determine the most probable color excess, E(B – V), with galactocentric distance, and use these statistics to determine the absorption-to-reddening ratio, R_V, for dust in the host galaxies. We determined pixel-based dust mass surface density maps for 59 galaxies from the Key Insight on Nearby Galaxies: a Far-infrared Survey with Herschel (KINGFISH). We use SN Ia spectral templates to develop a Monte Carlo simulation of color excess E(B – V) with R_V = 3.1 and investigate the color excess probabilities E(B – V) with projected radial galaxy center distance. Additionally, we tested our model using observed spectra of SN 1989B, SN 2002bo, and SN 2006X, which occurred in three KINGFISH galaxies. Finally, we determined the most probable reddening for Sa–Sap, Sab–Sbp, Sbc–Scp, Scd–Sdm, S0, and irregular galaxy classes as a function of R/R_25. We find that the largest expected reddening probabilities are in Sab–Sb and Sbc–Sc galaxies, while S0 and irregular galaxies are very dust poor. We present a new approach for determining the absorption-to-reddening ratio R_V using color excess probability functions and find values of R_V = 2.71 ± 1.58 for 21 SNe Ia observed in Sab–Sbp galaxies, and R_V = 1.70 ± 0.38 for 34 SNe Ia observed in Sbc–Scp galaxies.

  17. Calibrated birth-death phylogenetic time-tree priors for bayesian inference.

    PubMed

    Heled, Joseph; Drummond, Alexei J

    2015-05-01

    Here we introduce a general class of multiple calibration birth-death tree priors for use in Bayesian phylogenetic inference. All tree priors in this class separate ancestral node heights into a set of "calibrated nodes" and "uncalibrated nodes" such that the marginal distribution of the calibrated nodes is user-specified whereas the density ratio of the birth-death prior is retained for trees with equal values for the calibrated nodes. We describe two formulations, one in which the calibration information informs the prior on ranked tree topologies, through the (conditional) prior, and the other which factorizes the prior on divergence times and ranked topologies, thus allowing uniform, or any arbitrary prior distribution on ranked topologies. Although the first of these formulations has some attractive properties, the algorithm we present for computing its prior density is computationally intensive. However, the second formulation is always faster and computationally efficient for up to six calibrations. We demonstrate the utility of the new class of multiple-calibration tree priors using both small simulations and a real-world analysis and compare the results to existing schemes. The two new calibrated tree priors described in this article offer greater flexibility and control of prior specification in calibrated time-tree inference and divergence time dating, and will remove the need for indirect approaches to the assessment of the combined effect of calibration densities and tree priors in Bayesian phylogenetic inference. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  18. An efficient distribution method for nonlinear transport problems in stochastic porous media

    NASA Astrophysics Data System (ADS)

    Ibrahima, F.; Tchelepi, H.; Meyer, D. W.

    2015-12-01

    Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are convenient to explore possible scenarios and assess risks in subsurface problems. In particular, understanding how uncertainties propagate in porous media with nonlinear two-phase flow is essential, yet challenging, in reservoir simulation and hydrology. We give a computationally efficient and numerically accurate method to estimate the one-point probability density (PDF) and cumulative distribution functions (CDF) of the water saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. The method draws inspiration from the streamline approach and expresses the distributions of interest essentially in terms of an analytically derived mapping and the distribution of the time of flight. In a large class of applications the latter can be estimated at low computational costs (even via conventional Monte Carlo). Once the water saturation distribution is determined, any one-point statistics thereof can be obtained, especially its average and standard deviation. Moreover, rarely available in other approaches, yet crucial information such as the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be derived from the method. We provide various examples and comparisons with Monte Carlo simulations to illustrate the performance of the method.

  19. Measuring the Outflow Properties of FeLoBAL Quasars

    NASA Astrophysics Data System (ADS)

    Dabbieri, Collin; Choi, Hyunseop; MacInnis, Francis; Leighly, Karen; Terndrup, Donald

    2018-01-01

    Roughly 20 percent of the quasar population shows broad absorption lines, which are indicators of an energetic wind. Within the broad absorption line class of quasars exist FeLoBAL quasars, which show strong absorption lines from the Fe II and Fe III transitions as well as other low-ionization lines. FeLoBALs are of particular interest because they are thought to possibly be a short-lived stage in a quasar's life where it expels its shroud of gas and dust. This means the winds we see from FeLoBALs are one manifestation of galactic feedback. This idea is supported by Farrah et al. (2012), who found an anticorrelation between outflow strength and the contribution of star formation to the total IR luminosity of the host galaxy when examining a sample of FeLoBAL quasars. We analyze the sample of 26 FeLoBALs from Farrah et al. (2012) in order to measure the properties of their outflows, including ionization, density, column density and covering fraction. The absorption and continuum profiles of these objects are modeled using SimBAL, a program which creates synthetic spectra using a grid of Cloudy models. A Monte-Carlo method is employed to determine posterior probabilities for the physical parameters of the outflow. From these probabilities we extract the distance of the outflow, the mass outflow rate and the kinetic luminosity. We demonstrate that SimBAL is capable of modeling a wide range of spectral morphologies. From the 26 objects studied we observe interesting correlations between ionization parameter, distance and density. Analysis of our sample also suggests a dearth of objects with velocity widths greater than or equal to 300 km/s at distances greater than or equal to 100 parsecs.

  20. Statistical Decoupling of a Lagrangian Fluid Parcel in Newtonian Cosmology

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Szalay, Alex

    2016-03-01

    The Lagrangian dynamics of a single fluid element within a self-gravitational matter field is intrinsically non-local due to the presence of the tidal force. This complicates the theoretical investigation of the nonlinear evolution of various cosmic objects, e.g., dark matter halos, in the context of Lagrangian fluid dynamics, since fluid parcels with given initial density and shape may evolve differently depending on their environments. In this paper, we provide a statistical solution that could decouple this environmental dependence. After deriving the evolution equation for the probability distribution of the matter field, our method produces a set of closed ordinary differential equations whose solution is uniquely determined by the initial condition of the fluid element. Mathematically, it corresponds to the projected characteristic curve of the transport equation of the density-weighted probability density function (ρPDF). Consequently it is guaranteed that the one-point ρPDF would be preserved by evolving these local, yet nonlinear, curves with the same set of initial data as the real system. Physically, these trajectories describe the mean evolution averaged over all environments by substituting the tidal tensor with its conditional average. For Gaussian distributed dynamical variables, this mean tidal tensor is simply proportional to the velocity shear tensor, and the dynamical system would recover the prediction of the Zel’dovich approximation (ZA) with the further assumption of the linearized continuity equation. For a weakly non-Gaussian field, the averaged tidal tensor could be expanded perturbatively as a function of all relevant dynamical variables whose coefficients are determined by the statistics of the field.

  1. STATISTICAL DECOUPLING OF A LAGRANGIAN FLUID PARCEL IN NEWTONIAN COSMOLOGY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xin; Szalay, Alex, E-mail: xwang@cita.utoronto.ca

    The Lagrangian dynamics of a single fluid element within a self-gravitational matter field is intrinsically non-local due to the presence of the tidal force. This complicates the theoretical investigation of the nonlinear evolution of various cosmic objects, e.g., dark matter halos, in the context of Lagrangian fluid dynamics, since fluid parcels with given initial density and shape may evolve differently depending on their environments. In this paper, we provide a statistical solution that could decouple this environmental dependence. After deriving the evolution equation for the probability distribution of the matter field, our method produces a set of closed ordinary differential equations whose solution is uniquely determined by the initial condition of the fluid element. Mathematically, it corresponds to the projected characteristic curve of the transport equation of the density-weighted probability density function (ρPDF). Consequently it is guaranteed that the one-point ρPDF would be preserved by evolving these local, yet nonlinear, curves with the same set of initial data as the real system. Physically, these trajectories describe the mean evolution averaged over all environments by substituting the tidal tensor with its conditional average. For Gaussian distributed dynamical variables, this mean tidal tensor is simply proportional to the velocity shear tensor, and the dynamical system would recover the prediction of the Zel’dovich approximation (ZA) with the further assumption of the linearized continuity equation. For a weakly non-Gaussian field, the averaged tidal tensor could be expanded perturbatively as a function of all relevant dynamical variables whose coefficients are determined by the statistics of the field.

  2. Quantum-shutter approach to tunneling time scales with wave packets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamada, Norifumi; Garcia-Calderon, Gaston; Villavicencio, Jorge

    2005-07-15

    The quantum-shutter approach to tunneling time scales [G. Garcia-Calderon and A. Rubio, Phys. Rev. A 55, 3361 (1997)], which uses a cutoff plane wave as the initial condition, is extended to consider a certain type of wave packet initial condition. An analytical expression for the time-evolved wave function is derived. The time-domain resonance, the peaked structure of the probability density (as a function of time) at the exit of the barrier, originally found with the cutoff plane wave initial condition, is studied with the wave packet initial conditions. It is found that the time-domain resonance is not very sensitive to the width of the packet when the transmission process occurs in the tunneling regime.

  3. A Framework for Inferring Taxonomic Class of Asteroids.

    NASA Technical Reports Server (NTRS)

    Dotson, J. L.; Mathias, D. L.

    2017-01-01

    Introduction: Taxonomic classification of asteroids based on their visible/near-infrared spectra or multi-band photometry has proven to be a useful tool to infer other properties about asteroids. Meteorite analogs have been identified for several taxonomic classes, permitting detailed inference about asteroid composition. Trends have been identified between taxonomy and measured asteroid density. Thanks to NEOWISE (Near-Earth-Object Wide-field Infrared Survey Explorer) and Spitzer (Spitzer Space Telescope), approximately twice as many asteroids have measured albedos as have taxonomic classifications. (If one only considers spectroscopically determined classifications, the ratio is greater than 40.) We present a Bayesian framework that provides probabilistic estimates of the taxonomic class of an asteroid based on its albedo. Although probabilistic estimates of taxonomic classes are not a replacement for spectroscopic or photometric determinations, they can be a useful tool for identifying objects for further study or for asteroid threat assessment models. Inputs and Framework: The framework relies upon two inputs: the expected fraction of each taxonomic class in the population and the albedo distribution of each class. Numerous authors have addressed both of these questions. For example, the taxonomic distribution by number, surface area and mass of the main belt has been estimated, and a diameter-limited estimate of fractional abundances of the near-Earth asteroid population has been made. Similarly, the albedo distributions for taxonomic classes have been estimated for the combined main belt and NEA (Near Earth Asteroid) populations in different taxonomic systems and for the NEA population specifically. The framework uses a Bayesian inference appropriate for categorical data. The population fractions provide the prior, while the albedo distributions allow calculation of the likelihood that an albedo measurement is consistent with a given taxonomic class. These inputs allow calculation of the probability that an asteroid with a specified albedo belongs to any given taxonomic class.
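
    The Bayesian update the framework performs can be sketched in a few lines; the class list, prior population fractions, and per-class albedo distributions below are illustrative placeholders (simple Gaussians), not the published population statistics the framework actually takes as inputs.

```python
import numpy as np
from scipy.stats import norm

# Sketch of an albedo-based Bayesian class update with placeholder inputs.

classes = ["C", "S", "X"]
prior = np.array([0.50, 0.35, 0.15])        # assumed class fractions
albedo_mean = np.array([0.06, 0.20, 0.10])  # assumed per-class mean albedos
albedo_sigma = np.array([0.02, 0.06, 0.05])

def class_probabilities(albedo):
    """Posterior P(class | measured albedo) for a single asteroid."""
    likelihood = norm.pdf(albedo, loc=albedo_mean, scale=albedo_sigma)
    posterior = prior * likelihood
    return posterior / posterior.sum()

print(dict(zip(classes, np.round(class_probabilities(0.08), 3))))
```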

  4. Switching probability of all-perpendicular spin valve nanopillars

    NASA Astrophysics Data System (ADS)

    Tzoufras, M.

    2018-05-01

    In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.

  5. Spatial distribution and optimal harvesting of an age-structured population in a fluctuating environment.

    PubMed

    Engen, Steinar; Lee, Aline Magdalena; Sæther, Bernt-Erik

    2018-02-01

    We analyze a spatial age-structured model with density regulation, age specific dispersal, stochasticity in vital rates and proportional harvesting. We include two age classes, juveniles and adults, where juveniles are subject to logistic density dependence. There are environmental stochastic effects with arbitrary spatial scales on all birth and death rates, and individuals of both age classes are subject to density independent dispersal with given rates and specified distributions of dispersal distances. We show how to simulate the joint density fields of the age classes and derive results for the spatial scales of all spatial autocovariance functions for densities. A general result is that the squared scale has an additive term equal to the squared scale of the environmental noise, corresponding to the Moran effect, as well as additive terms proportional to the dispersal rate and variance of dispersal distance for the age classes and approximately inversely proportional to the strength of density regulation. We show that the optimal harvesting strategy in the deterministic case is to harvest only juveniles when their relative value (e.g. financial) is large, and otherwise only adults. With increasing environmental stochasticity there is an interval of increasing length of values of juveniles relative to adults where both age classes should be harvested. Harvesting generally tends to increase all spatial scales of the autocovariances of densities. Copyright © 2017. Published by Elsevier Inc.

  6. Climate drives inter-annual variability in probability of high severity fire occurrence in the western United States

    NASA Astrophysics Data System (ADS)

    Keyser, Alisa; Westerling, Anthony LeRoy

    2017-05-01

    A long history of fire suppression in the western United States has significantly changed forest structure and ecological function, leading to increasingly uncharacteristic fires in terms of size and severity. Prior analyses of fire severity in California forests showed that time since last fire and fire weather conditions predicted fire severity very well, while a larger regional analysis showed that topography and climate were important predictors of high severity fire. There has not yet been a large-scale study that incorporates topography, vegetation and fire-year climate to determine regional scale high severity fire occurrence. We developed models to predict the probability of high severity fire occurrence for the western US. We predict high severity fire occurrence with some accuracy, and identify the relative importance of predictor classes in determining the probability of high severity fire. The inclusion of both vegetation and fire-year climate predictors was critical for model skill in identifying fires with high fractional fire severity. The inclusion of fire-year climate variables allows this model to forecast inter-annual variability in areas at future risk of high severity fire, beyond what slower-changing fuel conditions alone can accomplish. This allows for more targeted land management, including resource allocation for fuels reduction treatments to decrease the risk of high severity fire.

  7. A general stochastic model for sporophytic self-incompatibility.

    PubMed

    Billiard, Sylvain; Tran, Viet Chi

    2012-01-01

    Disentangling the processes leading populations to extinction is a major topic in ecology and conservation biology. The difficulty to find a mate in many species is one of these processes. Here, we investigate the impact of self-incompatibility in flowering plants, where several inter-compatible classes of individuals exist but individuals of the same class cannot mate. We model pollen limitation through different relationships between mate availability and fertilization success. After deriving a general stochastic model, we focus on the simple case of distylous plant species where only two classes of individuals exist. We first study the dynamics of such a species in a large population limit and then, we look for an approximation of the extinction probability in small populations. This leads us to consider inhomogeneous random walks on the positive quadrant. We compare the dynamics of distylous species to self-fertile species with and without inbreeding depression, to obtain the conditions under which self-incompatible species can be less sensitive to extinction while they can suffer more pollen limitation. © Springer-Verlag 2011

  8. Clustering of Multiple Risk Behaviors Among a Sample of 18-Year-Old Australians and Associations With Mental Health Outcomes: A Latent Class Analysis.

    PubMed

    Champion, Katrina E; Mather, Marius; Spring, Bonnie; Kay-Lambkin, Frances; Teesson, Maree; Newton, Nicola C

    2018-01-01

    Risk behaviors commonly co-occur, typically emerge in adolescence, and become entrenched by adulthood. This study investigated the clustering of established (physical inactivity, diet, smoking, and alcohol use) and emerging (sedentary behavior and sleep) chronic disease risk factors among young Australian adults, and examined how clusters relate to mental health. The sample was derived from the long-term follow-up of a cohort of Australians. Participants were initially recruited at school as part of a cluster randomized controlled trial. A total of 853 participants (M age = 18.88 years, SD = 0.42) completed an online self-report survey as part of the 5-year follow-up for the RCT. The survey assessed six behaviors (binge drinking and smoking in the past 6 months, moderate-to-vigorous physical activity/week, sitting time/day, fruit and vegetable intake/day, and sleep duration/night). Each behavior was represented by a dichotomous variable reflecting adherence to national guidelines. Exploratory analyses were conducted. Clusters were identified using latent class analysis. Three classes emerged: "moderate risk" (moderately likely to binge drink and not eat enough fruit, high probability of insufficient vegetable intake; Class 1, 52%); "inactive, non-smokers" (high probabilities of not meeting guidelines for physical activity, sitting time and fruit/vegetable consumption, very low probability of smoking; Class 2, 24%), and "smokers and binge drinkers" (high rates of smoking and binge drinking, poor fruit/vegetable intake; Class 3, 24%). There were significant differences between the classes in terms of psychological distress (p = 0.003), depression (p < 0.001), and anxiety (p = 0.003). Specifically, Class 3 ("smokers and binge drinkers") showed higher levels of distress, depression, and anxiety than Class 1 ("moderate risk"), while Class 2 ("inactive, non-smokers") had greater depression than the "moderate risk" group. Results indicate that risk behaviors are prevalent and clustered in 18-year-old Australians. Mental health symptoms were significantly greater among the two classes characterized by high probabilities of engaging in multiple risk behaviors (Classes 2 and 3). An examination of the clustering of lifestyle risk behaviors is important to guide the development of preventive interventions. Our findings reinforce the importance of delivering multiple health interventions to reduce disease risk and improve mental well-being.

  9. Reconstructing the deadly eruptive events of 1790 CE at Kīlauea Volcano, Hawai‘i

    USGS Publications Warehouse

    Swanson, Don; Weaver, Samantha J; Houghton, Bruce F.

    2014-01-01

    A large number of people died during an explosive eruption of Kīlauea Volcano in 1790 CE. Detailed study of the upper part of the Keanakāko‘i Tephra has identified the deposits that may have been responsible for the deaths. Three successive units record shifts in eruption style that agree well with accounts of the eruption based on survivor interviews 46 yr later. First, a wet fall of very fine, accretionary-lapilli–bearing ash created a “cloud of darkness.” People walked across the soft deposit, leaving footprints as evidence. While the ash was still unconsolidated, lithic lapilli fell into it from a high eruption column that was seen from 90 km away. Either just after this tephra fall or during its latest stage, pulsing dilute pyroclastic density currents, probably products of a phreatic eruption, swept across the western flank of Kīlauea, embedding lapilli in the muddy ash and crossing the trail along which the footprints occur. The pyroclastic density currents were most likely responsible for the fatalities, as judged from the reported condition and probable location of the bodies. This reconstruction is relevant today, as similar eruptions will probably occur in the future at Kīlauea and represent its most dangerous and least predictable hazard.

  10. Estimating Lion Abundance using N-mixture Models for Social Species

    PubMed Central

    Belant, Jerrold L.; Bled, Florent; Wilton, Clay M.; Fyumagwa, Robert; Mwampeta, Stanslaus B.; Beyer, Dean E.

    2016-01-01

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model, conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170–551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the direction of the corresponding effects was undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species. PMID:27786283

  11. Estimating Lion Abundance using N-mixture Models for Social Species.

    PubMed

    Belant, Jerrold L; Bled, Florent; Wilton, Clay M; Fyumagwa, Robert; Mwampeta, Stanslaus B; Beyer, Dean E

    2016-10-27

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model, conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170-551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the direction of the corresponding effects was undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species.
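
    The basic N-mixture likelihood underlying this kind of model marginalises a latent Poisson abundance against binomial detection of repeated counts; the sketch below shows that marginalisation for a single site. The hierarchical group-response layer developed for the call-in data is not reproduced, and the rate, detection probability and counts are made-up values.

```python
import numpy as np
from scipy.stats import binom, poisson

# Basic N-mixture likelihood for one site: counts y_t ~ Binomial(N, p),
# latent abundance N ~ Poisson(lam), with N summed out.

def nmixture_loglik(counts, lam, p, n_max=400):
    """Log-likelihood of repeated counts at one site, summing over latent N."""
    n = np.arange(max(counts), n_max + 1)
    log_prior = poisson.logpmf(n, lam)                      # P(N)
    log_obs = sum(binom.logpmf(y, n, p) for y in counts)    # P(y | N, p)
    return np.logaddexp.reduce(log_prior + log_obs)

counts = [12, 9, 15]     # hypothetical repeated counts at one station
print(nmixture_loglik(counts, lam=30.0, p=0.4))
```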

  12. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if the values of some parameters are known.

  13. Breast density characterization using texton distributions.

    PubMed

    Petroudi, Styliani; Brady, Michael

    2011-01-01

    Breast density has been shown to be one of the most significant risks for developing breast cancer, with women with dense breasts at four to six times higher risk. The Breast Imaging Reporting and Data System (BI-RADS) has a four-class classification scheme that describes the different breast densities. However, there is great inter- and intra-observer variability among clinicians in reporting a mammogram's density class. This work presents a novel texture classification method and its application to the development of a completely automated breast density classification system. The new method represents the mammogram using textons, which can be thought of as the building blocks of texture under the operational definition of Leung and Malik as clustered filter responses. The proposed method characterizes the mammographic appearance of the different density patterns by evaluating the texton spatial dependence matrix (TSDM) in the breast region's corresponding texton map. The TSDM is a texture model that captures both statistical and structural texture characteristics. The normalized TSDM matrices are evaluated for mammograms from the different density classes and corresponding texture models are established. Classification is achieved using a chi-square distance measure. The fully automated TSDM breast density classification method is quantitatively evaluated on mammograms from all density classes from the Oxford Mammogram Database. The incorporation of texton spatial dependencies allows for classification accuracy reaching over 82%. The breast density classification accuracy is better using the texton TSDM than simple texton histograms.
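
    A minimal sketch of the comparison step described above: assign a query to the density class whose normalised texton statistics are closest in the chi-square sense. The class models here are random placeholder histograms; a TSDM would simply be flattened and normalised before the same comparison.

```python
import numpy as np

# Nearest-class assignment with a chi-square distance between normalised
# texton statistics; all histograms are random placeholders.

def chi_square(h1, h2, eps=1e-12):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

rng = np.random.default_rng(0)
n_textons = 64
labels = ["BI-RADS I", "BI-RADS II", "BI-RADS III", "BI-RADS IV"]
class_models = {c: rng.random(n_textons) for c in labels}
class_models = {c: h / h.sum() for c, h in class_models.items()}

query = rng.random(n_textons)
query /= query.sum()

predicted = min(class_models, key=lambda c: chi_square(query, class_models[c]))
print("assigned density class:", predicted)
```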

  14. A hidden Markov model approach to neuron firing patterns.

    PubMed Central

    Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G

    1996-01-01

    Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing. PMID:8913581
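
    A minimal sketch of the modelling idea, fitting a two-state hidden Markov model to synthetic interspike intervals so that each interval is emitted from one of a small number of hidden neuron states; the hmmlearn library and the Gaussian emission on log-intervals are assumptions made for illustration, not the estimation procedure of the paper.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # third-party library, assumed available

# Fit a two-state HMM to log interspike intervals (ISIs); the ISI series is
# synthetic, with a slow-firing regime followed by a fast-firing regime.

rng = np.random.default_rng(1)
isi = np.concatenate([rng.exponential(0.20, 300), rng.exponential(0.05, 300)])
X = np.log(isi).reshape(-1, 1)

model = GaussianHMM(n_components=2, covariance_type="diag",
                    n_iter=200, random_state=0)
model.fit(X)

print("state means (log ISI):", model.means_.ravel())
print("estimated transition matrix:")
print(np.round(model.transmat_, 3))
```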

  15. Postfragmentation density function for bacterial aggregates in laminar flow.

    PubMed

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M

    2011-04-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society

  16. Use of portable antennas to estimate abundance of PIT-tagged fish in small streams: Factors affecting detection probability

    USGS Publications Warehouse

    O'Donnell, Matthew J.; Horton, Gregg E.; Letcher, Benjamin H.

    2010-01-01

    Portable passive integrated transponder (PIT) tag antenna systems can be valuable in providing reliable estimates of the abundance of tagged Atlantic salmon Salmo salar in small streams under a wide range of conditions. We developed and employed PIT tag antenna wand techniques in two controlled experiments and an additional case study to examine the factors that influenced our ability to estimate population size. We used Pollock's robust-design capture–mark–recapture model to obtain estimates of the probability of first detection (p), the probability of redetection (c), and abundance (N) in the two controlled experiments. First, we conducted an experiment in which tags were hidden in fixed locations. Although p and c varied among the three observers and among the three passes that each observer conducted, the estimates of N were identical to the true values and did not vary among observers. In the second experiment using free-swimming tagged fish, p and c varied among passes and time of day. Additionally, estimates of N varied between day and night and among age-classes but were within 10% of the true population size. In the case study, we used the Cormack–Jolly–Seber model to examine the variation in p, and we compared counts of tagged fish found with the antenna wand with counts collected via electrofishing. In that study, we found that although p varied for age-classes, sample dates, and time of day, antenna and electrofishing estimates of N were similar, indicating that population size can be reliably estimated via PIT tag antenna wands. However, factors such as the observer, time of day, age of fish, and stream discharge can influence the initial and subsequent detection probabilities.

  17. Does movement behaviour predict population densities? A test with 25 butterfly species.

    PubMed

    Schultz, Cheryl B; Pe'er, B Guy; Damiani, Christine; Brown, Leone; Crone, Elizabeth E

    2017-03-01

    Diffusion, which approximates a correlated random walk, has been used by ecologists to describe movement, and forms the basis for many theoretical models. However, it is often criticized as too simple a model to describe animal movement in real populations. We test a key prediction of diffusion models, namely, that animals should be more abundant in land cover classes through which they move more slowly. This relationship between density and diffusion has rarely been tested across multiple species within a given landscape. We estimated diffusion rates and corresponding densities of 25 Israeli butterfly species from flight path data and visual surveys. The data were collected across 19 sites in heterogeneous landscapes with four land cover classes: semi-natural habitat, olive groves, wheat fields and field margins. As expected from theory, species tended to have higher densities in land cover classes through which they moved more slowly and lower densities in land cover classes through which they moved more quickly. Two components of movement (move length and turning angle) were not associated with density, nor was expected net squared displacement. Move time, however, was associated with density, and animals spent more time per move step in areas with higher density. The broad association we document between movement behaviour and density suggests that diffusion is a good first approximation of movement in butterflies. Moreover, our analyses demonstrate that dispersal is not a species-invariant trait, but rather one that depends on landscape context. Thus, land cover classes with high diffusion rates are likely to have low densities and be effective conduits for movement. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.
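
    A toy simulation (illustrative only, not the authors' analysis) of a walker on a ring of sites with cover-dependent time per move step illustrates the diffusion prediction tested here: occupancy density comes out roughly inversely proportional to the local movement rate.

      import numpy as np

      rng = np.random.default_rng(2)
      n_sites = 100
      cover = (np.arange(n_sites) < n_sites // 2).astype(int)   # two hypothetical cover classes
      step_time = np.where(cover == 0, 1.0, 4.0)                 # slower movement in class 1
      residence = np.zeros(n_sites)

      pos = 0
      for _ in range(200_000):
          residence[pos] += step_time[pos]              # time spent before the next move
          pos = (pos + rng.choice([-1, 1])) % n_sites   # unbiased nearest-neighbour move

      density = residence / residence.sum()
      for c in (0, 1):
          print(f"class {c}: mean density {density[cover == c].mean():.4f}, "
                f"relative movement rate ~ {1.0 / step_time[cover == c].mean():.2f}")
      # Densities are roughly proportional to the time per step, i.e. inversely
      # proportional to the local movement (diffusion) rate, as the paper predicts.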

  18. Implementation and Research on the Operational Use of the Mesoscale Prediction Model COAMPS in Poland

    DTIC Science & Technology

    2007-09-30

    COAMPS model. Bogumil Jakubiak, University of Warsaw, participated in the EGU General Assembly, Vienna, Austria, 15-20 April 2007, giving one oral and two...conditional forecast (background) error probability density function using an ensemble of the model forecast to generate background error statistics...COAMPS system on ICM machines at Warsaw University for the purpose of providing operational support to the general public using the ICM meteorological

  19. Technical Report 1205: A Simple Probabilistic Combat Model

    DTIC Science & Technology

    2016-07-08

    1. INTRODUCTION The Lanchester combat model is a simple way to assess the effects of quantity and quality...model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons assigned...the initial condition is very close to the break-even line. What is more interesting is that the probability density tends to concentrate at either a

  20. Entomologic considerations in the study of onchocerciasis transmission.

    PubMed

    Vargas, L; Díaz-Nájera, A

    1980-01-01

    The entomological resources utilized for a better understanding of Onchocerca volvulus transmission are discussed in this paper. Vector density, anthropophilia, gonotrophic cycle, parous condition, longevity, and probability of survival in days after the infectious meal are assessed here in order to integrate an overall picture. The concept of vectorial capacity is developed, stressing the quantitative aspects. Parasitism of the blackflies by filariae that are doubtfully identified as O. volvulus is also mentioned here.

  1. BAOBAB (Big And Outrageously Bold Asteroid Belt) Project

    NASA Technical Reports Server (NTRS)

    Mcfadden, L. A.; Thomas, C. A; Englander, J. A.; Ruesch, O.; Hosseini, S.; Goossens, S. J.; Mazarico, E. M.; Schmerr, N.

    2017-01-01

    One of the intriguing results of NASA's Dawn mission is the composition and structure of the Main Asteroid Belt's only known dwarf planet, Ceres [1]. It has a top layer of dehydrated clays and salts [2] and an icy-rocky mantle [3,4]. It is widely accepted that the asteroid belt failed to accrete into a planet because of orbital resonances with Jupiter. About 20-30 asteroids >100 km in diameter are probably differentiated protoplanets [5]. This raises several questions: 1) How many more, and which ones, are fragments of protoplanets? 2) How many, and which ones, are primordial rubble piles left over from condensation of the solar nebula? 3) How would we go about gaining a better and more complete characterization of the mass, interior structure and composition of the Main Belt asteroid population? 4) What is the relationship between asteroids and ocean worlds? Bulk parameters such as the mass, density, and porosity are important for characterizing the structure of any celestial body, and for asteroids in particular they can shed light on the conditions in the early solar system. Asteroid density estimates exist, but they are currently often based on assumed properties of taxonomic classes, or derived from astronomical survey data in which interactions with asteroids are weak at best, resulting in large measurement uncertainty. We only have direct density estimates from spacecraft encounters for a few asteroids at this time. Knowledge of the asteroids is significant not only for understanding their role in solar system workings, but also for assessing their potential as space resources, as impact hazards on Earth, or even as possible harbors of life. And for the distant future, we want to know if the idea put forth in a contest sponsored by Physics Today, to polish asteroid surfaces into highly reflective mirrors and use them as a massively segmented mirror for astrophysical exploration [6], is feasible.

  2. Efficient and faithful remote preparation of arbitrary three- and four-particle W-class entangled states

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Hu, You-Di; Wang, Zhe-Qiang; Ye, Liu

    2015-06-01

    We develop two efficient measurement-based schemes for remotely preparing arbitrary three- and four-particle W-class entangled states by utilizing genuine tripartite Greenberger-Horne-Zeilinger-type states as quantum channels, respectively. Through appropriate local operations and classical communication, the desired states can be faithfully retrieved at the receiver's location with a certain probability. Compared with previously existing schemes, the success probability of the current schemes is greatly increased. Moreover, the required classical communication cost is calculated. Further, several properties of the presented schemes, including the success probability and reducibility, are discussed. Remarkably, the proposed schemes can be achieved with unity total success probability when the employed channels are reduced to maximally entangled ones.

  3. Patient factors and quality of life outcomes differ among four subgroups of oncology patients based on symptom occurrence.

    PubMed

    Astrup, Guro Lindviksmoen; Hofsø, Kristin; Bjordal, Kristin; Guren, Marianne Grønlie; Vistad, Ingvild; Cooper, Bruce; Miaskowski, Christine; Rustøen, Tone

    2017-03-01

    Reviews of the literature on symptoms in oncology patients undergoing curative treatment, as well as patients receiving palliative care, suggest that they experience multiple, co-occurring symptoms and side effects. The purposes of this study were to determine if subgroups of oncology patients could be identified based on symptom occurrence rates and if these subgroups differed on a number of demographic and clinical characteristics, as well as on quality of life (QoL) outcomes. Latent class analysis (LCA) was used to identify subgroups (i.e. latent classes) of patients with distinct symptom experiences based on the occurrence rates for the 13 most common symptoms from the Memorial Symptom Assessment Scale. In total, 534 patients with breast, head and neck, colorectal, or ovarian cancer participated. Four latent classes of patients were identified based on probability of symptom occurrence: all low class [i.e. low probability for all symptoms (n = 152)], all high class (n = 149), high psychological class (n = 121), and low psychological class (n = 112). Patients in the all high class were significantly younger compared with patients in the all low class. Furthermore, compared to the other three classes, patients in the all high class had lower functional status and higher comorbidity scores, and reported poorer QoL scores. Patients in the high and low psychological classes had a moderate probability of reporting physical symptoms. Patients in the low psychological class reported a higher number of symptoms, a lower functional status, and poorer physical and total QoL scores. Distinct subgroups of oncology patients can be identified based on symptom occurrence rates. Patient characteristics that are associated with these subgroups can be used to identify patients who are at greater risk for multiple co-occurring symptoms and diminished QoL, so that these patients can be offered appropriate symptom management interventions.

  4. Transfer of conflict and cooperation from experienced games to new games: a connectionist model of learning

    PubMed Central

    Spiliopoulos, Leonidas

    2015-01-01

    The question of whether, and if so how, learning can be transferred from previously experienced games to novel games has recently attracted the attention of the experimental game theory literature. Existing research presumes that learning operates over actions, beliefs or decision rules. This study instead uses a connectionist approach that learns a direct mapping from game payoffs to a probability distribution over own actions. Learning is operationalized as a backpropagation rule that adjusts the weights of feedforward neural networks in the direction of increasing the probability of an agent playing a myopic best response to the last game played. One advantage of this approach is that it expands the scope of the model to any possible n × n normal-form game allowing for a comprehensive model of transfer of learning. Agents are exposed to games drawn from one of seven classes of games with significantly different strategic characteristics and then forced to play games from previously unseen classes. I find significant transfer of learning, i.e., behavior that is path-dependent, or conditional on the previously seen games. Cooperation is more pronounced in new games when agents are previously exposed to games where the incentive to cooperate is stronger than the incentive to compete, i.e., when individual incentives are aligned. Prior exposure to Prisoner's dilemma, zero-sum and discoordination games led to a significant decrease in realized payoffs for all the game classes under investigation. A distinction is made between superficial and deep transfer of learning—the former is driven by superficial payoff similarities between games, the latter by differences in the incentive structures or strategic implications of the games. I examine whether agents learn to play the Nash equilibria of games, how they select amongst multiple equilibria, and whether they transfer Nash equilibrium behavior to unseen games. Sufficient exposure to a strategically heterogeneous set of games is found to be a necessary condition for deep learning (and transfer) across game classes. Paradoxically, superficial transfer of learning is shown to lead to better outcomes than deep transfer for a wide range of game classes. The simulation results corroborate important experimental findings with human subjects, and make several novel predictions that can be tested experimentally. PMID:25873855
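
    The learning rule described above can be sketched roughly as follows, under several simplifying assumptions that are not the author's (2 × 2 games only, a single hidden layer, a column player that picks the column with the higher average payoff, and the training target defined as the row maximizing the agent's payoff against that column): a feedforward network maps payoffs to a distribution over own actions and is updated by backpropagation toward that stand-in for a myopic best response.

      import numpy as np

      rng = np.random.default_rng(3)

      # One-hidden-layer network: game payoffs -> probability over own (row) actions.
      n_in, n_hid, n_out = 8, 16, 2
      W1 = rng.normal(0, 0.1, (n_hid, n_in)); b1 = np.zeros(n_hid)
      W2 = rng.normal(0, 0.1, (n_out, n_hid)); b2 = np.zeros(n_out)

      def forward(x):
          h = np.tanh(W1 @ x + b1)
          z = W2 @ h + b2
          p = np.exp(z - z.max()); p /= p.sum()   # softmax over own actions
          return h, p

      def train_step(x, target, lr=0.05):
          """Backpropagation toward the (simplified) myopic best response, cross-entropy loss."""
          global W1, b1, W2, b2
          h, p = forward(x)
          dz = p.copy(); dz[target] -= 1.0        # d(loss)/d(logits)
          dW2 = np.outer(dz, h); db2 = dz
          dh = W2.T @ dz
          dpre = dh * (1.0 - h ** 2)              # tanh derivative
          dW1 = np.outer(dpre, x); db1 = dpre
          W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

      # Train on random 2x2 games (payoffs of both players as input).
      for _ in range(20_000):
          own, other = rng.uniform(-1, 1, (2, 2)), rng.uniform(-1, 1, (2, 2))
          x = np.concatenate([own.ravel(), other.ravel()])
          col = int(np.argmax(other.mean(axis=0)))        # assumed opponent heuristic
          train_step(x, target=int(np.argmax(own[:, col])))

      # Quick check: how often does the trained net put most probability on the target?
      hits = 0
      for _ in range(1000):
          own, other = rng.uniform(-1, 1, (2, 2)), rng.uniform(-1, 1, (2, 2))
          x = np.concatenate([own.ravel(), other.ravel()])
          col = int(np.argmax(other.mean(axis=0)))
          _, p = forward(x)
          hits += int(np.argmax(p) == np.argmax(own[:, col]))
      print("best-response rate on fresh games:", hits / 1000)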

  5. The Influence of Phonotactic Probability and Neighborhood Density on Children's Production of Newly Learned Words

    ERIC Educational Resources Information Center

    Heisler, Lori; Goffman, Lisa

    2016-01-01

    A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…

  6. Improvement of electricity generating performance and life expectancy of MCFC stack by applying Li/Na carbonate electrolyte. Test results and analysis of 0.44 m²/10 kW- and 1.03 m²/10 kW-class stack

    NASA Astrophysics Data System (ADS)

    Yoshiba, Fumihiko; Morita, Hiroshi; Yoshikawa, Masahiro; Mugikura, Yoshihiro; Izaki, Yoshiyuki; Watanabe, Takao; Komoda, Mineo; Masuda, Yuji; Zaima, Nobuyuki

    Following the development of a 10 kW-class MCFC stack with a reactive area of 0.44 and 1.03 m², which applies a Li/Na carbonate electrolyte and a press stamping separator, many tests have now been carried out. In the installation tests, the observed cell voltages of the 0.44 m²/10 kW-class stack agreed with the voltage predicted from the test results of the 100 cm² bench scale cell. This agreement proves that the installing procedure of the bench scale cell can be applied to the 0.44 m²/10 kW-class stacks. The temperature distribution analysis model applied to the 100 kW-class stack was modified to calculate the temperature distribution of the 0.44 m²/10 kW-class stack. Taking the heat loss and the heat transfer effect of the stack holder into account, the calculated temperature was close to the measured temperature; this result proves that the modification was adequate for the temperature analysis model. In the high current density operating tests on the 0.44 m²/10 kW-class stack, an electrical power density of 2.46 kW/m² was recorded at an operating current density of 3000 A/m². In the endurance test on the 0.44 m²/10 kW-class stack, however, unexpected Ni shortening occurred during the operating period 2500-4500 h, which had been caused by a defective formation of the electrolyte matrix. The shortening seems to have been caused by the crack, which appeared in the electrolyte matrix. The voltage degradation rate of the 0.44 m²/10 kW-class stack was 0.52% over 1000 h, which proves that the matrix was inadequate for a long life expectancy of the MCFC stack. A final endurance test was carried out on the 1.03 m²/10 kW-class stack, of which the matrix had been revised. The fuel utilisation and the leakage of anode gas never changed during the 10,000 h operating test. This result suggests that no shortening occurred during the 10,000 h endurance test. The cell voltage degradation rate was around 0.2-0.3% over 1000 h in the 1.03 m²/10 kW-class stack. According to a comparison of the stack electricity generating performance of the 0.44 m² and the 1.03 m²/10 kW-class stack under the same operating conditions, the performance of the 1.03 m² stack was lower at the beginning of the endurance test, however, its performance exceeded the performance of the 0.44 m²/10 kW-class stack during the 10,000 h operating test. By carrying out the high current density operating test and the 10,000-hour endurance test using commercial sized 10 kW-class stacks, the stability of the MCFC stack with a Li/Na carbonate electrolyte and a press stamping separator has been proven.

  7. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilize the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
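
    As a rough, hypothetical illustration of the general idea (fit a probability density to residuals from normal operation, then use it in a sequential statistical hypothesis test), the sketch below fits a normal density with scipy and runs a simple sequential probability ratio test against an assumed shifted-mean fault; none of the parameters, thresholds or distributions are taken from the patent.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      train_resid = rng.normal(0.0, 0.2, 5000)          # residuals from a healthy asset (toy)
      mu0, sigma = stats.norm.fit(train_resid)          # numerically fitted PDF
      mu1 = mu0 + 3 * sigma                             # assumed fault hypothesis

      A, B = np.log(99), np.log(1 / 99)                 # thresholds for ~1% error rates
      llr = 0.0
      for t, r in enumerate(rng.normal(0.5, 0.2, 200)): # surveillance data containing a fault
          llr += stats.norm.logpdf(r, mu1, sigma) - stats.norm.logpdf(r, mu0, sigma)
          if llr >= A:
              print(f"alarm raised at sample {t}"); break
          if llr <= B:
              llr = 0.0                                 # accept normal operation, restart the test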

  8. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilize the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  9. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilize the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  10. Simple gain probability functions for large reflector antennas of JPL/NASA

    NASA Technical Reports Server (NTRS)

    Jamnejad, V.

    2003-01-01

    Simple models for the patterns of the Deep Space Network antennas, as well as their cumulative gain probability and probability density functions, are developed. These are needed for studying and evaluating interference with the Ka-band receiving antenna systems at the Goldstone station of the Deep Space Network from unwanted sources such as the emerging terrestrial High Density Fixed Service.
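
    A hedged sketch of what such a gain probability function might look like, using a generic Gaussian main-beam approximation with invented parameters rather than the JPL/DSN pattern models: the probability that the gain toward an isotropically distributed source direction exceeds a given level.

      import numpy as np

      g_peak_dbi = 74.0          # assumed peak gain of a large reflector at Ka-band
      hpbw_deg = 0.077           # assumed half-power beamwidth
      floor_dbi = 0.0            # assumed far-sidelobe floor

      def prob_gain_exceeds(level_dbi):
          """P(gain > level) for G(theta) = Gpeak - 12*(theta/HPBW)^2 dBi, floored,
          with the source direction uniform over the sphere."""
          if level_dbi <= floor_dbi:
              return 1.0
          if level_dbi > g_peak_dbi:
              return 0.0
          theta = min(hpbw_deg * np.sqrt((g_peak_dbi - level_dbi) / 12.0), 180.0)
          return (1.0 - np.cos(np.radians(theta))) / 2.0    # fraction of solid angle

      for g in range(10, 80, 10):
          print(f"P(gain > {g} dBi) = {prob_gain_exceeds(g):.3e}")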

  11. Photochemical escape of oxygen from Mars: constraints from MAVEN in situ measurements

    NASA Astrophysics Data System (ADS)

    Lillis, R. J.; Deighan, J.; Fox, J. L.; Bougher, S. W.; Lee, Y.; Cravens, T.; Rahmati, A.; Mahaffy, P. R.; Andersson, L.; Combi, M. R.; Benna, M.; Jakosky, B. M.; Gröller, H.

    2016-12-01

    One of the primary goals of the MAVEN mission is to characterize rates of atmospheric escape from Mars at the present epoch and relate those escape rates to solar drivers. Photochemical escape of oxygen is expected to be a significant channel for atmospheric loss, particularly in the early solar system when extreme ultraviolet (EUV) fluxes were much higher. We use near-periapsis (<400 km altitude) data from three instruments. The Langmuir Probe and Waves (LPW) instrument measures electron density and temperature, the Suprathermal And Thermal Ion Composition (STATIC) experiment measures ion temperature and the Neutral Gas and Ion Mass Spectrometer (NGIMS) measures neutral and ion densities. For each profile of in situ measurements, we make a series of calculations, each as a function of altitude. The first uses electron and ion temperatures to calculate the probability distribution for initial energies of hot O atoms. The second calculates the probability that a hot atom born at that altitude will escape. The third calculates the production rate of the hot O atoms. We then multiply together the profiles of hot atom production and escape probability to get profiles of the production rate of escaping atoms. We integrate with respect to altitude to obtain the escape flux of hot oxygen atoms for that periapsis pass. We will present escape fluxes and derived escape rates from the first Mars year of data collected. Total photochemical loss over time is not very useful to calculate from such escape fluxes derived from current conditions because a thicker atmosphere and much higher solar EUV in the past may change the dynamics of escape dramatically. In the future, we intend to use 3-D Monte Carlo models of global atmospheric escape, in concert with our in situ and remote measurements, to fully characterize photochemical escape under current conditions and carefully extrapolate back in time using further simulations with new boundary conditions.
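
    A minimal numerical sketch of the altitude integration described above, with invented toy profiles in place of the MAVEN-derived production-rate and escape-probability profiles:

      import numpy as np

      z = np.linspace(130, 400, 200) * 1e5                      # altitude grid [cm]
      production = 2e3 * np.exp(-(z / 1e5 - 130) / 40.0)        # toy hot-O production [cm^-3 s^-1]
      p_escape = 1.0 / (1.0 + np.exp(-(z / 1e5 - 200) / 20.0))  # toy escape probability profile

      # Multiply production by escape probability and integrate over altitude.
      escape_flux = np.trapz(production * p_escape, z)          # [cm^-2 s^-1]
      print(f"photochemical O escape flux ~ {escape_flux:.2e} cm^-2 s^-1")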

  12. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    PubMed

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.

  13. Quantifying volcanic hazard at Campi Flegrei caldera (Italy) with uncertainty assessment: 2. Pyroclastic density current invasion maps

    NASA Astrophysics Data System (ADS)

    Neri, Augusto; Bevilacqua, Andrea; Esposti Ongaro, Tomaso; Isaia, Roberto; Aspinall, Willy P.; Bisson, Marina; Flandoli, Franco; Baxter, Peter J.; Bertagnini, Antonella; Iannuzzi, Enrico; Orsucci, Simone; Pistolesi, Marco; Rosi, Mauro; Vitale, Stefano

    2015-04-01

    Campi Flegrei (CF) is an example of an active caldera containing densely populated settlements at very high risk of pyroclastic density currents (PDCs). We present here an innovative method for assessing background spatial PDC hazard in a caldera setting with probabilistic invasion maps conditional on the occurrence of an explosive event. The method encompasses the probabilistic assessment of potential vent opening positions, derived in the companion paper, combined with inferences about the spatial density distribution of PDC invasion areas from a simplified flow model, informed by reconstruction of deposits from eruptions in the last 15 ka. The flow model describes the PDC kinematics and accounts for main effects of topography on flow propagation. Structured expert elicitation is used to incorporate certain sources of epistemic uncertainty, and a Monte Carlo approach is adopted to produce a set of probabilistic hazard maps for the whole CF area. Our findings show that, in case of eruption, almost the entire caldera is exposed to invasion with a mean probability of at least 5%, with peaks greater than 50% in some central areas. Some areas outside the caldera are also exposed to this danger, with mean probabilities of invasion of the order of 5-10%. Our analysis suggests that these probability estimates have location-specific uncertainties which can be substantial. The results prove to be robust with respect to alternative elicitation models and allow the influence on hazard mapping of different sources of uncertainty, and of theoretical and numerical assumptions, to be quantified.

  14. Optimizing selection of training and auxiliary data for operational land cover classification for the LCMAP initiative

    NASA Astrophysics Data System (ADS)

    Zhu, Zhe; Gallant, Alisa L.; Woodcock, Curtis E.; Pengra, Bruce; Olofsson, Pontus; Loveland, Thomas R.; Jin, Suming; Dahal, Devendra; Yang, Limin; Auch, Roger F.

    2016-12-01

    The U.S. Geological Survey's Land Change Monitoring, Assessment, and Projection (LCMAP) initiative is a new end-to-end capability to continuously track and characterize changes in land cover, use, and condition to better support research and applications relevant to resource management and environmental change. Among the LCMAP product suite are annual land cover maps that will be available to the public. This paper describes an approach to optimize the selection of training and auxiliary data for deriving the thematic land cover maps based on all available clear observations from Landsats 4-8. Training data were selected from map products of the U.S. Geological Survey's Land Cover Trends project. The Random Forest classifier was applied for different classification scenarios based on the Continuous Change Detection and Classification (CCDC) algorithm. We found that extracting training data proportionally to the occurrence of land cover classes was superior to an equal distribution of training data per class, and suggest using a total of 20,000 training pixels to classify an area about the size of a Landsat scene. The problem of unbalanced training data was alleviated by extracting a minimum of 600 training pixels and a maximum of 8000 training pixels per class. We additionally explored removing outliers contained within the training data based on their spectral and spatial criteria, but observed no significant improvement in classification results. We also tested the importance of different types of auxiliary data that were available for the conterminous United States, including: (a) five variables used by the National Land Cover Database, (b) three variables from the cloud screening "Function of mask" (Fmask) statistics, and (c) two variables from the change detection results of CCDC. We found that auxiliary variables such as a Digital Elevation Model and its derivatives (aspect, position index, and slope), potential wetland index, water probability, snow probability, and cloud probability improved the accuracy of land cover classification. Compared to the original strategy of the CCDC algorithm (500 pixels per class), the use of the optimal strategy improved the classification accuracies substantially (15-percentage point increase in overall accuracy and 4-percentage point increase in minimum accuracy).
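
    A small illustration of the recommended sampling strategy (roughly 20,000 training pixels allocated proportionally to class occurrence, with a floor of 600 and a cap of 8,000 pixels per class); the function name and the scene composition are invented for the example, and the clipped allocations are not re-normalized back to exactly 20,000.

      import numpy as np

      def allocate_training(class_counts, total=20000, min_per_class=600, max_per_class=8000):
          """Allocate training pixels proportionally to class occurrence, then clip to
          the per-class minimum and maximum suggested in the paper (sketch only)."""
          counts = np.asarray(list(class_counts.values()), dtype=float)
          alloc = total * counts / counts.sum()                 # proportional allocation
          alloc = np.clip(alloc, min_per_class, max_per_class)  # enforce per-class bounds
          return dict(zip(class_counts, np.round(alloc).astype(int)))

      # Hypothetical pixel counts per land cover class in a Landsat-scene-sized area.
      composition = {"forest": 4_000_000, "cropland": 2_500_000, "developed": 600_000,
                     "water": 300_000, "wetland": 80_000, "barren": 20_000}
      print(allocate_training(composition))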

  15. Operationalizing Max Weber's probability concept of class situation: the concept of social class.

    PubMed

    Smith, Ken

    2007-03-01

    In this essay I take seriously Max Weber's astonishingly neglected claim that class situation may be defined, not in categorial terms, but probabilistically. I then apply this idea to another equally neglected claim made by Weber that the boundaries of social classes may be determined by the degree of social mobility within such classes. Taking these two ideas together I develop the idea of a non-categorial boundary 'surface' between classes and of a social class 'corridor' made up of all those people who are still to be found within the boundaries of the social class into which they were born. I call social mobility within a social class 'intra-class social mobility' and social mobility between classes 'inter-class social mobility'. I also claim that this distinction resolves the dispute between those sociologists who claim that late industrial societies are still highly class bound and those who think that this is no longer the case. Both schools are right I think, but one is referring to a high degree of intra-class social mobility and the other to an equally high degree of inter-class mobility. Finally I claim that this essay provides sociology with only one example among many other possible applications of how probability theory might usefully be used to overcome boundary problems generally in sociology.

  16. A latent transition model of the effects of a teen dating violence prevention initiative.

    PubMed

    Williams, Jason; Miller, Shari; Cutbush, Stacey; Gibbs, Deborah; Clinton-Sherrod, Monique; Jones, Sarah

    2015-02-01

    Patterns of physical and psychological teen dating violence (TDV) perpetration, victimization, and related behaviors were examined with data from the evaluation of the Start Strong: Building Healthy Teen Relationships initiative, a dating violence primary prevention program targeting middle school students. Latent class and latent transition models were used to estimate distinct patterns of TDV and related behaviors of bullying and sexual harassment in seventh grade students at baseline and to estimate transition probabilities from one pattern of behavior to another at the 1-year follow-up. Intervention effects were estimated by conditioning transitions on exposure to Start Strong. Latent class analyses suggested four classes best captured patterns of these interrelated behaviors. Classes were characterized by elevated perpetration and victimization on most behaviors (the multiproblem class), bullying perpetration/victimization and sexual harassment victimization (the bully-harassment victimization class), bullying perpetration/victimization and psychological TDV victimization (bully-psychological victimization), and experience of bully victimization (bully victimization). Latent transition models indicated greater stability of class membership in the comparison group. Intervention students were less likely to transition to the most problematic pattern and more likely to transition to the least problem class. Although Start Strong has not been found to significantly change TDV, alternative evaluation models may find important differences. Latent transition analysis models suggest positive intervention impact, especially for the transitions at the most and the least positive end of the spectrum. Copyright © 2015. Published by Elsevier Inc.

  17. Space-time-modulated stochastic processes

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano

    2017-10-01

    Starting from the physical problem associated with the Lorentzian transformation of a Poisson-Kac process in inertial frames, the concept of space-time-modulated stochastic processes is introduced for processes possessing finite propagation velocity. This class of stochastic processes provides a two-way coupling between the stochastic perturbation acting on a physical observable and the evolution of the physical observable itself, which in turn influences the statistical properties of the stochastic perturbation during its evolution. The definition of space-time-modulated processes requires the introduction of two functions: a nonlinear amplitude modulation, controlling the intensity of the stochastic perturbation, and a time-horizon function, which modulates its statistical properties, providing irreducible feedback between the stochastic perturbation and the physical observable influenced by it. The latter property is the peculiar fingerprint of this class of models that makes them suitable for extension to generic curved space-times. Considering Poisson-Kac processes as prototypical examples of stochastic processes possessing finite propagation velocity, the balance equations for the probability density functions associated with their space-time modulations are derived. Several examples highlighting the peculiarities of space-time-modulated processes are thoroughly analyzed.

  18. Reducing Spatial Data Complexity for Classification Models

    NASA Astrophysics Data System (ADS)

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-01

    Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. Using the example of the Parzen Labelled Data Compressor (PLDC), we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, when coupled with a Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at comparable compression levels.

  19. Settlement and post-settlement survival rates of the white seabream (Diplodus sargus) in the western Mediterranean Sea

    PubMed Central

    Cuadros, Amalia; Basterretxea, Gotzon; Cardona, Luis; Cheminée, Adrien; Hidalgo, Manuel

    2018-01-01

    Survival during the settlement window is a limiting variable for recruitment. Survival is believed to be strongly determined by biological interactions and sea conditions; however, it has been poorly investigated. We examined the settlement patterns related to relevant biotic and abiotic factors (i.e., density dependence, wind stress, wave height and coastal current velocity) potentially determining post-settler survival rates of a coastal necto-benthic fish of wide distribution in the Mediterranean and eastern Atlantic, the white seabream (Diplodus sargus). An observational study of the demography of juveniles of this species was carried out at six coves on Menorca Island (Balearic Islands, western Mediterranean). Three of the coves were located on the northern, wind-exposed coast, on the northeast (NE) side of the island, while the other three were found along the southern, sheltered coast, on the southwest (SW) side. The settlement period extended from early May to late June, and maximum juvenile densities at the sampling sites varied between 5 and 11 ind. m⁻¹, with maximum values observed in late May occurring simultaneously on the two coasts. Our analysis of juvenile survival, based on the interpretation of the observed patterns using an individual-based model (IBM), revealed two stages in the size-mortality relationship. An initial density-dependent stage was observed for juveniles up to 20 mm TL, followed by a density-independent stage when other factors dominated survival at sizes > 20 mm TL. No significant environmental effects were observed for the small size class (<20 mm TL). Different significant environmental effects affecting NE and SW coves were observed for the medium (20–30 mm TL) and large (>30 mm TL) size classes. In the NE, wind stress consistently affected the density of fish of 20–30 mm and >30 mm TL with a dome-shaped effect: higher densities at intermediate values of wind stress and a negative effect at the extremes. The best models applied in the SW coves showed a significant non-linear negative effect on fish density that was also consistent for both the 20–30 mm and >30 mm TL groups. Higher densities were observed at low values of wave height in the two groups. Because of these variations, the number of juveniles present at the end of the period was unrelated to their initial density, and average survival varied among locations. In consequence, recruitment was (1) primarily limited by density-dependent processes at the settlement stage, and (2) by sea conditions at post-settlement, where extreme wave conditions depleted juveniles. Accordingly, regional hydrodynamic conditions during the settlement season produced significant impacts on juvenile densities depending on their size, with contrasting effects with respect to cove orientation. The similar strength of larval supply between coves, in addition to the similar mean phenology for settlers in the north and south of the island, suggests that all fish may come from the same parental reproductive pool. These factors should be taken into account when assessing relationships between settlers, recruits and adults of white seabream. PMID:29324758

  20. [Dynamics of sap flow density in stems of typical desert shrub Calligonum mongolicum and its responses to environmental variables].

    PubMed

    Xu, Shi-qin; Ji, Xi-bin; Jin, Bo-wen

    2016-02-01

    Independent measurements of sap flow in stems of Calligonum mongolicum and of environmental variables, made with commercial sap flow gauges and a micrometeorological monitoring system, respectively, were used to simulate the variation of sap flow density in the middle reaches of the Hexi Corridor, Northwest China, from June to September 2014. The results showed that the diurnal course of sap flow density in C. mongolicum followed a broad unimodal pattern; the maximum sap flow density was reached about 30 minutes after the maximum of photosynthetically active radiation (PAR) and about 120 minutes before the maxima of temperature and vapor pressure deficit (VPD). During the study period, sap flow density was closely related to atmospheric evapotranspiration demand and was mainly affected by PAR, temperature and VPD. A model was developed that directly linked sap flow density with climatic variables, and good correlation between measured and simulated sap flow density was observed under different climate conditions. The accuracy of the simulation was significantly improved when the time-lag effect was taken into consideration, although the model underestimated low and nighttime sap flow densities, which was probably caused by plant physiological characteristics.

  1. 49 CFR 192.609 - Change in class location: Required study.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... in class location: Required study. Whenever an increase in population density indicates a change in... account, for the segment of pipeline involved; and (f) The actual area affected by the population density...

  2. 49 CFR 192.609 - Change in class location: Required study.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... in class location: Required study. Whenever an increase in population density indicates a change in... account, for the segment of pipeline involved; and (f) The actual area affected by the population density...

  3. Mathematical models for nonparametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y|r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0|r).
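
    The following sketch applies the standard nonparametric line-transect estimator D = n f(0) / (2L), with a kernel estimate of the right-angle-distance density; this is the textbook distance-sampling form consistent with the framework above, not necessarily the exact estimator developed in the paper, and all numbers are synthetic.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(6)
      L_km = 50.0                                   # total transect length
      true_sigma = 0.08                             # km; detection falls off with distance
      y = np.abs(rng.normal(0, true_sigma, 300))    # observed right-angle distances

      # Reflect the data about 0 so the kernel estimate is not biased at the boundary.
      kde = gaussian_kde(np.concatenate([y, -y]))
      f0 = 2.0 * kde(0.0)[0]                        # estimated density of |y| at zero
      D_hat = len(y) * f0 / (2.0 * L_km)            # animals per square km
      print(f"estimated density ~ {D_hat:.1f} per km^2")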

  4. Ecohydrology of agroecosystems: probabilistic description of yield reduction risk under limited water availability

    NASA Astrophysics Data System (ADS)

    Vico, Giulia; Porporato, Amilcare

    2013-04-01

    Supplemental irrigation represents one of the main strategies to mitigate the effects of climate variability and stabilize yields. Irrigated agriculture currently provides 40% of food production and its relevance is expected to further increase in the near future, in the face of the projected alterations of rainfall patterns and increase in food, fiber, and biofuel demand. Because of the significant investments and water requirements involved in irrigation, strategic choices are needed to preserve productivity and profitability, while maintaining sustainable water management - a nontrivial task given the unpredictability of the rainfall forcing. To facilitate decision making under uncertainty, a widely applicable probabilistic framework is proposed. The occurrence of rainfall events and irrigation applications are linked probabilistically to crop development during the growing season and yields at harvest. Based on these linkages, the probability density function of yields and corresponding probability density function of required irrigation volumes, as well as the probability density function of yields under the most common case of limited water availability are obtained analytically, as a function of irrigation strategy, climate, soil and crop parameters. The full probabilistic description of the frequency of occurrence of yields and water requirements is a crucial tool for decision making under uncertainty, e.g., via expected utility analysis. Furthermore, the knowledge of the probability density function of yield allows us to quantify the yield reduction hydrologic risk. Two risk indices are defined and quantified: the long-term risk index, suitable for long-term irrigation strategy assessment and investment planning, and the real-time risk index, providing a rigorous probabilistic quantification of the emergence of drought conditions during a single growing season in an agricultural setting. Our approach employs relatively few parameters and is thus easily and broadly applicable to different crops and sites, under current and future climate scenarios. Hence, the proposed probabilistic framework provides a quantitative tool to assess the impact of irrigation strategy and water allocation on the risk of not meeting a certain target yield, thus guiding the optimal allocation of water resources for human and environmental needs.

  5. Multiscale Characterization of the Probability Density Functions of Velocity and Temperature Increment Fields

    NASA Astrophysics Data System (ADS)

    DeMarco, Adam Ward

    The turbulent motions within the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, Δu and ΔT, respectively, this work investigates their multiscale behavior to uncover unique traits that have yet to be thoroughly studied. Utilizing diverse datasets, including idealized wind tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small- and larger-scale components of the increment probability density function fields. This comprehensive analysis also utilizes a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increments' probability density functions (pdfs) across multiscale atmospheric motions. An approach is proposed for estimating these pdfs utilizing the maximum likelihood estimation (MLE) technique, which has not previously been applied to atmospheric data. Using this technique, we reveal the ability to estimate higher-order moments accurately with a limited sample size, which has been a persistent concern for atmospheric turbulence research. With the use of robust Goodness-of-Fit (GoF) metrics, we quantitatively assess how well the distributions fit the diverse datasets. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate for modeling the increment pdf fields. Therefore, using the NIG model and its parameters, we display the variations in the increments over a range of scales, revealing some unique scale-dependent qualities under various stability and flow conditions. This approach provides a method of characterizing increment fields with the use of only four pdf parameters. We also investigate the capability of current state-of-the-art mesoscale atmospheric models to predict these features and highlight the potential for use in future model development. With the knowledge gained in this study, a number of applications can benefit from our methodology, including wind energy and optical wave propagation.
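
    A short sketch of the kind of fit discussed above: maximum likelihood estimation of a normal inverse Gaussian (NIG) distribution for an increment field using scipy, with synthetic data standing in for the boundary-layer measurements (the generic MLE fit can be slow and is only an approximation of the study's procedure).

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      u = np.cumsum(rng.standard_t(df=5, size=20000)) * 0.01    # toy "velocity" series
      r = 10                                                     # separation in samples
      du = u[r:] - u[:-r]                                        # increment field

      a, b, loc, scale = stats.norminvgauss.fit(du)              # MLE of the 4 NIG parameters
      print("NIG parameters:", a, b, loc, scale)

      # Compare sample moments with the fitted model's moments (a simple fit check).
      mean, var, skew, kurt = stats.norminvgauss.stats(a, b, loc, scale, moments="mvsk")
      print("model skew/kurtosis:", skew, kurt)
      print("sample skew/kurtosis:", stats.skew(du), stats.kurtosis(du))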

  6. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator fault. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  7. A hazard and risk classification system for catastrophic rock slope failures in Norway

    NASA Astrophysics Data System (ADS)

    Hermanns, R.; Oppikofer, T.; Anda, E.; Blikra, L. H.; Böhme, M.; Bunkholt, H.; Dahle, H.; Devoli, G.; Eikenæs, O.; Fischer, L.; Harbitz, C. B.; Jaboyedoff, M.; Loew, S.; Yugsi Molina, F. X.

    2012-04-01

    The Geological Survey of Norway carries out systematic geologic mapping of potentially unstable rock slopes in Norway that can cause a catastrophic failure. By catastrophic failure we mean failures that involve substantial fragmentation of the rock mass during run-out and that impact an area larger than that of a rock fall (shadow angle of ca. 28-32° for rock falls). This therefore includes rock slope failures that lead to secondary effects, such as a displacement wave when impacting a water body or damming of a narrow valley. Our systematic mapping revealed more than 280 rock slopes with significant postglacial deformation, which might represent localities of large future rock slope failures. This large number necessitates prioritization of follow-up activities, such as more detailed investigations, periodic monitoring, and permanent monitoring with early warning. In the past, hazard and risk were assessed qualitatively for some sites; however, in order to compare sites so that political and financial decisions can be taken, it was necessary to develop a quantitative hazard and risk classification system. A preliminary classification system was presented and discussed with an expert group of Norwegian and international experts and afterwards adapted following their recommendations. This contribution presents the concept of this final hazard and risk classification that should be used in Norway in the upcoming years. Historical experience and possible future rockslide scenarios in Norway indicate that hazard assessment of large rock slope failures must be scenario-based, because the intensity of deformation and present displacement rates, as well as the geological structures activated by the sliding rock mass, can vary significantly on a given slope. In addition, for each scenario the run-out of the rock mass has to be evaluated. This includes secondary effects such as generation of displacement waves or landslide damming of valleys with the potential of later outburst floods. It became obvious that large rock slope failures cannot be evaluated on a slope scale with frequency analyses of historical and prehistorical events only, as multiple rockslides have occurred within one century on a single slope that prior to the recent failures had been inactive for several thousand years. In addition, a systematic analysis of the temporal distribution indicates that rockslide activity following deglaciation after the Last Glacial Maximum has been much higher than throughout the Holocene. Therefore the classification system has to be based primarily on the geological conditions on the deforming slope and on the deformation rates, and only with lesser weight on frequency analyses. Our hazard classification is therefore primarily based on several criteria: 1) development of the back-scarp, 2) development of the lateral release surfaces, 3) development of the potential basal sliding surface, 4) morphologic expression of the basal sliding surface, 5) kinematic feasibility tests for different displacement mechanisms, 6) landslide displacement rates, 7) change of displacement rates (acceleration), 8) increase of rockfall activity on the unstable rock slope, 9) presence of post-glacial events of similar size along the affected slope and in its vicinity. For each of these criteria, several conditions can be chosen from (e.g. different velocity classes for the displacement rate criterion). A score is assigned to each condition and the sum of all scores gives the total susceptibility score.
Since many of these observations are somewhat uncertain, the classification system is organized in a decision tree where probabilities can be assigned to each condition. All possibilities in the decision tree are computed and the individual probabilities giving the same total score are summed. Basic statistics show the minimum and maximum total scores of a scenario, as well as the mean and modal value. The final output is a cumulative frequency distribution of the susceptibility scores that can be divided into several classes, which are interpreted as susceptibility classes (very high, high, medium, low, and very low). Today, the Norwegian Planning and Building Act uses hazard classes defined by annual probabilities of damaging impact on buildings (<1/100, <1/1000, <1/5000, and zero for critical buildings). However, up to now there is not enough scientific knowledge to predict large rock slope failures within these strict classes. Therefore, the susceptibility classes will be matched with the hazard classes from the Norwegian Building Act (e.g. very high susceptibility represents the hazard class with annual probability >1/100). The risk analysis focuses only on the potential fatalities of a worst-case rockslide scenario and its secondary effects, and is done in consequence classes on a decimal logarithmic scale. However, we recommend that municipalities carry out detailed risk analyses for all high-risk objects. Finally, the hazard and risk classification system will give recommendations on where surveillance in the form of continuous 24/7 monitoring coupled with early-warning systems (high risk class) or periodic monitoring (medium risk class) should be carried out. These measures are intended to reduce the risk of loss of life due to a rock slope failure to close to zero, as the population can be evacuated in time if the stability situation changes. The final hazard and risk classification for all potentially unstable rock slopes in Norway, including all data used for the classification, will be published within the national landslide database (available at www.skrednett.no).
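
    A toy version of the scoring-and-probability scheme just described, with invented criteria, scores and probabilities: enumerating every branch of the decision tree, multiplying the condition probabilities and summing those that give the same total score yields the distribution (and cumulative distribution) of the susceptibility score.

      from itertools import product
      from collections import defaultdict

      # Each criterion has a few possible conditions, each with an assigned score and an
      # observer-assigned probability; all values below are invented for illustration only.
      criteria = {
          "back_scarp":        [(0, 0.1), (2, 0.6), (4, 0.3)],   # (score, probability)
          "basal_surface":     [(0, 0.2), (3, 0.5), (6, 0.3)],
          "displacement_rate": [(0, 0.4), (2, 0.4), (5, 0.2)],
          "acceleration":      [(0, 0.7), (4, 0.3)],
      }

      dist = defaultdict(float)
      for combo in product(*criteria.values()):
          total = sum(score for score, _ in combo)
          prob = 1.0
          for _, p in combo:
              prob *= p
          dist[total] += prob

      cum = 0.0
      for score in sorted(dist):
          cum += dist[score]
          print(f"total score {score:2d}: P = {dist[score]:.3f}, cumulative = {cum:.3f}")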

  8. Finite entanglement entropy and spectral dimension in quantum gravity

    NASA Astrophysics Data System (ADS)

    Arzano, Michele; Calcagni, Gianluca

    2017-12-01

    What are the conditions on a field theoretic model leading to a finite entanglement entropy density? We prove two very general results: (1) Ultraviolet finiteness of a theory does not guarantee finiteness of the entropy density; (2) If the spectral dimension of the spatial boundary across which the entropy is calculated is non-negative at all scales, then the entanglement entropy cannot be finite. These conclusions, which we verify in several examples, negatively affect all quantum-gravity models, since their spectral dimension is always positive. Possible ways out are considered, including abandoning the definition of the entanglement entropy in terms of the boundary return probability or admitting an analytic continuation (not a regularization) of the usual definition. In the second case, one can get a finite entanglement entropy density in multi-fractional theories and causal dynamical triangulations.

  9. Anisotropic electrical conduction and reduction in dangling-bond density for polycrystalline Si films prepared by catalytic chemical vapor deposition

    NASA Astrophysics Data System (ADS)

    Niikura, Chisato; Masuda, Atsushi; Matsumura, Hideki

    1999-07-01

    Polycrystalline Si (poly-Si) films with high crystalline fraction and low dangling-bond density were prepared by catalytic chemical vapor deposition (Cat-CVD), often called hot-wire CVD. Directional anisotropy in electrical conduction, probably due to structural anisotropy, was observed for Cat-CVD poly-Si films. A novel method to separately characterize both crystalline and amorphous phases in poly-Si films using anisotropic electrical conduction was proposed. On the basis of results obtained by the proposed method and electron spin resonance measurements, reduction in dangling-bond density for Cat-CVD poly-Si films was achieved using the condition to make the quality of the included amorphous phase high. The properties of Cat-CVD poly-Si films are found to be promising in solar-cell applications.

  10. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    PubMed

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies have pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and increasing muscle force. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters such as skewness and kurtosis. In experimental conditions, these parameters must be estimated from small sample sizes. The small sample size induces errors in the estimated HOS parameters, limiting real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate the behavior of both skewness and kurtosis. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the results, the functional statistics appear more robust than HOS parameters to small-sample-size effects and more accurate in sEMG PDF shape screening applications.
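
    The authors' CSM-based functional statistics are not reproduced here; as a simplified illustration of the two ingredients named in the abstract, the sketch below estimates the PDF of a small sample by Gaussian kernel density estimation and computes moment-type shape descriptors (skewness and kurtosis) from that estimate, alongside the direct HOS estimates.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    sample = rng.lognormal(mean=0.0, sigma=0.5, size=50)   # small sEMG-like sample

    # Direct (sample) high-order statistics
    print("sample skewness:", stats.skew(sample))
    print("sample kurtosis:", stats.kurtosis(sample, fisher=False))

    # The same shape descriptors computed from a kernel density estimate of the PDF
    kde = stats.gaussian_kde(sample)
    x = np.linspace(sample.min() - 1.0, sample.max() + 1.0, 2000)
    pdf = kde(x)
    pdf /= np.trapz(pdf, x)                                 # renormalize on the grid
    m1 = np.trapz(x * pdf, x)
    var = np.trapz((x - m1) ** 2 * pdf, x)
    print("KDE-based skewness:", np.trapz((x - m1) ** 3 * pdf, x) / var ** 1.5)
    print("KDE-based kurtosis:", np.trapz((x - m1) ** 4 * pdf, x) / var ** 2)
    ```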

  11. Evaluation of drought using SPEI drought class transitions and log-linear models for different agro-ecological regions of India

    NASA Astrophysics Data System (ADS)

    Alam, N. M.; Sharma, G. C.; Moreira, Elsa; Jana, C.; Mishra, P. K.; Sharma, N. K.; Mandal, D.

    2017-08-01

    Markov chain and 3-dimensional log-linear models were used to model drought class transitions derived from the newly developed Standardized Precipitation Evapotranspiration Index (SPEI) at a 12-month time scale for six major drought-prone areas of India. The log-linear modelling approach was used to investigate differences in drought class transitions using SPEI-12 time series derived from 48 years of monthly rainfall and temperature data. In this study, the probabilities of drought class transition, the mean residence time, the 1-, 2- or 3-month-ahead prediction of the average transition time between drought classes, and the drought severity class have been derived. Seasonality of precipitation has been derived for non-homogeneous Markov chains, which could be used to explain the effect of the potential retreat of drought. Quasi-association and quasi-symmetry log-linear models have been fitted to the drought class transitions derived from the SPEI-12 time series. The estimates of the odds, along with their confidence intervals, were obtained to describe the progression of drought and to estimate drought class transition probabilities. For the initial months, the calculated odds decrease as drought severity increases, and the odds decrease further for the succeeding months. This indicates that, as the drought severity of the present class increases, the expected frequency of transition to the non-drought class decreases relative to transition to any drought class. From the 3-dimensional log-linear model it is clear that during the last 24 years the drought probability has increased for almost all six regions. The findings of the present study will help to assess the impact of drought on gross primary production and to develop future contingency planning in similar regions worldwide.
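
    The quasi-association and quasi-symmetry log-linear fits of the paper are not reproduced here; the sketch below only illustrates the first step, estimating a first-order Markov transition probability matrix and mean residence times from a hypothetical monthly sequence of SPEI-12 drought classes.

    ```python
    import numpy as np

    # Hypothetical monthly SPEI-12 drought classes:
    # 0 = non-drought, 1 = moderate, 2 = severe, 3 = extreme
    classes = np.array([0, 0, 1, 1, 2, 2, 3, 2, 1, 0, 0, 1, 2, 2, 1, 0])
    n_classes = 4

    # Count one-step transitions and normalize each row to get the transition matrix
    counts = np.zeros((n_classes, n_classes))
    for a, b in zip(classes[:-1], classes[1:]):
        counts[a, b] += 1
    P = counts / counts.sum(axis=1, keepdims=True)
    print("transition probability matrix:\n", np.round(P, 2))

    # Mean residence time in class i for a first-order chain: 1 / (1 - P[i, i])
    print("mean residence times:", np.round(1.0 / (1.0 - np.diag(P)), 2))
    ```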

  12. Comparison of methods for estimating density of forest songbirds from point counts

    Treesearch

    Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey

    2011-01-01

    New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...

  13. Multiclass Posterior Probability Twin SVM for Motor Imagery EEG Classification.

    PubMed

    She, Qingshan; Ma, Yuliang; Meng, Ming; Luo, Zhizeng

    2015-01-01

    Motor imagery electroencephalography is widely used in brain-computer interface systems. Due to the inherent characteristics of electroencephalography signals, accurate and real-time multiclass classification is always challenging. To address this problem, a multiclass posterior probability solution for the twin SVM, based on ranking continuous outputs and pairwise coupling, is proposed in this paper. First, a two-class posterior probability model is constructed to approximate the posterior probability using ranking continuous output techniques and Platt's estimation method. Second, multiclass probabilistic outputs for the twin SVM are obtained by combining every pair of class probabilities using pairwise coupling. Finally, the proposed method is compared with multiclass SVM and twin SVM via voting, and with multiclass posterior probability SVM using different coupling approaches. The efficacy of the proposed method, in terms of classification accuracy and time complexity, is demonstrated on both UCI benchmark datasets and real-world EEG data from BCI Competition IV Dataset 2a.
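
    The paper's specific ranking-continuous-output construction and coupling scheme are not reproduced here; the sketch below only illustrates the pairwise-coupling step with a simple normalized-sum rule that turns pairwise posteriors into multiclass probabilities (the pairwise values are hypothetical).

    ```python
    import numpy as np

    def couple_pairwise(r, n_classes):
        """Combine pairwise posteriors r[(i, j)] = P(class i | class i or j)
        into multiclass probabilities with a simple normalized-sum rule."""
        p = np.zeros(n_classes)
        for i in range(n_classes):
            p[i] = sum(r[(i, j)] if (i, j) in r else 1.0 - r[(j, i)]
                       for j in range(n_classes) if j != i)
        return p / p.sum()

    # Hypothetical pairwise outputs for a 4-class motor imagery trial
    r = {(0, 1): 0.9, (0, 2): 0.8, (0, 3): 0.7,
         (1, 2): 0.4, (1, 3): 0.5,
         (2, 3): 0.6}
    print(couple_pairwise(r, 4))   # class 0 receives the highest probability
    ```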

  14. Analyzing time-ordered event data with missed observations.

    PubMed

    Dokter, Adriaan M; van Loon, E Emiel; Fokkema, Wimke; Lameris, Thomas K; Nolet, Bart A; van der Jeugd, Henk P

    2017-09-01

    A common problem with observational datasets is that not all events of interest may be detected. For example, observing animals in the wild can be difficult when animals move, hide, or cannot be closely approached. We consider time series of events recorded in conditions where events are occasionally missed by observers or observational devices. These time series are not restricted to behavioral protocols, but can be any cyclic or recurring process where discrete outcomes are observed. Undetected events cause biased inferences about the process of interest, and statistical analyses are needed that can identify and correct for the compromised detection process. Missed observations in time series lead to observed intervals between events at multiples of the true inter-event time, which conveys information on the detection probability. We derive the theoretical probability density function for observed intervals between events that includes a probability of missed detection. Methodology and software tools are provided for the analysis of event data with potential observation bias and its removal. The methodology was applied to simulated data and a case study of defecation rate estimation in geese, which is commonly used to estimate their digestive throughput and energetic uptake, or to calculate goose usage of a feeding site from dropping density. Simulations indicate that at a moderate chance of missing arrival events (p = 0.3), uncorrected arrival intervals were biased upward by up to a factor of 3, while parameter values corrected for missed observations were within 1% of their true simulated values. A field case study shows that not accounting for missed observations leads to substantial underestimates of the true defecation rate in geese, and to spurious rate differences between sites, which are introduced by differences in observational conditions. These results show that the derived methodology can be used to effectively remove observational biases in time-ordered event data.
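
    The paper's derivation is not reproduced here, but the structure it describes can be sketched under simple assumptions: if each event is detected independently with probability p and true inter-event intervals are gamma distributed, an observed gap spans k true intervals with geometric probability p(1 - p)^(k-1), so the observed-interval density is a geometric mixture of k-fold convolutions (which, for a gamma, are again gammas).

    ```python
    import numpy as np
    from scipy import stats

    def observed_interval_pdf(t, shape, scale, p_detect, k_max=20):
        """Density of intervals between *detected* events, assuming each event is
        detected independently with probability p_detect and true inter-event
        intervals are Gamma(shape, scale). A gap spanning k true intervals has
        probability p_detect * (1 - p_detect)**(k - 1), and the sum of k gamma
        intervals is Gamma(k * shape, scale)."""
        pdf = np.zeros_like(t, dtype=float)
        for k in range(1, k_max + 1):
            weight = p_detect * (1.0 - p_detect) ** (k - 1)
            pdf += weight * stats.gamma.pdf(t, a=k * shape, scale=scale)
        return pdf

    t = np.linspace(0.01, 30.0, 500)
    pdf_full = observed_interval_pdf(t, shape=4.0, scale=1.0, p_detect=1.0)  # no misses
    pdf_miss = observed_interval_pdf(t, shape=4.0, scale=1.0, p_detect=0.7)  # 30% missed
    print("mean interval (no misses):", np.trapz(t * pdf_full, t))
    print("mean interval (30% missed):", np.trapz(t * pdf_miss, t))          # biased upward
    ```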

  15. Forecasting the Rupture Directivity of Large Earthquakes: Centroid Bias of the Conditional Hypocenter Distribution

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2012-12-01

    Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).

  16. Probability density functions characterizing PSC particle size distribution parameters for NAT and STS derived from in situ measurements between 1989 and 2010 above McMurdo Station, Antarctica, and between 1991-2004 above Kiruna, Sweden

    NASA Astrophysics Data System (ADS)

    Deshler, Terry

    2016-04-01

    Balloon-borne optical particle counters were used to make in situ size-resolved particle concentration measurements within polar stratospheric clouds (PSCs) over 20 years in the Antarctic and over 10 years in the Arctic. The measurements were made primarily during the late winter in the Antarctic and in the early and mid-winter in the Arctic. Measurements in early and mid-winter were also made during 5 years in the Antarctic. For the analysis, bimodal lognormal size distributions are fitted to 250-meter averages of the particle concentration data. The characteristics of these fits, along with temperature, water and nitric acid vapor mixing ratios, are used to classify the PSC observations as either NAT, STS, ice, or some mixture of these. The vapor mixing ratios are obtained from satellite when possible; otherwise assumptions are made. This classification of the data is used to construct probability density functions for NAT, STS, and ice number concentration, median radius and distribution width for mid- and late-winter clouds in the Antarctic and for early and mid-winter clouds in the Arctic. Additional analysis is focused on characterizing the temperature histories associated with the particle classes and the different time periods. The results from these analyses will be presented and should be useful for setting bounds for retrievals of PSC properties from remote measurements, and for constraining model representations of PSCs.

  17. On the origin of heavy-tail statistics in equations of the Nonlinear Schrödinger type

    NASA Astrophysics Data System (ADS)

    Onorato, Miguel; Proment, Davide; El, Gennady; Randoux, Stephane; Suret, Pierre

    2016-09-01

    We study the formation of extreme events in incoherent systems described by the Nonlinear Schrödinger type of equations. We consider an exact identity that relates the evolution of the normalized fourth-order moment of the probability density function of the wave envelope to the rate of change of the width of the Fourier spectrum of the wave field. We show that, given an initial condition characterized by some distribution of the wave envelope, an increase of the spectral bandwidth in the focusing/defocusing regime leads to an increase/decrease of the probability of formation of rogue waves. Extensive numerical simulations in 1D+1 and 2D+1 are also performed to confirm the results.

  18. Bayesian anomaly detection in monitoring data applying relevance vector machine

    NASA Astrophysics Data System (ADS)

    Saito, Tomoo

    2011-04-01

    A method for automatically classifying monitoring data into two categories, normal and anomalous, is developed in order to remove anomalous data from the enormous amount of monitoring data. The relevance vector machine (RVM) is applied to a probabilistic discriminative model with basis functions and weight parameters whose posterior PDF (probability density function), conditional on the learning data set, is given by Bayes' theorem. The proposed framework is applied to actual monitoring data sets containing some anomalous data collected at two buildings in Tokyo, Japan. The trained models discriminate anomalous data from normal data very clearly, giving high probabilities of being normal to normal data and low probabilities of being normal to anomalous data.

  19. Comparative Study of Teachers in Regular Schools and Teachers in Specialized Schools in France, Working with Students with an Autism Spectrum Disorder: Stress, Social Support, Coping Strategies and Burnout.

    PubMed

    Boujut, Emilie; Dean, Annika; Grouselle, Amélie; Cappe, Emilie

    2016-09-01

    The inclusion of students with Autism Spectrum Disorder (ASD) in schools is a source of stress for teachers. Specialized teachers have, in theory, received special training. The aim of this study was to compare the experiences of teachers dealing with students with ASD in different classroom environments. A total of 245 teachers filled out four self-report questionnaires measuring perceived stress, social support, coping strategies, and burnout. Specialized teachers perceive their teaching as a challenge, can count on receiving help from colleagues, use more problem-focused coping strategies and social support seeking behavior, and are less emotionally exhausted than teachers in regular classes. This study highlights that teachers in specialized schools and classes have better adjustment, probably due to their training, experience, and tailored classroom conditions.

  20. Spatial correlations and probability density function of the phase difference in a developed speckle-field: numerical and natural experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mysina, N Yu; Maksimova, L A; Ryabukho, V P

    Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of the numerical experiments.

  1. Integrated Bayesian models of learning and decision making for saccadic eye movements.

    PubMed

    Brodersen, Kay H; Penny, Will D; Harrison, Lee M; Daunizeau, Jean; Ruff, Christian C; Duzel, Emrah; Friston, Karl J; Stephan, Klaas E

    2008-11-01

    The neurophysiology of eye movements has been studied extensively, and several computational models have been proposed for decision-making processes that underlie the generation of eye movements towards a visual stimulus in a situation of uncertainty. One class of models, known as linear rise-to-threshold models, provides an economical, yet broadly applicable, explanation for the observed variability in the latency between the onset of a peripheral visual target and the saccade towards it. So far, however, these models do not account for the dynamics of learning across a sequence of stimuli, and they do not apply to situations in which subjects are exposed to events with conditional probabilities. In this methodological paper, we extend the class of linear rise-to-threshold models to address these limitations. Specifically, we reformulate previous models in terms of a generative, hierarchical model, by combining two separate sub-models that account for the interplay between learning of target locations across trials and the decision-making process within trials. We derive a maximum-likelihood scheme for parameter estimation as well as model comparison on the basis of log likelihood ratios. The utility of the integrated model is demonstrated by applying it to empirical saccade data acquired from three healthy subjects. Model comparison is used (i) to show that eye movements do not only reflect marginal but also conditional probabilities of target locations, and (ii) to reveal subject-specific learning profiles over trials. These individual learning profiles are sufficiently distinct that test samples can be successfully mapped onto the correct subject by a naïve Bayes classifier. Altogether, our approach extends the class of linear rise-to-threshold models of saccadic decision making, overcomes some of their previous limitations, and enables statistical inference both about learning of target locations across trials and the decision-making process within trials.

  2. Influence of primitive Biłgoraj horses on the glossy buckthorn (Frangula alnus)-dominated understory in a mixed coniferous forest

    NASA Astrophysics Data System (ADS)

    Klich, Daniel

    2018-02-01

    Changes in the understory dominated by glossy buckthorn Frangula alnus via the influence of primitive horses were analyzed in a 28-year-old enclosure in the village of Szklarnia at the Biłgoraj Horse-Breeding Centre near Janów Lubelski (eastern Poland). The analysis was conducted in 20 circular plots (30 m2) defined in adjacent, similar forest stands (enclosed and control). Disturbance by the horses, mainly through trampling, caused numerous paths to form within the glossy buckthorn-dominated understory and led to a decrease in density of stems of lower height classes (30-80 and 81-130 cm, respectively). An increase in species diversity at the expense of glossy buckthorn density was also observed. The horses' trampling caused an increase in Padus avium density and the encroachment of other woody plant species that were less shade-tolerant and grew well in soils rich in nutrients. An increase in the density of woody plants over 180 cm above ground was observed within the enclosure, which was probably the result of the horses' excretion of feces. The results presented here provide new insight into the ecological role that horses play in forest-meadow landscape mosaics, which, via altering the development of vegetation, may contribute to an increase in biodiversity within forest habitats.

  3. An information measure for class discrimination. [in remote sensing of crop observation

    NASA Technical Reports Server (NTRS)

    Shen, S. S.; Badhwar, G. D.

    1986-01-01

    This article describes a separability measure for class discrimination. This measure is based on the Fisher information measure for estimating the mixing proportion of two classes. The Fisher information measure not only provides a means to assess quantitatively the information content in the features for separating classes, but also gives the lower bound for the variance of any unbiased estimate of the mixing proportion based on observations of the features. Unlike most commonly used separability measures, this measure is not dependent on the form of the probability distribution of the features and does not imply a specific estimation procedure. This is important because the probability distribution function that describes the data for a given class does not have simple analytic forms, such as a Gaussian. Results of applying this measure to compare the information content provided by three Landsat-derived feature vectors for the purpose of separating small grains from other crops are presented.
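
    The abstract does not give the measure explicitly; the sketch below uses the Fisher information for the mixing proportion of a two-component mixture, which follows from differentiating the mixture log-likelihood and is presumably the quantity underlying the measure: I(p) = ∫ (f1 - f2)² / (p f1 + (1 - p) f2) dx.

    ```python
    import numpy as np
    from scipy import stats

    def mixing_fisher_information(f1, f2, p, x):
        """Fisher information for the mixing proportion p in the two-class mixture
        p*f1 + (1-p)*f2, evaluated by numerical integration on the grid x."""
        mix = p * f1 + (1.0 - p) * f2
        return np.trapz((f1 - f2) ** 2 / mix, x)

    # Example: two univariate Gaussian class-conditional densities
    x = np.linspace(-10.0, 10.0, 4000)
    f1 = stats.norm.pdf(x, loc=0.0, scale=1.0)
    f2 = stats.norm.pdf(x, loc=1.5, scale=1.0)
    print("I(p=0.5):", mixing_fisher_information(f1, f2, 0.5, x))

    # Larger class separation -> larger information -> smaller variance bound on p-hat
    f2_far = stats.norm.pdf(x, loc=4.0, scale=1.0)
    print("I(p=0.5), well separated:", mixing_fisher_information(f1, f2_far, 0.5, x))
    ```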

  4. Permeability structure and its influence on microbial activity at off-Shimokita basin, Japan

    NASA Astrophysics Data System (ADS)

    Tanikawa, W.; Yamada, Y.; Sanada, Y.; Kubo, Y.; Inagaki, F.

    2016-12-01

    The microbial populations and the limits of microbial life in the deep subseafloor are probably controlled by chemical, physical, and geological conditions, such as temperature, pore water chemistry, pH, and water activity; however, the key parameters affecting growth in deep subseafloor sediments remain unclarified (Hinrichs and Inagaki 2012). IODP Expedition 337 was conducted near a continental margin basin off the Shimokita Peninsula, Japan, to investigate microbial activity in deep marine coalbed sediments down to 2500 mbsf. Inagaki et al. (2015) discovered that microbial abundance decreased markedly with depth (the lowest cell density of <1 cell/cm3 was recorded below 2000 mbsf), and that the coal bed layers had relatively higher cell densities. In this study, permeability was measured on core samples from IODP Expedition 337 and Expedition CK06-06 of the D/V Chikyu shakedown cruise. Permeability was measured at in-situ effective pressure conditions and calculated by the steady-state flow method, with the differential pore pressure kept between 0.1 and 0.8 MPa. Our results show that the permeability of the core samples decreases with depth, from 10^-16 m2 near the seafloor to 10^-20 m2 at the bottom of the hole. However, permeability is highly scattered within the coal bed unit (1900 to 2000 mbsf). Permeabilities for sandstone and coal are higher than those for siltstone and shale; the scatter of permeabilities within the same unit is therefore due to the high variation in lithology. The highest permeability was observed in coal samples, probably due to the formation of micro cracks (cleats). Permeability estimated from NMR logging using empirical parameters is around two orders of magnitude higher than the permeability of the core samples, even though the relative vertical variation of permeability is quite similar between core and logging data. Higher cell densities are observed in the relatively permeable formations. On the other hand, the correlation between cell density, water activity, and porosity is not clear. On the assumption that the pressure gradient is constant with depth, the flow rate is proportional to the permeability of the sediments. The flow rate probably restricts the availability of energy and nutrients for microorganisms; permeability might therefore have influenced microbial activity in the coalbed basin.

  5. Consumer-directed health care and the disadvantaged.

    PubMed

    Bloche, M Gregg

    2007-01-01

    Broad adoption of "consumer-directed health care" would probably widen socioeconomic disparities in care and redistribute wealth in "reverse Robin Hood" fashion, from the working poor and middle classes to the well-off. Racial and ethnic disparities in care would also probably worsen. These effects could be alleviated by adjustments to the consumer-directed paradigm. Possible fixes include more progressive tax subsidies, tiering of cost-sharing schemes to promote high-value care, and reduced cost sharing for the less well-off. These fixes, though, are unlikely to gain traction. If consumer-directed plans achieve market dominance, disparities in care by class and race will probably grow.

  6. Site occupancy models with heterogeneous detection probabilities

    USGS Publications Warehouse

    Royle, J. Andrew

    2006-01-01

    Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these 'site occupancy' models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
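
    As a minimal sketch of the integrated likelihood described above, the example below evaluates the zero-inflated binomial mixture for single sites with a beta mixing distribution for the detection probability (one of several mixture classes one might choose); the occupancy and beta parameters are illustrative.

    ```python
    import numpy as np
    from scipy import stats, integrate

    def site_log_likelihood(y, J, psi, a, b):
        """Integrated likelihood for one site with y detections in J visits:
        occupancy probability psi, detection probability p ~ Beta(a, b).
            L = psi * E_p[Binom(y | J, p)] + (1 - psi) * 1{y = 0}
        The expectation over p is evaluated by numerical integration for clarity."""
        integrand = lambda p: stats.binom.pmf(y, J, p) * stats.beta.pdf(p, a, b)
        marginal, _ = integrate.quad(integrand, 0.0, 1.0)
        return np.log(psi * marginal + (1.0 - psi) * (y == 0))

    # Detection histories (number of detections out of J = 5 visits) at a few sites
    ys = [0, 0, 2, 5, 1]
    J = 5
    total_ll = sum(site_log_likelihood(y, J, psi=0.6, a=2.0, b=3.0) for y in ys)
    print("total log-likelihood:", total_ll)
    ```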

  7. Analysis and design of asymmetrical reluctance machine

    NASA Astrophysics Data System (ADS)

    Harianto, Cahya A.

    Over the past few decades the induction machine has been chosen for many applications due to its structural simplicity and low manufacturing cost. However, modest torque density and control challenges have motivated researchers to find alternative machines. The permanent magnet synchronous machine has been viewed as one of the alternatives because it features higher torque density for a given loss than the induction machine. However, the assembly and permanent magnet material costs, along with safety under fault conditions, have been concerns for this class of machine. An alternative machine type, namely the asymmetrical reluctance machine, is proposed in this work. Since the proposed machine is of the reluctance type, it possesses desirable features, such as a near absence of rotor losses, low assembly cost, low no-load rotational losses, modest torque ripple, and rather benign fault conditions. Through the theoretical analysis performed herein, it is shown that this machine has a higher torque density for a given loss than typical reluctance machines, although not as high as permanent magnet machines. Thus, the asymmetrical reluctance machine is a viable and advantageous alternative where the use of permanent magnet machines is undesirable.

  8. Tropical insular fish assemblages are resilient to flood disturbance

    USGS Publications Warehouse

    Smith, William E.; Kwak, Thomas J.

    2015-01-01

    Periods of stable environmental conditions, favoring development of ecological communities regulated by density-dependent processes, are interrupted by random periods of disturbance that may restructure communities. Disturbance may affect populations via habitat alteration, mortality, or displacement. We quantified fish habitat conditions, density, and movement before and after a major flood disturbance in a Caribbean island tropical river using habitat surveys, fish sampling and population estimates, radio telemetry, and passively monitored PIT tags. Native stream fish populations showed evidence of acute mortality and downstream displacement of surviving fish. All fish species were reduced in number at most life stages after the disturbance, but populations responded with recruitment and migration into vacated upstream habitats. Changes in density were uneven among size classes for most species, indicating altered size structures. Rapid recovery processes at the population level appeared to dampen effects at the assemblage level, as fish assemblage parameters (species richness and diversity) were unchanged by the flooding. The native fish assemblage appeared resilient to flood disturbance, rapidly compensating for mortality and displacement with increased recruitment and recolonization of upstream habitats.

  9. A framework for fast probabilistic centroid-moment-tensor determination—inversion of regional static displacement measurements

    NASA Astrophysics Data System (ADS)

    Käufl, Paul; Valentine, Andrew P.; O'Toole, Thomas B.; Trampert, Jeannot

    2014-03-01

    The determination of earthquake source parameters is an important task in seismology. For many applications, it is also valuable to understand the uncertainties associated with these determinations, and this is particularly true in the context of earthquake early warning (EEW) and hazard mitigation. In this paper, we develop a framework for probabilistic moment tensor point source inversions in near real time. Our methodology allows us to find an approximation to p(m|d), the conditional probability of source models (m) given observations (d). This is obtained by smoothly interpolating a set of random prior samples, using Mixture Density Networks (MDNs), a class of neural networks which output the parameters of a Gaussian mixture model. By combining multiple networks as 'committees', we are able to obtain a significant improvement in performance over that of a single MDN. Once a committee has been constructed, new observations can be inverted within milliseconds on a standard desktop computer. The method is therefore well suited for use in situations such as EEW, where inversions must be performed routinely and rapidly for a fixed station geometry. To demonstrate the method, we invert regional static GPS displacement data for the 2010 MW 7.2 El Mayor Cucapah earthquake in Baja California to obtain estimates of magnitude, centroid location and depth, and focal mechanism. We investigate the extent to which we can constrain moment tensor point sources with static displacement observations under realistic conditions. Our inversion results agree well with published point source solutions for this event, once the uncertainty bounds of each are taken into account.
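
    The committee architecture and the centroid-moment-tensor parameterization of the paper are not reproduced here; the sketch below is a minimal single mixture density network for a one-dimensional target, assuming PyTorch, with the usual Gaussian-mixture negative log-likelihood loss and a toy multi-valued inverse problem as training data.

    ```python
    import torch
    import torch.nn as nn

    class MixtureDensityNetwork(nn.Module):
        """Maps an observation vector d to the parameters (weights, means, stds)
        of a 1-D Gaussian mixture approximating p(m | d)."""
        def __init__(self, n_in, n_hidden=64, n_components=5):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh())
            self.logits = nn.Linear(n_hidden, n_components)    # mixture weights
            self.means = nn.Linear(n_hidden, n_components)
            self.log_stds = nn.Linear(n_hidden, n_components)

        def forward(self, d):
            h = self.body(d)
            return self.logits(h), self.means(h), self.log_stds(h)

    def mdn_nll(logits, means, log_stds, target):
        """Negative log-likelihood of the targets under the predicted mixtures."""
        log_w = torch.log_softmax(logits, dim=-1)
        comp = torch.distributions.Normal(means, log_stds.exp())
        log_prob = comp.log_prob(target.unsqueeze(-1))         # per-component
        return -torch.logsumexp(log_w + log_prob, dim=-1).mean()

    # Toy usage: approximate p(m | d) for a noisy, multi-valued relation m -> d
    torch.manual_seed(0)
    m_true = torch.rand(2000, 1) * 2 - 1
    d_obs = torch.sin(3 * m_true) + 0.1 * torch.randn_like(m_true)
    net = MixtureDensityNetwork(n_in=1)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(500):
        logits, means, log_stds = net(d_obs)
        loss = mdn_nll(logits, means, log_stds, m_true.squeeze(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    print("final negative log-likelihood:", float(loss))
    ```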

  10. Does highly symptomatic class membership in the acute phase predict highly symptomatic classification in victims 6 months after traumatic exposure?

    PubMed

    Hansen, Maj; Hyland, Philip; Armour, Cherie

    2016-05-01

    Recently, studies have indicated the existence of both posttraumatic stress disorder (PTSD) and acute stress disorder (ASD) subtypes, but no studies have investigated their mutual association. Although ASD may not be a precursor of PTSD per se, there are potential benefits associated with early identification of victims at risk of developing PTSD subtypes. The present study investigates ASD and PTSD subtypes using latent class analysis (LCA) following bank robbery (N=371). Moreover, we assessed whether highly symptomatic ASD and selected risk factors increased the probability of highly symptomatic PTSD. The results of LCA revealed a three-class solution for ASD and a two-class solution for PTSD. Negative cognitions about self (OR=1.08), neuroticism (OR=1.09) and membership of the 'High symptomatic ASD' class (OR=20.41) significantly increased the probability of 'symptomatic PTSD' class membership. Future studies are needed to investigate the existence of ASD and PTSD subtypes and their mutual relationship. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Dissipative Particle Dynamics at Isothermal, Isobaric Conditions Using Shardlow-Like Splitting Algorithms

    DTIC Science & Technology

    2013-09-01


  12. Bayesian inference based on stationary Fokker-Planck sampling.

    PubMed

    Berrones, Arturo

    2010-06-01

    A novel formalism for Bayesian learning in the context of complex inference models is proposed. The method is based on the use of the stationary Fokker-Planck (SFP) approach to sample from the posterior density. Stationary Fokker-Planck sampling generalizes the Gibbs sampler algorithm for arbitrary and unknown conditional densities. By the SFP procedure, approximate analytical expressions for the conditionals and marginals of the posterior can be constructed. At each stage of SFP, the approximate conditionals are used to define a Gibbs sampling process, which is convergent to the full joint posterior. Using the analytical marginals, efficient learning methods in the context of artificial neural networks are outlined. Offline and incremental Bayesian inference and maximum likelihood estimation from the posterior are performed in classification and regression examples. A comparison of SFP with other Monte Carlo strategies in the general problem of sampling from arbitrary densities is also presented. It is shown that SFP is able to jump across large low-probability regions without the need for careful tuning of any step-size parameter. In fact, the SFP method requires only a small set of meaningful parameters that can be selected following clear, problem-independent guidelines. The computational cost of SFP, measured in terms of loss function evaluations, grows linearly with the given model's dimension.
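
    The SFP construction itself is not sketched here; as a baseline for the conditional-sampling structure it generalizes, the example below is an ordinary Gibbs sampler for a correlated bivariate Gaussian, where the exact conditional densities are known in closed form.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    rho = 0.8                # correlation of the bivariate Gaussian "posterior"
    n_samples = 5000

    # Gibbs sampling: alternately draw each coordinate from its conditional density.
    # For a standard bivariate Gaussian, x1 | x2 is normal with mean rho * x2 and
    # variance 1 - rho**2 (and vice versa), so the conditional std is sqrt(1 - rho**2).
    x = np.zeros((n_samples, 2))
    for t in range(1, n_samples):
        x1 = rng.normal(rho * x[t - 1, 1], np.sqrt(1 - rho ** 2))
        x2 = rng.normal(rho * x1, np.sqrt(1 - rho ** 2))
        x[t] = (x1, x2)

    print("sample correlation:", np.corrcoef(x[1000:].T)[0, 1])   # close to rho
    ```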

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, Anthony M.; Williams, Liliya L.R.; Hjorth, Jens, E-mail: amyoung@astro.umn.edu, E-mail: llrw@astro.umn.edu, E-mail: jens@dark-cosmology.dk

    One usually thinks of a radial density profile as having a monotonically changing logarithmic slope, such as in NFW or Einasto profiles. However, in two different classes of commonly used systems, this is often not the case. These classes exhibit non-monotonic changes in their density profile slopes, which we call oscillations for short. We analyze these two unrelated classes separately. Class 1 consists of systems that have density oscillations and that are defined through their distribution function f(E), or differential energy distribution N(E), such as isothermal spheres, King profiles, or DARKexp, a theoretically derived model for relaxed collisionless systems. Systems defined through f(E) or N(E) generally have density slope oscillations. Class 1 system oscillations can be found at small, intermediate, or large radii, but we focus on a limited set of Class 1 systems that have oscillations in the central regions, usually at log(r/r_-2) ≲ -2, where r_-2 is the largest radius where d log(ρ)/d log(r) = -2. We show that the shape of their N(E) can roughly predict the amplitude of oscillations. Class 2 systems, which are a product of dynamical evolution, consist of observed and simulated galaxies and clusters, and pure dark matter halos. Oscillations in the density profile slope seem pervasive in the central regions of Class 2 systems. We argue that in these systems, slope oscillations are an indication that a system is not fully relaxed. We show that these oscillations can be reproduced by small modifications to the N(E) of DARKexp. These affect a small fraction of the systems' mass and are confined to log(r/r_-2) ≲ 0. The size of these modifications serves as a potential diagnostic for quantifying how far a system is from being relaxed.

  14. PDF-based heterogeneous multiscale filtration model.

    PubMed

    Gong, Jian; Rutland, Christopher J

    2015-04-21

    Motivated by modeling of gasoline particulate filters (GPFs), a probability density function (PDF) based heterogeneous multiscale filtration (HMF) model is developed to calculate filtration efficiency of clean particulate filters. A new methodology based on statistical theory and classic filtration theory is developed in the HMF model. Based on the analysis of experimental porosimetry data, a pore size probability density function is introduced to represent heterogeneity and multiscale characteristics of the porous wall. The filtration efficiency of a filter can be calculated as the sum of the contributions of individual collectors. The resulting HMF model overcomes the limitations of classic mean filtration models which rely on tuning of the mean collector size. Sensitivity analysis shows that the HMF model recovers the classical mean model when the pore size variance is very small. The HMF model is validated by fundamental filtration experimental data from different scales of filter samples. The model shows a good agreement with experimental data at various operating conditions. The effects of the microstructure of filters on filtration efficiency as well as the most penetrating particle size are correctly predicted by the model.

  15. Probability density of aperture-averaged irradiance fluctuations for long range free space optical communication links.

    PubMed

    Lyke, Stephen D; Voelz, David G; Roggemann, Michael C

    2009-11-20

    The probability density function (PDF) of aperture-averaged irradiance fluctuations is calculated from wave-optics simulations of a laser after propagation through atmospheric turbulence to investigate the evolution of the distribution as the aperture diameter is increased. The simulation data distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes from weak to strong. Results show that under weak scintillation conditions both the gamma-gamma and lognormal PDF models provide a good fit to the simulation data for all aperture sizes studied. Our results indicate that in moderate scintillation the gamma-gamma PDF provides a better fit to the simulation data than the lognormal PDF for all aperture sizes studied. In the strong scintillation regime, the simulation data distribution is gamma-gamma for aperture sizes much smaller than the coherence radius ρ0 and lognormal for aperture sizes on the order of ρ0 and larger. Examples of how these results affect the bit-error rate of an on-off keyed free space optical communication link are presented.
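
    The simulations themselves are not reproduced; the sketch below only evaluates the two model densities being compared, using the standard gamma-gamma and unit-mean lognormal irradiance PDFs from the turbulence literature with illustrative parameter values.

    ```python
    import numpy as np
    from scipy.special import gamma as gamma_fn, kv

    def gamma_gamma_pdf(I, alpha, beta):
        """Gamma-gamma PDF of normalized irradiance (unit mean)."""
        coeff = 2.0 * (alpha * beta) ** ((alpha + beta) / 2.0) / (gamma_fn(alpha) * gamma_fn(beta))
        return coeff * I ** ((alpha + beta) / 2.0 - 1.0) * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * I))

    def lognormal_pdf(I, sigma2):
        """Lognormal PDF of normalized irradiance with unit mean and log-irradiance variance sigma2."""
        return np.exp(-(np.log(I) + sigma2 / 2.0) ** 2 / (2.0 * sigma2)) / (I * np.sqrt(2.0 * np.pi * sigma2))

    I = np.linspace(0.001, 10.0, 2000)
    gg = gamma_gamma_pdf(I, alpha=4.0, beta=2.0)
    ln = lognormal_pdf(I, sigma2=0.25)
    print("gamma-gamma normalization:", np.trapz(gg, I))   # both should be close to 1
    print("lognormal normalization:  ", np.trapz(ln, I))
    ```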

  16. Evaluating a linearized Euler equations model for strong turbulence effects on sound propagation.

    PubMed

    Ehrhardt, Loïc; Cheinet, Sylvain; Juvé, Daniel; Blanc-Benon, Philippe

    2013-04-01

    Sound propagation outdoors is strongly affected by atmospheric turbulence. Under strongly perturbed conditions or long propagation paths, the sound fluctuations reach their asymptotic behavior, e.g., the intensity variance progressively saturates. The present study evaluates the ability of a numerical propagation model based on the finite-difference time-domain solving of the linearized Euler equations in quantitatively reproducing the wave statistics under strong and saturated intensity fluctuations. It is the continuation of a previous study where weak intensity fluctuations were considered. The numerical propagation model is presented and tested with two-dimensional harmonic sound propagation over long paths and strong atmospheric perturbations. The results are compared to quantitative theoretical or numerical predictions available on the wave statistics, including the log-amplitude variance and the probability density functions of the complex acoustic pressure. The match is excellent for the evaluated source frequencies and all sound fluctuations strengths. Hence, this model captures these many aspects of strong atmospheric turbulence effects on sound propagation. Finally, the model results for the intensity probability density function are compared with a standard fit by a generalized gamma function.

  17. Synthesis and analysis of discriminators under influence of broadband non-Gaussian noise

    NASA Astrophysics Data System (ADS)

    Artyushenko, V. M.; Volovach, V. I.

    2018-01-01

    We considered the problems of the synthesis and analysis of discriminators, when the useful signal is exposed to non-Gaussian additive broadband noise. It is shown that in this case, the discriminator of the tracking meter should contain the nonlinear transformation unit, the characteristics of which are determined by the Fisher information relative to the probability density function of the mixture of non-Gaussian broadband noise and mismatch errors. The parameters of the discriminatory and phase characteristics of the discriminators working under the above conditions are obtained. It is shown that the efficiency of non-linear processing depends on the ratio of power of FM noise to the power of Gaussian noise. The analysis of the information loss of signal transformation caused by the linear section of discriminatory characteristics of the unit of nonlinear transformations of the discriminator is carried out. It is shown that the average slope of the nonlinear transformation characteristic is determined by the Fisher information relative to the probability density function of the mixture of non-Gaussian noise and mismatch errors.

  18. An analytical model for regular respiratory signals derived from the probability density function of Rayleigh distribution.

    PubMed

    Li, Xin; Li, Ye

    2015-01-01

    Regular respiratory signals (RRSs) acquired with physiological sensing systems (e.g., the life-detection radar system) can be used to locate survivors trapped in debris in disaster rescue, or predict the breathing motion to allow beam delivery under free breathing conditions in external beam radiotherapy. Among the existing analytical models for RRSs, the harmonic-based random model (HRM) is shown to be the most accurate, which, however, is found to be subject to considerable error if the RRS has a slowly descending end-of-exhale (EOE) phase. The defect of the HRM motivates us to construct a more accurate analytical model for the RRS. In this paper, we derive a new analytical RRS model from the probability density function of Rayleigh distribution. We evaluate the derived RRS model by using it to fit a real-life RRS in the sense of least squares, and the evaluation result shows that, our presented model exhibits lower error and fits the slowly descending EOE phases of the real-life RRS better than the HRM.

  19. The effect of magnetic field on RbCl quantum pseudodot qubit

    NASA Astrophysics Data System (ADS)

    Xiao, Jing-Lin

    2015-07-01

    Under the condition of strong electron-LO-phonon coupling in a RbCl quantum pseudodot (QPD) with an applied magnetic field (MF), the eigenenergies and the eigenfunctions of the ground and the first excited states (GFES) are obtained by using a variational method of the Pekar type (VMPT). A single qubit can be realized in this two-level quantum system. The electron's probability density oscillates in the RbCl QPD with a period of T0 = 7.933 fs when the electron is in a superposition of the GFES. The results indicate that, due to the asymmetrical structure in the z direction of the RbCl QPD, the electron's probability density shows a double-peak configuration, whereas there is only one peak if the confinement is a symmetric structure, as in the x and y directions of the RbCl QPD. The oscillation period is an increasing function of the cyclotron frequency and the polaron radius, whereas it is a decreasing function of the chemical potential of the two-dimensional electron gas and the zero point of the pseudoharmonic potential (PP).
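
    For a superposition of two stationary states, the probability density beats with period T0 = h/ΔE (equivalently 2πħ/ΔE); the sketch below simply back-computes the energy gap implied by the quoted period as an illustrative consistency check, not a reproduction of the variational calculation.

    ```python
    # Energy gap implied by a quantum-beat period T0 = h / (E1 - E0).
    h = 4.135667696e-15      # Planck constant in eV*s
    T0 = 7.933e-15           # quoted oscillation period in s

    # For psi(t) = (phi0*exp(-i*E0*t/hbar) + phi1*exp(-i*E1*t/hbar)) / sqrt(2),
    # |psi(t)|^2 oscillates in time with period h / (E1 - E0).
    delta_E = h / T0
    print(f"implied ground/first-excited energy gap: {delta_E:.3f} eV")
    ```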

  20. Constructor theory of probability

    PubMed Central

    2016-01-01

    Unitary quantum theory, having no Born Rule, is non-probabilistic. Hence the notorious problem of reconciling it with the unpredictability and appearance of stochasticity in quantum measurements. Generalizing and improving upon the so-called ‘decision-theoretic approach’, I shall recast that problem in the recently proposed constructor theory of information—where quantum theory is represented as one of a class of superinformation theories, which are local, non-probabilistic theories conforming to certain constructor-theoretic conditions. I prove that the unpredictability of measurement outcomes (to which constructor theory gives an exact meaning) necessarily arises in superinformation theories. Then I explain how the appearance of stochasticity in (finitely many) repeated measurements can arise under superinformation theories. And I establish sufficient conditions for a superinformation theory to inform decisions (made under it) as if it were probabilistic, via a Deutsch–Wallace-type argument—thus defining a class of decision-supporting superinformation theories. This broadens the domain of applicability of that argument to cover constructor-theory compliant theories. In addition, in this version some of the argument's assumptions, previously construed as merely decision-theoretic, follow from physical properties expressed by constructor-theoretic principles. PMID:27616914

  1. Clustering the Orion B giant molecular cloud based on its molecular emission

    PubMed Central

    Bron, Emeric; Daudon, Chloé; Pety, Jérôme; Levrier, François; Gerin, Maryvonne; Gratier, Pierre; Orkisz, Jan H.; Guzman, Viviana; Bardeau, Sébastien; Goicoechea, Javier R.; Liszt, Harvey; Öberg, Karin; Peretto, Nicolas; Sievers, Albrecht; Tremblin, Pascal

    2017-01-01

    Context Previous attempts at segmenting molecular line maps of molecular clouds have focused on using position-position-velocity data cubes of a single molecular line to separate the spatial components of the cloud. In contrast, wide field spectral imaging over a large spectral bandwidth in the (sub)mm domain now allows one to combine multiple molecular tracers to understand the different physical and chemical phases that constitute giant molecular clouds (GMCs). Aims We aim at using multiple tracers (sensitive to different physical processes and conditions) to segment a molecular cloud into physically/chemically similar regions (rather than spatially connected components), thus disentangling the different physical/chemical phases present in the cloud. Methods We use a machine learning clustering method, namely the Meanshift algorithm, to cluster pixels with similar molecular emission, ignoring spatial information. Clusters are defined around each maximum of the multidimensional Probability Density Function (PDF) of the line integrated intensities. Simple radiative transfer models were used to interpret the astrophysical information uncovered by the clustering analysis. Results A clustering analysis based only on the J = 1-0 lines of three isotopologues of CO proves sufficient to reveal distinct density/column density regimes (nH ~ 100 cm−3, ~ 500 cm−3, and > 1000 cm−3), closely related to the usual definitions of diffuse, translucent and high-column-density regions. Adding two UV-sensitive tracers, the J = 1-0 line of HCO+ and the N = 1-0 line of CN, allows us to distinguish two clearly distinct chemical regimes, characteristic of UV-illuminated and UV-shielded gas. The UV-illuminated regime shows overbright HCO+ and CN emission, which we relate to a photochemical enrichment effect. We also find a tail of high CN/HCO+ intensity ratio in UV-illuminated regions. Finer distinctions in density classes (nH ~ 7 × 10³ cm−3 and ~ 4 × 10⁴ cm−3) for the densest regions are also identified, likely related to the higher critical density of the CN and HCO+ (1-0) lines. These distinctions are only possible because the high-density regions are spatially resolved. Conclusions Molecules are versatile tracers of GMCs because their line intensities bear the signature of the physics and chemistry at play in the gas. The association of simultaneous multi-line, wide-field mapping and powerful machine learning methods such as the Meanshift clustering algorithm reveals how to decode the complex information available in these molecular tracers. PMID:29456256
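
    A minimal per-pixel Mean Shift clustering sketch in the spirit of the Methods paragraph is given below, using scikit-learn on synthetic stand-ins for line integrated intensities (the tracers, intensity values, and regimes are illustrative, not the Orion B data).

    ```python
    import numpy as np
    from sklearn.cluster import MeanShift, estimate_bandwidth

    rng = np.random.default_rng(0)
    # Synthetic stand-in for per-pixel integrated intensities of three lines
    # (e.g. 12CO, 13CO, C18O); three intensity regimes mimic diffuse/translucent/dense gas.
    diffuse     = rng.normal([5.0, 1.0, 0.1], 0.3, size=(500, 3))
    translucent = rng.normal([15.0, 5.0, 0.5], 0.8, size=(500, 3))
    dense       = rng.normal([30.0, 15.0, 3.0], 1.5, size=(500, 3))
    X = np.vstack([diffuse, translucent, dense])

    # Mean shift groups pixels around each local maximum of the multidimensional
    # PDF of line intensities; spatial position is deliberately ignored.
    bandwidth = estimate_bandwidth(X, quantile=0.2, random_state=0)
    labels = MeanShift(bandwidth=bandwidth).fit_predict(X)
    print("clusters found:", len(np.unique(labels)))
    ```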

  2. A Multi-modal, Discriminative and Spatially Invariant CNN for RGB-D Object Labeling.

    PubMed

    Asif, Umar; Bennamoun, Mohammed; Sohel, Ferdous

    2017-08-30

    While deep convolutional neural networks have shown remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multimodal data, and the spatial variability in images of objects remain major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multimodal RGB-D imagery. This is achieved through three postulates: 1) spatial invariance - this is achieved by combining a spatial transformer network with a deep convolutional neural network to learn features which are invariant to spatial translations, rotations, and scale changes, 2) high discriminative capability - this is achieved by introducing Fisher encoding within the CNN architecture to learn features which have small inter-class similarities and large intra-class compactness, and 3) multimodal hierarchical fusion - this is achieved through the regularization of semantic segmentation to a multi-modal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image- and pixel-levels), and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from Kinect) show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.

  3. Occurrence of organic wastewater compounds in drinking water, wastewater effluent, and the Big Sioux River in or near Sioux Falls, South Dakota, 2001-2004

    USGS Publications Warehouse

    Sando, Steven K.; Furlong, Edward T.; Gray, James L.; Meyer, Michael T.

    2006-01-01

    The U.S. Geological Survey (USGS) in cooperation with the city of Sioux Falls conducted several rounds of sampling to determine the occurrence of organic wastewater compounds (OWCs) in the city of Sioux Falls drinking water and waste-water effluent, and the Big Sioux River in or near Sioux Falls during August 2001 through May 2004. Water samples were collected during both base-flow and storm-runoff conditions. Water samples were collected at 8 sites, which included 4 sites upstream from the wastewater treatment plant (WWTP) discharge, 2 sites downstream from the WWTP discharge, 1 finished drinking-water site, and 1 WWTP effluent (WWE) site. A total of 125 different OWCs were analyzed for in this study using five different analytical methods. Analyses for OWCs were performed at USGS laboratories that are developing and/or refining small-concentration (less than 1 microgram per liter (ug/L)) analytical methods. The OWCs were classified into six compound classes: human pharmaceutical compounds (HPCs); human and veterinary antibiotic compounds (HVACs); major agricultural herbicides (MAHs); household, industrial, and minor agricultural compounds (HIACs); polyaromatic hydrocarbons (PAHs); and sterol compounds (SCs). Some of the compounds in the HPC, MAH, HIAC, and PAH classes are suspected of being endocrine-disrupting compounds (EDCs). Of the 125 different OWCs analyzed for in this study, 81 OWCs had one or more detections in environmental samples reported by the laboratories, and of those 81 OWCs, 63 had acceptable analytical method performance, were detected at concentrations greater than the study reporting levels, and were included in analyses and discussion related to occurrence of OWCs in drinking water, wastewater effluent, and the Big Sioux River. OWCs in all compound classes were detected in water samples from sampling sites in the Sioux Falls area. For the five sampling periods when samples were collected from the Sioux Falls finished drinking water, only one OWC was detected at a concentration greater than the study reporting level (metolachlor; 0.0040 ug/L). During base-flow conditions, Big Sioux River sites upstream from the WWTP discharge had OWC contributions that primarily were from nonpoint animal or crop agriculture sources or had OWC concentrations that were minimal. The influence of the WWTP discharge on OWCs at downstream river sites during base-flow conditions ranged from minimal influence to substantial influence depending on the sampling period. During runoff conditions, OWCs at sites upstream from the WWTP discharge probably were primarily contributed by nonpoint animal and/or crop agriculture sources and possibly by stormwater runoff from nearby roads. OWCs at sites downstream from the WWTP discharge probably were contributed by sources other than the WWTP effluent discharge, such as stormwater runoff from urban and/or agriculture areas and/or resuspension of OWCs adsorbed to sediment deposited in the Big Sioux River. OWC loads generally were substantially smaller for upstream sites than downstream sites during both base-flow and runoff conditions.

  4. A generalized electron energy probability function for inductively coupled plasmas under conditions of nonlocal electron kinetics

    NASA Astrophysics Data System (ADS)

    Mouchtouris, S.; Kokkoris, G.

    2018-01-01

    A generalized equation for the electron energy probability function (EEPF) of inductively coupled Ar plasmas is proposed under conditions of nonlocal electron kinetics and diffusive cooling. The proposed equation describes the local EEPF in a discharge, and the independent variable is the kinetic energy of the electrons. The EEPF consists of a bulk and a depleted tail part and incorporates the effect of the plasma potential, Vp, and of the pressure. Due to diffusive cooling, the break point of the EEPF is eVp. The pressure alters the shape of the bulk and the slope of the tail part. The parameters of the proposed EEPF are extracted by fitting to measured EEPFs (at one point in the reactor) at different pressures. By coupling the proposed EEPF with a hybrid plasma model, measurements in the Gaseous Electronics Conference reference reactor concerning (a) the electron density and temperature and the plasma potential, either spatially resolved or at different pressures (10-50 mTorr) and powers, and (b) the ion current density at the electrode, are well reproduced. The effect of the choice of the EEPF on the results is investigated by a comparison to an EEPF coming from the Boltzmann equation (local electron kinetics approach) and to a Maxwellian EEPF. The accuracy of the results and the fact that the proposed EEPF is predefined render its use a reliable alternative with a low computational cost compared to stochastic electron kinetic models at low pressure conditions, which can be extended to other gases and/or different electron heating mechanisms.

  5. A description of discrete internal representation schemes for visual pattern discrimination.

    PubMed

    Foster, D H

    1980-01-01

    A general description of a class of schemes for pattern vision is outlined in which the visual system is assumed to form a discrete internal representation of the stimulus. These representations are discrete in that they are considered to comprise finite combinations of "components" which are selected from a fixed and finite repertoire, and which designate certain simple pattern properties or features. In the proposed description it is supposed that the construction of an internal representation is a probabilistic process. A relationship is then formulated associating the probability density functions governing this construction and performance in visually discriminating patterns when differences in pattern shape are small. Some questions related to the application of this relationship to the experimental investigation of discrete internal representations are briefly discussed.

  6. Applications of quantum entropy to statistics

    NASA Astrophysics Data System (ADS)

    Silver, R. N.; Martz, H. F.

    This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity, and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles, such as conservation of information and smoothness, are proposed to set statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods.
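
    As a concrete anchor for the quantities named above, a minimal numerical sketch of the von Neumann entropy and the relative quantum entropy is given below. The density matrices are hypothetical examples, and the penalty-function role described in the abstract is only indicated in the comments.

    ```python
    import numpy as np
    from scipy.linalg import logm

    def von_neumann_entropy(rho):
        """S(rho) = -Tr(rho log rho), in nats; zero eigenvalues contribute 0."""
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return float(-np.sum(w * np.log(w)))

    def quantum_relative_entropy(rho, sigma):
        """S(rho || sigma) = Tr[rho (log rho - log sigma)]; its negative is the
        kind of information divergence (penalty) referred to in the abstract."""
        return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

    # Example: a slightly mixed qubit state versus the maximally mixed prior.
    rho = np.array([[0.8, 0.1], [0.1, 0.2]])
    sigma = np.eye(2) / 2
    print(von_neumann_entropy(rho), quantum_relative_entropy(rho, sigma))
    ```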

  7. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshifts directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently of the type of source, e.g., galaxies, quasars, or stars, and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows solving any kind of probabilistic regression problem based on imaging data, such as estimating metallicity or star formation rate in galaxies.
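
    The two performance criteria mentioned, CRPS and PIT, can be written down compactly for Gaussian predictive distributions. The sketch below uses the closed-form CRPS of a single Gaussian and the mixture CDF for the PIT; the helper names and numbers are assumptions for illustration, not code from DCMDN.

    ```python
    import numpy as np
    from scipy.stats import norm

    def crps_gaussian(y, mu, sigma):
        """Closed-form CRPS of a Gaussian predictive PDF N(mu, sigma^2)
        evaluated at the observed value y."""
        z = (y - mu) / sigma
        return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

    def pit_gaussian_mixture(y, weights, mus, sigmas):
        """Probability integral transform F(y) of a Gaussian mixture predictive
        PDF; a calibrated predictor yields PIT values uniform on [0, 1]."""
        weights, mus, sigmas = map(np.asarray, (weights, mus, sigmas))
        return float(np.sum(weights * norm.cdf((y - mus) / sigmas)))

    # Hypothetical two-component redshift PDF scored at a spectroscopic z of 0.42:
    print(crps_gaussian(0.42, 0.40, 0.03))
    print(pit_gaussian_mixture(0.42, [0.7, 0.3], [0.40, 0.55], [0.03, 0.05]))
    ```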

  8. Delirium superimposed on dementia: defining disease states and course from longitudinal measurements of a multivariate index using latent class analysis and hidden Markov chains.

    PubMed

    Ciampi, Antonio; Dyachenko, Alina; Cole, Martin; McCusker, Jane

    2011-12-01

    The study of mental disorders in the elderly presents substantial challenges due to population heterogeneity, coexistence of different mental disorders, and diagnostic uncertainty. While reliable tools have been developed to collect relevant data, new approaches to study design and analysis are needed. We focus on a new analytic approach. Our framework is based on latent class analysis and hidden Markov chains. From repeated measurements of a multivariate disease index, we extract the notion of the underlying state of a patient at a time point. The course of the disorder is then a sequence of transitions among states. States and transitions are not observable; however, the probability of being in a state at a time point, and the transition probabilities from one state to another over time, can be estimated. Data from 444 patients with and without diagnosis of delirium and dementia were available from a previous study. The Delirium Index was measured at diagnosis, and at 2 and 6 months from diagnosis. Four latent classes were identified: fairly healthy, moderately ill, clearly sick, and very sick. Dementia and delirium could not be separated on the basis of these data alone. Indeed, as the probability of delirium increased, so did the probability of decline of mental functions. The eight most probable courses were identified, including good and poor stable courses, and courses exhibiting various patterns of improvement. Latent class analysis and hidden Markov chains offer a promising tool for studying mental disorders in the elderly; the approach may show its full potential as new data become available.
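
    A hidden Markov chain over latent classes boils down to estimating state probabilities and transition probabilities from repeated measurements. The sketch below shows the standard forward filter on a toy discretized index with four hypothetical states; the matrices and observation coding are invented for illustration and are not the study's fitted model.

    ```python
    import numpy as np

    def forward_state_probabilities(pi, A, B, observations):
        """Forward-filtered state probabilities P(state_t | obs_1..t) for a hidden
        Markov chain with initial distribution pi, transition matrix A, and
        emission matrix B[state, symbol]."""
        alpha = pi * B[:, observations[0]]
        alpha /= alpha.sum()
        filtered = [alpha]
        for obs in observations[1:]:
            alpha = (alpha @ A) * B[:, obs]    # predict one step, then update
            alpha /= alpha.sum()
            filtered.append(alpha)
        return np.array(filtered)

    # Hypothetical 4 latent classes and a Delirium Index discretized to 3 levels:
    pi = np.array([0.4, 0.3, 0.2, 0.1])
    A = np.array([[0.80, 0.10, 0.07, 0.03],
                  [0.20, 0.60, 0.15, 0.05],
                  [0.05, 0.20, 0.60, 0.15],
                  [0.02, 0.08, 0.20, 0.70]])
    B = np.array([[0.70, 0.25, 0.05],
                  [0.40, 0.40, 0.20],
                  [0.10, 0.50, 0.40],
                  [0.05, 0.25, 0.70]])
    print(forward_state_probabilities(pi, A, B, observations=[0, 1, 2]))
    ```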

  9. Skin Texture Recognition using Medical Diagnosis

    NASA Astrophysics Data System (ADS)

    Munshi, Anindita; Parekh, Ranjan

    2010-10-01

    This paper proposes an automated system for recognizing disease conditions of human skin in the context of medical diagnosis. The disease conditions are recognized by analyzing skin texture images using a set of normalized, symmetrical Grey Level Co-occurrence Matrices (GLCMs). The GLCM defines the probability of grey level i occurring in the neighborhood of another grey level j at a distance d in direction θ. Directional GLCMs are computed along four directions: horizontal (θ = 0°), vertical (θ = 90°), right diagonal (θ = 45°), and left diagonal (θ = 135°), and the features Contrast, Homogeneity, and Energy computed from each are averaged to provide an estimate of the texture class. The system is tested using 225 images pertaining to three dermatological skin conditions, viz. dermatitis, eczema, and urticaria. An accuracy of 94.81% is obtained using a multilayer perceptron (MLP) as the classifier.
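
    A minimal sketch of the GLCM feature computation described above, using scikit-image (the function names assume a recent release; older versions spell them greycomatrix/greycoprops), with a random patch standing in for a real skin-texture image:

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_texture_features(image_gray_uint8):
        """Average Contrast, Homogeneity, and Energy over GLCMs computed along the
        four directions used in the paper (0°, 45°, 90°, 135°) at distance d = 1."""
        angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
        glcm = graycomatrix(image_gray_uint8, distances=[1], angles=angles,
                            levels=256, symmetric=True, normed=True)
        return {prop: float(graycoprops(glcm, prop).mean())
                for prop in ("contrast", "homogeneity", "energy")}

    # Hypothetical usage on a grayscale patch (a real skin image would be loaded here):
    patch = (np.random.rand(64, 64) * 255).astype(np.uint8)
    print(glcm_texture_features(patch))
    ```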

  10. Distribution of chirality in the quantum walk: Markov process and entanglement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romanelli, Alejandro

    The asymptotic behavior of the quantum walk on the line is investigated, focusing on the probability distribution of chirality independently of position. It is shown analytically that this distribution has a long-time limit that is stationary and depends on the initial conditions. This result is unexpected in the context of the unitary evolution of the quantum walk as it is usually linked to a Markovian process. The asymptotic value of the entanglement between the coin and the position is determined by the chirality distribution. For given asymptotic values of both the entanglement and the chirality distribution, it is possible to find the corresponding initial conditions within a particular class of spatially extended Gaussian distributions.
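
    The chirality distribution in question can be obtained directly from a simulation of the discrete-time Hadamard walk on the line. The short sketch below is a generic textbook implementation with an arbitrary symmetric initial coin state, localized at the origin rather than spatially extended as in the paper; it sums the left/right probabilities over position.

    ```python
    import numpy as np

    def chirality_distribution(steps, coin_state=(1 / np.sqrt(2), 1j / np.sqrt(2))):
        """Discrete-time Hadamard walk on the line, started at the origin with the
        given coin (chirality) state; returns P(Left), P(Right) summed over
        position after the requested number of steps."""
        n = 2 * steps + 1                      # positions -steps..steps
        psi = np.zeros((n, 2), dtype=complex)  # psi[x, 0] = left, psi[x, 1] = right
        psi[steps, :] = coin_state
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        for _ in range(steps):
            psi = psi @ H.T                    # apply the Hadamard coin at every site
            new = np.zeros_like(psi)
            new[:-1, 0] = psi[1:, 0]           # left component shifts one site left
            new[1:, 1] = psi[:-1, 1]           # right component shifts one site right
            psi = new
        prob = np.abs(psi) ** 2
        return prob[:, 0].sum(), prob[:, 1].sum()

    print(chirality_distribution(200))         # settles toward its long-time limit
    ```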

  11. A translational velocity command system for VTOL low speed flight

    NASA Technical Reports Server (NTRS)

    Merrick, V. K.

    1982-01-01

    A translational velocity flight controller, suitable for very low speed maneuvering, is described and its application to a large class of VTOL aircraft from jet lift to propeller driven types is analyzed. Estimates for the more critical lateral axis lead to the conclusion that the controller would provide a jet lift (high disk loading) VTOL aircraft with satisfactory "hands off" station keeping in operational conditions more stringent than any specified in current or projected requirements. It also seems likely that ducted fan or propeller driven (low disk loading) VTOL aircraft would have acceptable hovering handling qualities even in high turbulence, although in these conditions pilot intervention to maintain satisfactory station keeping would probably be required for landing in restricted areas.

  12. Optimal sequential measurements for bipartite state discrimination

    NASA Astrophysics Data System (ADS)

    Croke, Sarah; Barnett, Stephen M.; Weir, Graeme

    2017-05-01

    State discrimination is a useful test problem with which to clarify the power and limitations of different classes of measurement. We consider the problem of discriminating between given states of a bipartite quantum system via sequential measurement of the subsystems, with classical feed-forward of measurement results. Our aim is to understand when sequential measurements, which are relatively easy to implement experimentally, perform as well, or almost as well, as optimal joint measurements, which are in general more technologically challenging. We construct conditions that the optimal sequential measurement must satisfy, analogous to the well-known Helstrom conditions for minimum error discrimination in the unrestricted case. We give several examples and compare the optimal probability of correctly identifying the state via global versus sequential measurement strategies.
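
    The joint-measurement benchmark against which sequential strategies are compared is the Helstrom minimum-error bound. A small numerical sketch, with hypothetical states and priors, is given below.

    ```python
    import numpy as np

    def helstrom_success_probability(rho0, rho1, p0=0.5):
        """Optimal probability of correctly identifying which of two density
        matrices was prepared, with priors p0 and 1 - p0:
        P_opt = 1/2 * (1 + || p0*rho0 - (1-p0)*rho1 ||_1)."""
        gamma = p0 * rho0 - (1 - p0) * rho1
        trace_norm = np.sum(np.abs(np.linalg.eigvalsh(gamma)))
        return 0.5 * (1 + trace_norm)

    # Two pure qubit states with overlap cos(theta); sequential strategies are
    # benchmarked against this joint-measurement optimum.
    theta = np.pi / 8
    v0 = np.array([1.0, 0.0])
    v1 = np.array([np.cos(theta), np.sin(theta)])
    print(helstrom_success_probability(np.outer(v0, v0), np.outer(v1, v1)))
    ```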

  13. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    ERIC Educational Resources Information Center

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.

  14. Coincidence probability as a measure of the average phase-space density at freeze-out

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Zalewski, K.

    2006-02-01

    It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.

  15. Non-linear relationship of cell hit and transformation probabilities in a low dose of inhaled radon progenies.

    PubMed

    Balásházy, Imre; Farkas, Arpád; Madas, Balázs Gergely; Hofmann, Werner

    2009-06-01

    Cellular hit probabilities of alpha particles emitted by inhaled radon progenies in sensitive bronchial epithelial cell nuclei were simulated at low exposure levels to obtain useful data for the rejection or support of the linear-non-threshold (LNT) hypothesis. In this study, local distributions of deposited inhaled radon progenies in airway bifurcation models were computed at exposure conditions characteristic of homes and uranium mines. Then, maximum local deposition enhancement factors at bronchial airway bifurcations, expressed as the ratio of local to average deposition densities, were determined to characterise the inhomogeneity of deposition and to elucidate their effect on resulting hit probabilities. The results obtained suggest that in the vicinity of the carinal regions of the central airways the probability of multiple hits can be quite high, even at low average doses. Assuming a uniform distribution of activity there are practically no multiple hits and the hit probability as a function of dose exhibits a linear shape in the low dose range. The results are quite the opposite in the case of hot spots revealed by realistic deposition calculations, where practically all cells receive multiple hits and the hit probability as a function of dose is non-linear in the average dose range of 10-100 mGy.
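
    The linear-versus-non-linear behaviour described above follows from simple Poisson statistics of hit numbers. A hedged illustration is given below; the mean hit numbers are invented, not values from the paper.

    ```python
    from scipy.stats import poisson

    def multiple_hit_probability(mean_hits):
        """P(a cell nucleus receives 2 or more alpha-particle hits) when the hit
        number is Poisson with the given mean. Large local means (hot spots) make
        this strongly non-linear in dose, while small uniform means stay ~linear."""
        return float(poisson.sf(1, mean_hits))   # P(X >= 2) = P(X > 1)

    # Invented mean hit numbers for the same average dose, uniform vs. hot spot:
    for label, mu in [("uniform deposition", 0.05), ("carinal hot spot", 5.0)]:
        print(label, multiple_hit_probability(mu))
    ```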

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carson, K.S.

    The presence of overpopulation or unsustainable population growth may place pressure on the food and water supplies of countries in sensitive areas of the world. Severe air or water pollution may place additional pressure on these resources. These pressures may generate both internal and international conflict in these areas as nations struggle to provide for their citizens. Such conflicts may result in United States intervention, either unilaterally or through the United Nations. Therefore, it is in the interests of the United States to identify potential areas of conflict in order to properly train and allocate forces. The purpose of this research is to forecast the probability of conflict in a nation as a function of its environmental conditions. Probit, logit, and ordered probit models are employed to forecast the probability of a given level of conflict. Data from 95 countries are used to estimate the models. Probability forecasts are generated for these 95 nations. Out-of-sample forecasts are generated for an additional 22 nations. These probabilities are then used to rank nations from highest probability of conflict to lowest. The results indicate that the dependence of a nation's economy on agriculture, the rate of deforestation, and the population density are important variables in forecasting the probability and level of conflict. These results indicate that environmental variables do play a role in generating or exacerbating conflict. It is unclear whether the United States military has any direct role in mitigating the environmental conditions that may generate conflict. A more important role for the military is to aid in data gathering to generate better forecasts so that troops are adequately prepared when conflicts arise.
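
    A binary-outcome version of the kind of model described above can be sketched with statsmodels. The covariates, coefficients, and data below are entirely synthetic stand-ins for the study's 95-country data set.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical country-level data: agriculture share of the economy,
    # deforestation rate, population density, and a binary conflict indicator
    # (the study also uses probit and ordered-probit specifications for levels).
    rng = np.random.default_rng(0)
    n = 95
    X = np.column_stack([rng.uniform(0, 0.6, n),      # agriculture share
                         rng.uniform(0, 0.03, n),     # annual deforestation rate
                         rng.uniform(5, 500, n)])     # population density
    latent = -2.0 + 4.0 * X[:, 0] + 60.0 * X[:, 1] + 0.004 * X[:, 2]
    y = (latent + rng.logistic(0, 1, n) > 0).astype(int)

    logit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    forecast = logit.predict(sm.add_constant(X))      # in-sample conflict probabilities
    print(np.argsort(forecast)[::-1][:5])             # five highest-risk (synthetic) countries
    ```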

  17. Active Learning? Not with My Syllabus!

    ERIC Educational Resources Information Center

    Ernst, Michael D.

    2012-01-01

    We describe an approach to teaching probability that minimizes the amount of class time spent on the topic while also providing a meaningful (dice-rolling) activity to get students engaged. The activity, which has a surprising outcome, illustrates the basic ideas of informal probability and how probability is used in statistical inference.…

  18. Hypothesis testing and earthquake prediction.

    PubMed

    Jackson, D D

    1996-04-30

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
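
    Tests (i) and (iii) have simple Poisson implementations when the forecast is specified as expected counts per space-time-magnitude bin. The following sketch, with hypothetical counts and rates, shows a number test and a log-likelihood ratio against a null hypothesis.

    ```python
    from scipy.stats import poisson

    def number_test(n_observed, n_predicted):
        """Two one-sided Poisson probabilities for the number test: the chance of
        seeing at most / at least the observed count if the forecast rate holds."""
        return poisson.cdf(n_observed, n_predicted), poisson.sf(n_observed - 1, n_predicted)

    def log_likelihood_ratio(counts_obs, rates_forecast, rates_null):
        """Poisson log-likelihood ratio of a forecast against a null hypothesis,
        summed over space-time-magnitude bins."""
        return sum(poisson.logpmf(n, lam_f) - poisson.logpmf(n, lam_0)
                   for n, lam_f, lam_0 in zip(counts_obs, rates_forecast, rates_null))

    # Hypothetical 5-bin forecast compared against a uniform null:
    print(number_test(n_observed=7, n_predicted=4.2))
    print(log_likelihood_ratio([3, 0, 2, 1, 1], [2.0, 0.5, 1.5, 0.8, 0.6], [1.1] * 5))
    ```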

  19. Hypothesis testing and earthquake prediction.

    PubMed Central

    Jackson, D D

    1996-01-01

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions. PMID:11607663

  20. Background Noise Characteristics in the Western Part of Romania

    NASA Astrophysics Data System (ADS)

    Grecu, B.; Neagoe, C.; Tataru, D.; Stuart, G.

    2012-04-01

    The seismological database of the western part of Romania has grown significantly in recent years: 33 broadband seismic stations provided by SEIS-UK (10 CMG 40T's - 30 s, 9 CMG 3T's - 120 s, 14 CMG 6T's - 30 s) were deployed in the western part of the country in July 2009 to operate autonomously for two years. These stations were installed within a joint project (South Carpathian Project - SCP) between the University of Leeds, UK, and the National Institute for Earth Physics (NIEP), Romania, aimed at determining the lithospheric structure and geodynamical evolution of the South Carpathian Orogen. The characteristics of the background seismic noise recorded at the SCP broadband seismic network have been studied in order to identify variations in background seismic noise as a function of time of day, season, and particular conditions at the stations. Power spectral densities (PSDs) and their corresponding probability density functions (PDFs) are used to characterize the background seismic noise. At high frequencies (> 1 Hz), seismic noise appears to be of cultural origin, since notable differences between daytime and nighttime noise levels are observed at most of the stations. Seasonal variations are seen in the microseism band: noise levels increase during the winter and autumn months and decrease in summer and spring, while the double-frequency peak shifts from shorter periods in summer to longer periods in winter. The analysis of the probability density functions for stations located in different geologic conditions shows that the noise level is higher for stations sited on softer formations than for those sited on hard rock. Finally, polarization analysis indicates that the main sources of secondary microseisms are in the Mediterranean Sea and the Atlantic Ocean.
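
    The PSD/PDF characterization described above is the kind of analysis implemented in ObsPy's PPSD class. A minimal sketch follows, with hypothetical file names for one day of HHZ data and its response metadata; any day-long broadband recording plus its station metadata would do.

    ```python
    from obspy import read, read_inventory
    from obspy.signal import PPSD

    # Hypothetical input files: a day of miniSEED data and the station response.
    st = read("station_HHZ_day.mseed")
    inv = read_inventory("station_response.xml")

    tr = st.select(channel="HHZ")[0]
    ppsd = PPSD(tr.stats, metadata=inv)   # accumulates power spectral densities
    ppsd.add(st)                          # PSD segments from the whole stream
    ppsd.plot()                           # probability density function of the PSDs
    ```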
