Sample records for normal probability density

  1. The Influence of Part-Word Phonotactic Probability/Neighborhood Density on Word Learning by Preschool Children Varying in Expressive Vocabulary

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Hoover, Jill R.

    2011-01-01

    The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (ages 2;11-6;0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…

  2. Randomized path optimization for the mitigated counter detection of UAVs

    DTIC Science & Technology

    2017-06-01

    …using Bayesian filtering. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal…algorithm's success. A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location. We…
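
    As an aside, the density-comparison step this record describes reduces, for moment-matched Gaussians, to a closed-form KL divergence. The sketch below (Python, with made-up sample values; nothing here comes from the DTIC report itself) illustrates scoring a filter's terminal-location samples against a reference normal.

```python
import numpy as np

def kl_normal(mu0, sig0, mu1, sig1):
    """Closed-form KL(N(mu0, sig0^2) || N(mu1, sig1^2))."""
    return np.log(sig1 / sig0) + (sig0**2 + (mu0 - mu1)**2) / (2.0 * sig1**2) - 0.5

rng = np.random.default_rng(0)
# Hypothetical filter output: samples of the predicted terminal location.
samples = rng.normal(loc=2.3, scale=1.8, size=5000)

# Moment-match a normal to the filter output and score it against a
# reference normal centred on the true terminal point.
mu_hat, sig_hat = samples.mean(), samples.std(ddof=1)
print(kl_normal(mu_hat, sig_hat, mu1=2.0, sig1=1.5))
```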

  3. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset, such as a process and/or apparatus, by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution correlative to normal asset operation and then use the fitted probability density function in a dynamic statistical hypothesis test to provide improved asset surveillance.

  4. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset, such as a process and/or apparatus, by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution correlative to normal asset operation and then use the fitted probability density function in a dynamic statistical hypothesis test to provide improved asset surveillance.

  5. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset, such as a process and/or apparatus, by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution correlative to normal asset operation and then use the fitted probability density function in a dynamic statistical hypothesis test to provide improved asset surveillance.

  6. Statistical hypothesis tests of some micrometeorological observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SethuRaman, S.; Tichler, J.

    Chi-square goodness-of-fit is used to test the hypothesis that the medium scale of turbulence in the atmospheric surface layer is normally distributed. Coefficients of skewness and excess are computed from the data. If the data are not normal, these coefficients are used in Edgeworth's asymptotic expansion of the Gram-Charlier series to determine an alternate probability density function. The observed data are then compared with the modified probability densities and the new chi-square values computed. Seventy percent of the data analyzed was either normal or approximately normal. The coefficient of skewness g_1 has a good correlation with the chi-square values. Events with |g_1| < 0.21 were normal to begin with, and those with 0.21…
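
    A minimal sketch of the record's first step, assuming a synthetic gamma-distributed stand-in for the turbulence data: compute the coefficients of skewness and excess, then run a chi-square goodness-of-fit test against a moment-matched normal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
w = rng.gamma(shape=8.0, scale=0.25, size=2000)   # stand-in turbulence record

g1 = stats.skew(w)        # coefficient of skewness
g2 = stats.kurtosis(w)    # coefficient of excess (Fisher convention)

# Chi-square goodness of fit against a moment-matched normal distribution.
edges = np.linspace(w.min(), w.max(), 21)
observed, _ = np.histogram(w, bins=edges)
cdf = stats.norm(w.mean(), w.std(ddof=1)).cdf(edges)
expected = len(w) * np.diff(cdf)

keep = expected > 5                       # usual validity rule for the test
expected = expected[keep] * observed[keep].sum() / expected[keep].sum()
# ddof=2 because the mean and standard deviation were fitted from the data.
chi2, p = stats.chisquare(observed[keep], expected, ddof=2)
print(f"g1={g1:.3f}  g2={g2:.3f}  chi2={chi2:.1f}  p={p:.4f}")
```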

  7. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
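
    The quantity minimized here, the one-dimensional probability of misclassification of the transformed densities, can be evaluated numerically for any candidate direction. The sketch below is an illustrative implementation under assumed class parameters, seeded with a Fisher-style direction rather than the report's actual optimization procedure.

```python
import numpy as np
from scipy.integrate import trapezoid

def projected_error(w, means, covs, priors):
    """1-D Bayes misclassification probability after projecting the
    multivariate normal class densities onto the direction w."""
    mus = [float(w @ m) for m in means]
    sigs = [float(np.sqrt(w @ C @ w)) for C in covs]
    lo = min(m - 6 * s for m, s in zip(mus, sigs))
    hi = max(m + 6 * s for m, s in zip(mus, sigs))
    x = np.linspace(lo, hi, 4001)
    dens = np.array([p * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
                     for p, m, s in zip(priors, mus, sigs)])
    # Bayes rule keeps the largest weighted density; the rest is error mass.
    return trapezoid(dens.sum(axis=0) - dens.max(axis=0), x)

means = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]   # assumed class means
covs = [np.eye(2), np.diag([1.5, 0.5])]                # assumed class covariances
w = np.linalg.solve(covs[0] + covs[1], means[1] - means[0])  # Fisher-style direction
print(projected_error(w / np.linalg.norm(w), means, covs, priors=[0.5, 0.5]))
```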

  8. Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Leahy, D. A.

    2017-03-01

    Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields a SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5×10^51 erg and a 1σ dispersion by a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.
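
    Fitting a log-normal and reading off the most-probable (modal) value, as done for the explosion energies, is a short exercise with scipy; the sample below is synthetic, not the LMC data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic explosion energies in units of 1e51 erg.
E = rng.lognormal(mean=np.log(0.5), sigma=np.log(3.0), size=50)

shape, loc, scale = stats.lognorm.fit(E, floc=0)   # floc=0: pure log-normal
mode = scale * np.exp(-shape**2)                   # most-probable value of the fit
print(f"mode ~ {mode:.2f} x 1e51 erg, 1-sigma dispersion factor ~ {np.exp(shape):.1f}")
```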

  9. Postfragmentation density function for bacterial aggregates in laminar flow

    PubMed Central

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John

    2014-01-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205

  10. [Establishment of the mathematic model of total quantum statistical moment standard similarity for application to medical theoretical research].

    PubMed

    He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian

    2013-09-01

    The paper aims to elucidate and establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate its application to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, then validated and illustrated using the pharmacokinetics of three ingredients in Buyanghuanwu decoction and three data analytical methods for them, and using analysis of chromatographic fingerprints for extracts obtained by dissolving the Buyanghuanwu-decoction extract in solvents of different solubility parameters. The established model consists of the following main parameters: (1) the total quantum statistical moment similarity ST, the overlapped area between two normal distribution probability density curves obtained by converting the two TQSM parameter sets; (2) the total variability DT, a confidence limit of the standard normal cumulative probability equal to the absolute difference between the two normal cumulative probabilities integrated to the intersection of their curves; (3) the total variable probability 1-Ss, the standard normal distribution probability within the interval DT; (4) the total variable probability (1-beta)alpha; and (5) the stable confident probability beta(1-alpha), the correct probability for drawing positive and negative conclusions under confidence coefficient alpha. With the model, we analyzed the TQSM similarities of the pharmacokinetics of three ingredients in Buyanghuanwu decoction and of the three data analytical methods for them, which were in the range 0.3852-0.9875, illuminating their differing pharmacokinetic behaviors; the TQSMS similarities (ST) of chromatographic fingerprints for the various solvent extracts were in the range 0.6842-0.9992, showing that different constituents were obtained with the various solvent extracts. The TQSMSS can characterize sample similarity, by which we can quantify, with a test of power, the correct probability of drawing positive and negative conclusions under confidence coefficient alpha whether or not the samples come from the same population, enabling analysis at both macroscopic and microcosmic levels, as an important similarity-analysis method for medical theoretical research.
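
    The ST parameter described above, the overlapped area under two normal probability density curves, can be computed by direct numerical integration. A minimal sketch with assumed means and standard deviations:

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

def overlap_similarity(mu1, sig1, mu2, sig2, n=20001):
    """Overlapped area under two normal pdfs: 1.0 means identical curves."""
    lo = min(mu1 - 6 * sig1, mu2 - 6 * sig2)
    hi = max(mu1 + 6 * sig1, mu2 + 6 * sig2)
    x = np.linspace(lo, hi, n)
    f = np.minimum(stats.norm.pdf(x, mu1, sig1), stats.norm.pdf(x, mu2, sig2))
    return trapezoid(f, x)

print(overlap_similarity(1.0, 0.3, 1.2, 0.4))   # assumed TQSM parameters
```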

  11. Postfragmentation density function for bacterial aggregates in laminar flow.

    PubMed

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M

    2011-04-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society

  12. Quantum Jeffreys prior for displaced squeezed thermal states

    NASA Astrophysics Data System (ADS)

    Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin

    1999-09-01

    It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.

  13. An evaluation of procedures to estimate monthly precipitation probabilities

    NASA Astrophysics Data System (ADS)

    Legates, David R.

    1991-01-01

    Many frequency distributions have been used to evaluate monthly precipitation probabilities. Eight of these distributions (including Pearson type III, extreme value, and transform-normal probability density functions) are comparatively examined to determine their ability to represent accurately variations in monthly precipitation totals for global hydroclimatological analyses. Results indicate that a modified version of the Box-Cox transform-normal distribution more adequately describes the 'true' precipitation distribution than does any of the other methods. This assessment was made using a cross-validation procedure for a global network of 253 stations for which at least 100 years of monthly precipitation totals were available.
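
    A short sketch of the winning approach, assuming synthetic gamma-distributed monthly totals: estimate the Box-Cox exponent by maximum likelihood, then evaluate precipitation probabilities in the transformed (approximately normal) space.

```python
import numpy as np
from scipy import stats, special

rng = np.random.default_rng(3)
precip = rng.gamma(shape=2.0, scale=40.0, size=1200)   # stand-in monthly totals, mm

transformed, lam = stats.boxcox(precip)    # ML estimate of the Box-Cox exponent
mu, sigma = transformed.mean(), transformed.std(ddof=1)

# Probability that a month stays below 100 mm, evaluated in transformed space.
y = special.boxcox(100.0, lam)
print(f"lambda={lam:.3f}  P(precip < 100 mm)={stats.norm(mu, sigma).cdf(y):.3f}")
```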

  14. An adaptive density-based routing protocol for flying Ad Hoc networks

    NASA Astrophysics Data System (ADS)

    Zheng, Xueli; Qi, Qian; Wang, Qingwen; Li, Yongqiang

    2017-10-01

    An Adaptive Density-based Routing Protocol (ADRP) for Flying Ad Hoc Networks (FANETs) is proposed in this paper. The main objective is to calculate forwarding probability adaptively in order to increase the efficiency of forwarding in FANETs. ADRP dynamically fine-tunes the rebroadcasting probability of a node for routing request packets according to its number of neighbour nodes: retransmission by nodes with few neighbours is privileged, as in the sketch below. We describe the protocol, implement it, and evaluate its performance using the NS-2 network simulator. Simulation results reveal that ADRP achieves better performance than AODV in terms of packet delivery fraction, average end-to-end delay, normalized routing load, normalized MAC load, and throughput.
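
    ADRP's exact tuning rule is not given in the abstract; the sketch below is a hypothetical density-adaptive rebroadcast probability that captures the stated idea (nodes with few neighbours forward route requests with high probability, dense nodes with low probability).

```python
def rebroadcast_probability(n_neighbors: int,
                            p_min: float = 0.2,
                            p_max: float = 1.0,
                            n_ref: int = 10) -> float:
    """Illustrative density-adaptive rebroadcast probability. The functional
    form and the constants are assumptions, not ADRP's actual rule."""
    if n_neighbors <= 0:
        return p_max
    p = n_ref / n_neighbors          # fewer neighbours -> higher probability
    return max(p_min, min(p_max, p))

for n in (1, 5, 10, 20, 40):
    print(n, rebroadcast_probability(n))
```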

  15. Probability density cloud as a geometrical tool to describe statistics of scattered light.

    PubMed

    Yaitskova, Natalia

    2017-04-01

    First-order statistics of scattered light are described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in closed form through these parameters. An example of the exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.

  16. Bayesian anomaly detection in monitoring data applying relevance vector machine

    NASA Astrophysics Data System (ADS)

    Saito, Tomoo

    2011-04-01

    A method for automatically classifying monitoring data into two categories, normal and anomalous, is developed in order to remove anomalous data from the enormous amount of monitoring data, applying the relevance vector machine (RVM) to a probabilistic discriminative model with basis functions and weight parameters whose posterior PDF (probability density function), conditional on the learning data set, is given by Bayes' theorem. The proposed framework is applied to actual monitoring data sets containing some anomalous data collected at two buildings in Tokyo, Japan, which shows that the trained models discriminate anomalous data from normal data very clearly, giving high probabilities of being normal to normal data and low probabilities of being normal to anomalous data.

  17. On the asymptotic improvement of supervised learning by utilizing additional unlabeled samples - Normal mixture density case

    NASA Technical Reports Server (NTRS)

    Shahshahani, Behzad M.; Landgrebe, David A.

    1992-01-01

    The effect of additional unlabeled samples in improving the supervised learning process is studied in this paper. Three learning processes, supervised, unsupervised, and combined supervised-unsupervised, are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that under a normal mixture density assumption for the probability density function of the feature space, the combined supervised-unsupervised learning is always superior to the supervised learning in achieving better estimates. Experimental results are provided to verify the theoretical concepts.

  18. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma-0, obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value obtain it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma-0 are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  19. Bayes classification of terrain cover using normalized polarimetric data

    NASA Technical Reports Server (NTRS)

    Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.

    1988-01-01

    The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.

  20. HIGH STAR FORMATION RATES IN TURBULENT ATOMIC-DOMINATED GAS IN THE INTERACTING GALAXIES IC 2163 AND NGC 2207

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elmegreen, Bruce G.; Kaufman, Michele; Bournaud, Frédéric

    CO observations of the interacting galaxies IC 2163 and NGC 2207 are combined with HI, Hα, and 24 μm observations to study the star formation rate (SFR) surface density as a function of the gas surface density. More than half of the high-SFR regions are HI dominated. When compared to other galaxies, these HI-dominated regions have excess SFRs relative to their molecular gas surface densities but normal SFRs relative to their total gas surface densities. The HI-dominated regions are mostly located in the outer part of NGC 2207, where the HI velocity dispersion is high, 40-50 km s^-1. We suggest that the star-forming clouds in these regions have envelopes at lower densities than normal, making them predominantly atomic, and cores at higher densities than normal because of the high turbulent Mach numbers. This is consistent with theoretical predictions of a flattening in the density probability distribution function for compressive, high Mach number turbulence.

  1. Study on probability distribution of prices in electricity market: A case study of Zhejiang Province, China

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Chen, B.; Han, Z. X.; Zhang, F. Q.

    2009-05-01

    The study of the probability density function and distribution function of electricity prices helps power suppliers and purchasers make accurate operating estimates, and helps the regulator monitor periods that deviate from the normal distribution. Based on the assumption of normally distributed load and the non-linear characteristic of the aggregate supply curve, this paper derives the distribution of electricity prices as a function of the random load variable. The conclusion has been validated with electricity price data from the Zhejiang market. The results show that electricity prices obey a normal distribution approximately only when the supply-demand relationship is loose, whereas the prices deviate from the normal distribution and present a strong right-skewness characteristic otherwise. Finally, real electricity markets also display a narrow-peak characteristic when undersupply occurs.
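
    The mechanism is easy to reproduce by Monte Carlo: push a normally distributed load through a convex (nonlinear) supply curve and the price distribution comes out right-skewed. The curve and numbers below are assumptions for illustration, not the Zhejiang model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
load = rng.normal(loc=30.0, scale=3.0, size=200_000)   # assumed normal load, GW

# Assumed convex aggregate supply curve: prices rise slowly at low load and
# steeply as load approaches capacity, which is all the argument needs.
capacity = 42.0
price = 20.0 + 8.0 / np.clip(1.0 - load / capacity, 1e-3, None)

print("skewness of load :", stats.skew(load))    # ~0 by construction
print("skewness of price:", stats.skew(price))   # strongly right-skewed
```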

  2. Derivation of an eigenvalue probability density function relating to the Poincaré disk

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Krishnapur, Manjunath

    2009-09-01

    A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n ≥ N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has recently been applied to determine the eigenvalue probability density function for random matrices A⁻¹B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.

  3. Scintillation statistics measured in an earth-space-earth retroreflector link

    NASA Technical Reports Server (NTRS)

    Bufton, J. L.

    1977-01-01

    Scintillation was measured in a vertical path from a ground-based laser transmitter to the Geos 3 satellite and back to a ground-based receiver telescope, and the experimental results were compared with analytical results presented in a companion paper (Bufton, 1977). The normalized variance, the probability density function, and the power spectral density of scintillation were all measured. Moments of the satellite scintillation data in terms of normalized variance were lower than expected. The power spectrum analysis suggests that there were scintillation components at frequencies higher than the 250 Hz bandwidth available in the experiment.

  4. Dynamical Epidemic Suppression Using Stochastic Prediction and Control

    DTIC Science & Technology

    2004-10-28

    …initial probability density function (PDF), p: D ⊂ R² → R, is defined by the stochastic Frobenius-Perron…For deterministic systems, normal methods of…induced chaos. To analyze the qualitative change, we apply the technique of the stochastic Frobenius-Perron operator [L. Billings et al., Phys. Rev. Lett…transition matrix describing the probability of transport from one region of phase space to another, which approximates the stochastic Frobenius-Perron…

  5. The quotient of normal random variables and application to asset price fat tails

    NASA Astrophysics Data System (ADS)

    Caginalp, Carey; Caginalp, Gunduz

    2018-06-01

    The quotient of random variables with normal distributions is examined and proven to have power law decay, with density f(x) ≃ f₀x⁻², with the coefficient depending on the means and variances of the numerator and denominator and on their correlation. We also obtain the conditional probability densities for each of the four quadrants given by the signs of the numerator and denominator for arbitrary correlation ρ ∈ [-1, 1). For ρ = -1 we obtain a particularly simple closed-form solution for all x ∈ R. The results are applied to a basic issue in economics and finance, namely the density of relative price changes. Classical finance stipulates a normal distribution of relative price changes, though empirical studies suggest a power law at the tail end. By considering the supply and demand in a basic price change model, we prove that the relative price change has a density that decays with an x⁻² power law. Various parameter limits are established.
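
    The x⁻² tail is simple to check by simulation: if f(x) ≃ f₀x⁻², then P(|X| > t) scales like 1/t, so t·P(|X| > t) should flatten for large t. A sketch with assumed means, variances, and correlation:

```python
import numpy as np

rng = np.random.default_rng(5)
rho = 0.3                                  # assumed correlation
cov = [[1.0, rho], [rho, 1.0]]
x, y = rng.multivariate_normal([1.0, 2.0], cov, size=2_000_000).T
r = x / y                                  # quotient of correlated normals

# If f(x) ~ f0 * x^-2 then P(|X| > t) ~ c / t, so t * P(|X| > t) flattens.
for t in (10, 20, 40, 80):
    print(t, t * np.mean(np.abs(r) > t))
```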

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, D.O.

    In a previous paper, Smallwood and Paez (1991) showed how to generate realizations of partially coherent stationary normal time histories with a specified cross-spectral density matrix. This procedure is generalized for the case of multiple inputs with a specified cross-spectral density function and a specified marginal probability density function (pdf) for each of the inputs. The specified pdfs are not required to be Gaussian. A zero memory nonlinear (ZMNL) function is developed for each input to transform a Gaussian or normal time history into a time history with a specified non-Gaussian distribution. The transformation functions have the property that a transformed time history will have nearly the same auto spectral density as the original time history. A vector of Gaussian time histories is then generated with the specified cross-spectral density matrix. These waveforms are then transformed into the required time history realizations using the ZMNL function.
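
    A common way to realize such a ZMNL transformation, plausibly close in spirit to the one described, is the monotone map Gaussian → uniform → target inverse CDF; the gamma target marginal below is an assumption for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
g = rng.standard_normal(100_000)    # stand-in Gaussian time history

def zmnl(gauss, target_ppf):
    """Zero-memory nonlinearity: Gaussian -> uniform via the normal CDF,
    then uniform -> target marginal via the target's inverse CDF. Monotone
    and instantaneous, so the temporal ordering of samples is untouched and
    the auto spectral density is only mildly distorted."""
    return target_ppf(stats.norm.cdf(gauss))

w = zmnl(g, stats.gamma(a=2.0, scale=1.0).ppf)   # assumed target marginal
print(stats.skew(w), stats.kurtosis(w))          # clearly non-Gaussian now
```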

  7. Technical Reports Prepared Under Contract N00014-76-C-0475.

    DTIC Science & Technology

    1987-05-29

    264 Approximations to Densities in Geometric Probability, H. Solomon, M.A. Stephens, 10/27/78. Technical Report No. / Title / Author / Date: 265 Sequential ...Certain Multivariate Normal Probabilities, S. Iyengar, 8/12/82; 323 EDF Statistics for Testing for the Gamma Distribution with..., M.A. Stephens, 8/13/82; ...20-85 ...Nets; 360 Random Sequential Coding By Hamming Distance, Yoshiaki Itoh, Herbert Solomon, 07-11-85; 361 Transforming Censored Samples And Testing Fit

  8. Use of collateral information to improve LANDSAT classification accuracies

    NASA Technical Reports Server (NTRS)

    Strahler, A. H. (Principal Investigator)

    1981-01-01

    Methods to improve LANDSAT classification accuracies were investigated, including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification that permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system, exercised to model a desired output information layer as a function of input layers of raster-format collateral and image data base layers.

  9. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupšys, P.

    A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.

  10. Back to Normal! Gaussianizing posterior distributions for cosmological probes

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2014-05-01

    We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
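
    scipy exposes the maximum-likelihood search over the Box-Cox family directly, so the core of the method can be sketched in a few lines; the log-normal "chain" below is a stand-in for a real posterior sample.

```python
import numpy as np
from scipy import stats, special

rng = np.random.default_rng(7)
chain = rng.lognormal(mean=0.0, sigma=0.6, size=50_000)  # skewed stand-in "posterior"

lam = stats.boxcox_normmax(chain, method='mle')  # ML-optimal Box-Cox exponent
z = special.boxcox(chain, lam)

# Gaussianity diagnostics before and after the transform.
print("before:", stats.skew(chain), stats.kurtosis(chain))
print("after :", stats.skew(z), stats.kurtosis(z))
```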

  11. Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin

    2018-02-01

    Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.

  12. Binary data corruption due to a Brownian agent

    NASA Astrophysics Data System (ADS)

    Newman, T. J.; Triampo, Wannapong

    1999-05-01

    We introduce a model of binary data corruption induced by a Brownian agent (active random walker) on a d-dimensional lattice. A continuum formulation allows the exact calculation of several quantities related to the density of corrupted bits ρ, for example, the mean of ρ and the density-density correlation function. Excellent agreement is found with the results from numerical simulations. We also calculate the probability distribution of ρ in d=1, which is found to be log-normal, indicating that the system is governed by extreme fluctuations.

  13. Analysis of Electronic Densities and Integrated Doses in Multiform Glioblastomas Stereotactic Radiotherapy

    NASA Astrophysics Data System (ADS)

    Barón-Aznar, C.; Moreno-Jiménez, S.; Celis, M. A.; Lárraga-Gutiérrez, J. M.; Ballesteros-Zebadúa, P.

    2008-08-01

    Integrated dose is the total energy delivered in a radiotherapy target. This physical parameter could be a predictor for complications such as brain edema and radionecrosis after stereotactic radiotherapy treatments for brain tumors. Integrated dose depends on the tissue density and volume. Using CT patient images from the National Institute of Neurology and Neurosurgery and BrainScan software, this work presents the mean density of 21 multiform glioblastomas, comparative results for normal tissue, and the estimated integrated dose for each case. The relationship between integrated dose and the probability of complications is discussed.
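
    The underlying arithmetic is a voxel sum: energy = dose (Gy, i.e. J/kg) × voxel mass from the CT-derived density. A toy sketch with assumed grids, not the clinical data:

```python
import numpy as np

rng = np.random.default_rng(8)
voxel_volume_cm3 = 0.1 ** 3                           # 1 mm cubic voxels
dose_gy = rng.uniform(10.0, 20.0, size=10_000)        # stand-in dose grid, Gy (J/kg)
density_g_cm3 = rng.normal(1.04, 0.02, size=10_000)   # stand-in tissue density

mass_kg = density_g_cm3 * voxel_volume_cm3 / 1000.0
integrated_dose_J = np.sum(dose_gy * mass_kg)         # total energy delivered
print(integrated_dose_J, "J")
```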

  14. Probability density function of non-reactive solute concentration in heterogeneous porous formations.

    PubMed

    Bellin, Alberto; Tonina, Daniele

    2007-10-30

    Available models of solute transport in heterogeneous formations fall short of providing a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume, and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that, under the hypothesis of statistical stationarity, leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model with the spatial moments replacing the statistical moments can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings, and for the first time it shows the superiority of the Beta model over both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
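
    Since the Beta model is fully characterized by the first two concentration moments, fitting it is a method-of-moments exercise. A minimal sketch with assumed mean and variance:

```python
from scipy import stats

def beta_from_moments(mean, var):
    """Method-of-moments Beta parameters for a concentration in (0, 1).
    Requires var < mean * (1 - mean)."""
    nu = mean * (1.0 - mean) / var - 1.0
    return mean * nu, (1.0 - mean) * nu

a, b = beta_from_moments(mean=0.2, var=0.01)   # assumed concentration moments
dist = stats.beta(a, b)
print("P(C > 0.5) =", dist.sf(0.5))   # probability of exceeding a threshold
```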

  15. Parasite transmission in social interacting hosts: Monogenean epidemics in guppies

    USGS Publications Warehouse

    Johnson, M.B.; Lafferty, K.D.; van Oosterhout, C.; Cable, J.

    2011-01-01

    Background: Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings: Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance: These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density. © 2011 Johnson et al.

  16. Parasite transmission in social interacting hosts: Monogenean epidemics in guppies

    USGS Publications Warehouse

    Johnson, Mirelle B.; Lafferty, Kevin D.; van Oosterhout, Cock; Cable, Joanne

    2011-01-01

    Background Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density.

  17. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper provides a discussion of optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. The flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e. the 90% probability flaw size, to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements for minimum required PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
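
    The binomial arithmetic behind the classic 29-flaw set is compact enough to show directly; the helper names below are ours, not the paper's.

```python
from math import ceil, log

def flaws_needed(pod=0.90, confidence=0.95):
    """Smallest n with zero allowed misses: passing n-of-n detections
    demonstrates POD >= pod at the stated confidence, via pod**n <= 1 - confidence.
    Gives n = 29 for the classic 90/95 point estimate."""
    return ceil(log(1.0 - confidence) / log(pod))

def prob_passing(true_pod, n):
    """Probability of passing an n-of-n demonstration (PPD) given the true POD."""
    return true_pod ** n

n = flaws_needed()                 # -> 29
print(n, prob_passing(0.90, n))    # ~0.047: a marginal technique rarely passes
print(prob_passing(0.98, n))       # ~0.56: even a good technique fails often
```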

  18. Modeling pore corrosion in normally open gold- plated copper connectors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien

    2008-09-01

    The goal of this study is to model the electrical response of gold-plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product, which blooms through defects in the gold layer, and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.

  19. Using satellite remote sensing to model and map the distribution of Bicknell's thrush (Catharus bicknelli) in the White Mountains of New Hampshire

    NASA Astrophysics Data System (ADS)

    Hale, Stephen Roy

    Landsat-7 Enhanced Thematic Mapper satellite imagery was used to model Bicknell's Thrush (Catharus bicknelli) distribution in the White Mountains of New Hampshire. The proof-of-concept was established for using satellite imagery in species-habitat modeling, where for the first time imagery spectral features were used to estimate a species-habitat model variable. The model predicted rising probabilities of thrush presence with decreasing dominant vegetation height, increasing elevation, and decreasing distance to the nearest Fir Sapling cover type. Solving the model at all locations required regressor estimates at every pixel, which were not available for the dominant vegetation height and elevation variables. The topographically normalized imagery features Normalized Difference Vegetation Index and Band 1 (blue) were used to estimate dominant vegetation height using multiple linear regression, and a Digital Elevation Model was used to estimate elevation. The distance to the nearest Fir Sapling cover type was obtained for each pixel from a land cover map specifically constructed for this project. The Bicknell's Thrush habitat model was derived using logistic regression, which produced the probability of detecting a singing male based on the pattern of model covariates. Model validation using Bicknell's Thrush data not used in model calibration revealed that the model accurately estimated thrush presence at probabilities ranging from 0 to <0.40 and from 0.50 to <0.60. Probabilities from 0.40 to <0.50 and greater than 0.60 significantly underestimated and overestimated presence, respectively. Applying the model to the study area illuminated an important implication for Bicknell's Thrush conservation. The model predicted increasing numbers of presences and increasing relative density with rising elevation, which comes with a concomitant decrease in land area. Because lower-density habitats cover greater land area, they may account for more total individuals and reproductive output than higher-density but less extensive areas. Efforts to conserve areas of highest individual density, under the assumption that density reflects habitat quality, could therefore target only a small fraction of the total population.

  20. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to adequate assessment of its mechanical effects on differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a stratum of silty clay in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates multiple precisions of different geotechnical investigations and sources of uncertainty. Single CPT samplings were modeled as a rational probability density curve by maximum entropy theory. A spatial prior multivariate probability density function (PDF) and a likelihood PDF of the CPT positions were built from the borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within the Bayesian reverse interpolation framework. The results were compared between Gaussian Sequential Stochastic Simulation and Bayesian methods. Differences between normally distributed single CPT samplings and the simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, whereas more informative estimations are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.

  1. TRPM7 Is Required for Normal Synapse Density, Learning, and Memory at Different Developmental Stages.

    PubMed

    Liu, Yuqiang; Chen, Cui; Liu, Yunlong; Li, Wei; Wang, Zhihong; Sun, Qifeng; Zhou, Hang; Chen, Xiangjun; Yu, Yongchun; Wang, Yun; Abumaria, Nashat

    2018-06-19

    The TRPM7 chanzyme contributes to several biological and pathological processes in different tissues. However, its role in the CNS under physiological conditions remains unclear. Here, we show that TRPM7 knockdown in hippocampal neurons reduces structural synapse density. The synapse density is rescued by the α-kinase domain in the C terminus but not by the ion channel region of TRPM7 or by increasing extracellular concentrations of Mg2+ or Zn2+. Early postnatal conditional knockout of TRPM7 in mice impairs learning and memory and reduces synapse density and plasticity. TRPM7 knockdown in the hippocampus of adult rats also impairs learning and memory and reduces synapse density and synaptic plasticity. In knockout mice, restoring expression of the α-kinase domain in the brain rescues synapse density/plasticity and memory, probably by interacting with and phosphorylating cofilin. These results suggest that brain TRPM7 is important for normal synaptic and cognitive functions under physiological, non-pathological conditions. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  2. Functional Data Analysis in NTCP Modeling: A New Method to Explore the Radiation Dose-Volume Effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benadjaoud, Mohamed Amine, E-mail: mohamedamine.benadjaoud@gustaveroussy.fr; Université Paris sud, Le Kremlin-Bicêtre; Institut Gustave Roussy, Villejuif

    2014-11-01

    Purpose/Objective(s): To describe a novel method to explore radiation dose-volume effects. Functional data analysis is used to investigate the information contained in differential dose-volume histograms. The method is applied to the normal tissue complication probability modeling of rectal bleeding (RB) for patients irradiated in the prostatic bed by 3-dimensional conformal radiation therapy. Methods and Materials: Kernel density estimation was used to estimate the individual probability density functions from each of the 141 rectum differential dose-volume histograms. Functional principal component analysis was performed on the estimated probability density functions to explore the variation modes in the dose distribution. The functional principal components were then tested for association with RB using logistic regression adapted to functional covariates (FLR). For comparison, 3 other normal tissue complication probability models were considered: the Lyman-Kutcher-Burman model, a logistic model based on standard dosimetric parameters (LM), and a logistic model based on multivariate principal component analysis (PCA). Results: The incidence rate of grade ≥2 RB was 14%. V65Gy was the most predictive factor for the LM (P=.058). The best fit for the Lyman-Kutcher-Burman model was obtained with n = 0.12, m = 0.17, and TD50 = 72.6 Gy. In PCA and FLR, the components that describe the interdependence between the relative volumes exposed at intermediate and high doses were the most correlated to the complication. The FLR parameter function leads to a better understanding of the volume effect by including the treatment specificity in the delivered mechanistic information. For RB grade ≥2, patients with advanced age are significantly at risk (odds ratio, 1.123; 95% confidence interval, 1.03-1.22), and the fits of the LM, PCA, and functional principal component analysis models are significantly improved by including this clinical factor. Conclusion: Functional data analysis provides an attractive method for flexibly estimating the dose-volume effect for normal tissues in external radiation therapy.

  3. Diffuse reflection from a stochastically bounded, semi-infinite medium

    NASA Technical Reports Server (NTRS)

    Lumme, K.; Peltoniemi, J. I.; Irvine, W. M.

    1990-01-01

    In order to determine the diffuse reflection from a medium bounded by a rough surface, the problem of radiative transfer in a boundary layer characterized by a statistical distribution of heights is considered. For the case that the surface is defined by a multivariate normal probability density, the propagation probability for rays traversing the boundary layer is derived and, from that probability, a corresponding radiative transfer equation. A solution of the Eddington (two stream) type is found explicitly, and examples are given. The results should be applicable to reflection from the regoliths of solar system bodies, as well as from a rough ocean surface.

  4. Analysis of Electronic Densities and Integrated Doses in Multiform Glioblastomas Stereotactic Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baron-Aznar, C.; Moreno-Jimenez, S.; Celis, M. A.

    2008-08-11

    Integrated dose is the total energy delivered in a radiotherapy target. This physical parameter could be a predictor for complications such as brain edema and radionecrosis after stereotactic radiotherapy treatments for brain tumors. Integrated dose depends on the tissue density and volume. Using CT patient images from the National Institute of Neurology and Neurosurgery and BrainScan© software, this work presents the mean density of 21 multiform glioblastomas, comparative results for normal tissue, and the estimated integrated dose for each case. The relationship between integrated dose and the probability of complications is discussed.

  5. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    PubMed

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies have pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographic (sEMG) data in several contexts, such as fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process. This small sample size induces errors in the estimated HOS parameters, hindering real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to small-sample-size effects and more accurate in sEMG PDF shape screening applications.
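
    The contrast between the two routes can be sketched quickly: HOS point estimates on a small sample versus a smoothed, kernel-density-based shape comparison. The log-normal stand-in sample and the L2 shape distance below are illustrative assumptions, not the paper's CSM statistics.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(9)
semg = rng.lognormal(mean=0.0, sigma=0.4, size=200)   # small stand-in sample

# HOS route: point estimates, noisy at this sample size.
print("skewness:", stats.skew(semg), " kurtosis:", stats.kurtosis(semg))

# Functional route: smooth the empirical pdf first, then compare its shape
# to a reference through an L2 distance between density curves.
kde = stats.gaussian_kde(semg)
x = np.linspace(semg.min(), semg.max(), 512)
ref = stats.norm(semg.mean(), semg.std(ddof=1)).pdf(x)
print("L2 distance to moment-matched normal:",
      np.sqrt(trapezoid((kde(x) - ref) ** 2, x)))
```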

  6. Statistical process control for residential treated wood

    Treesearch

    Patricia K. Lebow; Timothy M. Young; Stan Lebow

    2017-01-01

    This paper is the first stage of a study that attempts to improve the process of manufacturing treated lumber through the use of statistical process control (SPC). Analysis of industrial and auditing agency data sets revealed there are differences between the industry and agency probability density functions (pdf) for normalized retention data. Resampling of batches of...

  7. Hypothesis testing and earthquake prediction.

    PubMed

    Jackson, D D

    1996-04-30

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
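
    Tests (i) and (iii) can be made concrete with Poisson counting, the usual model when a conditional rate density is integrated over a space-time-magnitude window; all numbers below are illustrative.

```python
from scipy import stats

# Forecast: integrating the conditional rate density over the test window
# gives an expected count; under the usual independence assumption the
# number of events is Poisson-distributed.
expected = 8.5     # predicted number of earthquakes in the window (assumed)
observed = 14      # actual count during the test period (assumed)

# Test (i): is the observed count plausible under the forecast?
p_high = stats.poisson.sf(observed - 1, expected)   # P(N >= observed)
print("P(N >= 14 | forecast) =", p_high)

# Test (iii): likelihood ratio against a null hypothesis with a lower
# "normal" rate; positive log values favour the forecast over the null.
null_rate = 6.0
llr = stats.poisson.logpmf(observed, expected) - stats.poisson.logpmf(observed, null_rate)
print("log likelihood ratio =", llr)
```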

  8. Hypothesis testing and earthquake prediction.

    PubMed Central

    Jackson, D D

    1996-01-01

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions. PMID:11607663

  9. In vivo NMR imaging of sodium-23 in the human head.

    PubMed

    Hilal, S K; Maudsley, A A; Ra, J B; Simon, H E; Roschmann, P; Wittekoek, S; Cho, Z H; Mun, S K

    1985-01-01

    We report the first clinical nuclear magnetic resonance (NMR) images of cerebral sodium distribution in normal volunteers and in patients with a variety of pathological lesions. We have used a 1.5 T NMR magnet system. When compared with proton distribution, sodium shows a greater variation in its concentration from tissue to tissue and from normal to pathological conditions. Image contrast calculated on the basis of sodium concentration is 7 to 18 times greater than that of proton spin density. Normal images emphasize the extracellular compartments. In the clinical studies, areas of recent or old cerebral infarction and tumors show a pronounced increase of sodium content (300-400%). Actual measurements of image density values indicate that there is probably a further accentuation of the contrast by the increased "NMR visibility" of sodium in infarcted tissue. Sodium imaging may prove to be a more sensitive means for early detection of some brain disorders than other imaging methods.

  10. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
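
    For the special case of a completely random (homogeneous Poisson) pattern, the relation between density and squared nearest-neighbor distance has a classical closed form; the Python sketch below illustrates that special case only, not the paper's nonparametric order-statistics estimator:

    # Poisson special case of the distance method: for intensity lambda,
    # P(D > d) = exp(-lambda * pi * d^2), so lambda * pi * D^2 ~ Exp(1) and
    # the squared distances D^2 are exponential with mean 1/(lambda * pi).
    import numpy as np

    rng = np.random.default_rng(1)
    true_density = 0.25            # plants per unit area (assumed for the demo)

    n = 200                        # number of random sample points
    d2 = rng.exponential(1.0 / (np.pi * true_density), size=n)  # D^2 values

    lam_mle = n / (np.pi * d2.sum())             # maximum-likelihood estimator
    lam_unbiased = (n - 1) / (np.pi * d2.sum())  # unbiased version
    print(f"true={true_density}, MLE={lam_mle:.3f}, unbiased={lam_unbiased:.3f}")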

  11. Optimizing Probability of Detection Point Estimate Demonstration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. These NDE methods are intended to detect real flaws such as cracks and crack-like flaws. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible.
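
    The binomial calculation behind such a demonstration is short; the following Python sketch (our own illustration, not the mh1823 software) shows why 29 successes out of 29 supports a 90% POD statement at roughly 95% confidence:

    # Probability of passing a POD demonstration with n same-size flaws and a
    # given number of allowed misses, as a function of the true POD.
    from math import comb

    def prob_pass(pod: float, n: int = 29, misses_allowed: int = 0) -> float:
        """Binomial probability of passing the demonstration."""
        return sum(comb(n, k) * (1 - pod) ** k * pod ** (n - k)
                   for k in range(misses_allowed + 1))

    print(prob_pass(0.90))  # ~0.047: a 0.90-POD procedure rarely passes 29/29,
                            # which is what gives the 90/95 POD statement
    print(prob_pass(0.99))  # ~0.747: a high-POD procedure usually passes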

  12. Statistical properties of two sine waves in Gaussian noise.

    NASA Technical Reports Server (NTRS)

    Esposito, R.; Wilson, L. R.

    1973-01-01

    A detailed study is presented of some statistical properties of a stochastic process that consists of the sum of two sine waves of unknown relative phase and a normal process. Since none of the statistics investigated seems to yield a closed-form expression, all the derivations are cast in a form that is particularly suitable for machine computation. Specifically, results are presented for the probability density function (pdf) of the envelope and the instantaneous value, the moments of these distributions, and the corresponding cumulative distribution function (cdf).
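
    Because the derivations are aimed at machine computation, a Monte Carlo check is easy to set up; the Python sketch below (with invented amplitudes and noise level) approximates the envelope pdf by sampling the complex sum of the two phasors and the noise:

    # At a fixed instant, the complex baseband sample of the process is
    # z = A1*e^{i p1} + A2*e^{i p2} + (nc + i*ns); the envelope is |z|.
    import numpy as np

    rng = np.random.default_rng(2)
    A1, A2, sigma = 1.0, 0.6, 0.5        # hypothetical amplitudes, noise std
    N = 1_000_000

    p1 = rng.uniform(0, 2 * np.pi, N)    # unknown relative phase -> uniform
    p2 = rng.uniform(0, 2 * np.pi, N)
    z = (A1 * np.exp(1j * p1) + A2 * np.exp(1j * p2)
         + rng.normal(0, sigma, N) + 1j * rng.normal(0, sigma, N))
    env = np.abs(z)

    # Histogram approximation to the envelope pdf and its first two moments
    pdf, edges = np.histogram(env, bins=200, density=True)
    print(f"mean={env.mean():.4f}, second moment={np.mean(env**2):.4f}")
    # second moment should approach A1^2 + A2^2 + 2*sigma^2 = 1.86 here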

  13. Analysis of data from NASA B-57B gust gradient program

    NASA Technical Reports Server (NTRS)

    Frost, W.; Lin, M. C.; Chang, H. P.; Ringnes, E.

    1985-01-01

    Statistical analysis of the turbulence measured in flight 6 of the NASA B-57B over Denver, Colorado, from July 7 to July 23, 1982, included calculation of average turbulence parameters, integral length scales, probability density functions, single-point autocorrelation coefficients, two-point autocorrelation coefficients, normalized autospectra, normalized two-point autospectra, and two-point cross spectra for gust velocities. The single-point autocorrelation coefficients were compared with the theoretical model developed by von Karman. Theoretical analyses were developed which address the effects of spanwise gust distributions, using two-point spatial turbulence correlations.

  14. Cylinders out of a top hat: counts-in-cells for projected densities

    NASA Astrophysics Data System (ADS)

    Uhlemann, Cora; Pichon, Christophe; Codis, Sandrine; L'Huillier, Benjamin; Kim, Juhan; Bernardeau, Francis; Park, Changbom; Prunet, Simon

    2018-06-01

    Large deviation statistics is implemented to predict the statistics of cosmic densities in cylinders, applicable to photometric surveys. It yields analytical predictions, accurate to a few per cent, for the one-point probability distribution function (PDF) of densities in concentric or compensated cylinders, and also captures the density dependence of their angular clustering (cylinder bias). All predictions are found to be in excellent agreement with the cosmological simulation Horizon Run 4 in the quasi-linear regime where standard perturbation theory normally breaks down. These results are combined with a simple local bias model that relates dark matter and tracer densities in cylinders and validated on simulated halo catalogues. This formalism can be used to probe cosmology with existing and upcoming photometric surveys like DES, Euclid or WFIRST containing billions of galaxies.

  15. Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1985-01-01

    Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.

  16. Application of continuous normal-lognormal bivariate density functions in a sensitivity analysis of municipal solid waste landfill.

    PubMed

    Petrovic, Igor; Hip, Ivan; Fredlund, Murray D

    2016-09-01

    The variability of untreated municipal solid waste (MSW) shear strength parameters, namely cohesion and shear friction angle, with respect to waste stability problems, is of primary concern due to the strong heterogeneity of MSW. A large number of MSW shear strength parameters (friction angle and cohesion) were collected from the published literature and analyzed. The basic statistical analysis showed that the central tendency of both shear strength parameters fits reasonably well within the ranges of recommended values proposed by different authors. In addition, it was established that the correlation between shear friction angle and cohesion is not strong, but still significant. Through use of a distribution-fitting method it was found that the shear friction angle could be fitted with a normal probability density function, while cohesion follows a log-normal density function. The continuous normal-lognormal bivariate density function was therefore selected as an adequate model to ascertain rational boundary values ("confidence interval") for MSW shear strength parameters. It was concluded that a curve with a 70% confidence level generates a "confidence interval" within reasonable limits. With respect to the decomposition stage of the waste material, three different ranges of appropriate shear strength parameters were indicated. The defined parameters were then used as input parameters for an Alternative Point Estimate Method (APEM) stability analysis on a real case scenario, the Jakusevec landfill. The Jakusevec landfill is the disposal site of Zagreb, the capital of Croatia. The analysis shows that in the case of a dry landfill the most significant factor influencing the safety factor was the shear friction angle of old, decomposed waste material, while in the case of a landfill with a significant leachate level the most significant factor was the cohesion of old, decomposed waste material. The analysis also showed that a satisfactory level of performance with a small probability of failure was produced for the standard practice design of waste landfills, as well as for an analysis scenario immediately after landfill closure. Copyright © 2015 Elsevier Ltd. All rights reserved.
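
    A normal-lognormal bivariate model of this kind can be sampled by exponentiating one coordinate of a correlated Gaussian pair; the Python sketch below uses invented parameter values, not the paper's fitted ones:

    # Normal-lognormal bivariate model: friction angle phi is normal; cohesion
    # c is lognormal, i.e. the exponential of a normal variable correlated
    # with phi. All numbers are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(3)

    mu_phi, sd_phi = 30.0, 6.0     # assumed friction angle stats (degrees)
    mu_lnc, sd_lnc = 2.3, 0.6      # assumed stats of ln(cohesion in kPa)
    rho = 0.35                     # assumed weak-but-significant correlation

    cov = [[sd_phi**2, rho * sd_phi * sd_lnc],
           [rho * sd_phi * sd_lnc, sd_lnc**2]]
    phi, ln_c = rng.multivariate_normal([mu_phi, mu_lnc], cov, size=10_000).T
    c = np.exp(ln_c)               # lognormal cohesion

    # Crude "confidence interval" boundaries from the central 70% of each
    # margin (the paper uses a 70% confidence curve of the joint density)
    lo_phi, hi_phi = np.percentile(phi, [15, 85])
    lo_c, hi_c = np.percentile(c, [15, 85])
    print(f"phi 70% range: {lo_phi:.1f}-{hi_phi:.1f} deg, "
          f"c 70% range: {lo_c:.1f}-{hi_c:.1f} kPa")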

  17. Passive microrheology of normal and cancer cells after ML7 treatment by atomic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyapunova, Elena, E-mail: lyapunova@icmm.ru; Ural Federal University, Kuibyishev Str. 48, Ekaterinburg, 620000; Nikituk, Alexander, E-mail: nas@icmm.ru

    Mechanical properties of living cancer and normal thyroidal cells were investigated by atomic force microscopy (AFM). Cell mechanics was compared before and after treatment with ML7, which is known to reduce myosin activity and induce softening of cell structures. We recorded force curves with an extended dwell time of 6 seconds in contact, at maximum forces from 500 pN to 1 nN. Data were analyzed within different frameworks: a Hertz fit was applied in order to evaluate differences in Young's moduli among cell types and conditions, while the fluctuations of the cantilever in contact with cells were analyzed with both conventional algorithms (probability density function and power spectral density) and multifractal detrended fluctuation analysis (MF-DFA). We found that cancer cells were softer than normal cells and ML7 had a substantial softening effect on normal cells, but only a marginal one on cancer cells. Moreover, we observed that all recorded signals for normal and cancer cells were monofractal with small differences between their scaling parameters. Finally, the applicability of wavelet-based methods of data analysis for the discrimination of different cell types is discussed.

  18. Biological dose estimation for charged-particle therapy using an improved PHITS code coupled with a microdosimetric kinetic model.

    PubMed

    Sato, Tatsuhiko; Kase, Yuki; Watanabe, Ritsuko; Niita, Koji; Sihver, Lembit

    2009-01-01

    Microdosimetric quantities such as lineal energy, y, are better indexes for expressing the RBE of HZE particles in comparison to LET. However, the use of microdosimetric quantities in computational dosimetry is severely limited because of the difficulty in calculating their probability densities in macroscopic matter. We therefore improved the particle transport simulation code PHITS, providing it with the capability of estimating the microdosimetric probability densities in a macroscopic framework by incorporating a mathematical function that can instantaneously calculate the probability densities around the trajectory of HZE particles with a precision equivalent to that of a microscopic track-structure simulation. A new method for estimating biological dose, the product of physical dose and RBE, from charged-particle therapy was established using the improved PHITS coupled with a microdosimetric kinetic model. The accuracy of the biological dose estimated by this method was tested by comparing the calculated physical doses and RBE values with the corresponding data measured in a slab phantom irradiated with several kinds of HZE particles. The simulation technique established in this study will help to optimize the treatment planning of charged-particle therapy, thereby maximizing the therapeutic effect on tumors while minimizing unintended harmful effects on surrounding normal tissues.

  19. An Efficient Downlink Scheduling Strategy Using Normal Graphs for Multiuser MIMO Wireless Systems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Wu, Cheng-Hsuan; Lee, Yao-Nan; Wen, Chao-Kai

    Inspired by the success of the low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we propose transforming the downlink multiuser multiple-input multiple-output scheduling problem into an LDPC-like problem using the normal graph. Based on the normal graph framework, soft information, which indicates the probability that each user will be scheduled to transmit packets at the access point through a specified angle-frequency sub-channel, is exchanged among the local processors to iteratively optimize the multiuser transmission schedule. Computer simulations show that the proposed algorithm can efficiently schedule simultaneous multiuser transmission which then increases the overall channel utilization and reduces the average packet delay.

  20. Probability density functions for CP-violating rephasing invariants

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-François; Giasson, Nicolas; Marleau, Luc

    2018-05-01

    The implications of the anarchy principle on CP violation in the lepton sector are investigated. A systematic method is introduced to compute the probability density functions for the CP-violating rephasing invariants of the PMNS matrix from the Haar measure relevant to the anarchy principle. Contrary to the CKM matrix which is hierarchical, it is shown that the Haar measure, and hence the anarchy principle, are very likely to lead to the observed PMNS matrix. Predictions on the CP-violating Dirac rephasing invariant |jD| and Majorana rephasing invariant |j1| are also obtained. They correspond to ⟨|jD|⟩_Haar = π/105 ≈ 0.030 and ⟨|j1|⟩_Haar = 1/(6π) ≈ 0.053 respectively, in agreement with the experimental hint from T2K of |jD^exp| ≈ 0.032 ± 0.005 (or ≈ 0.033 ± 0.003) for the normal (or inverted) hierarchy.
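
    The Haar-measure average can be reproduced numerically by sampling random unitary matrices (QR decomposition of a complex Gaussian matrix with a phase correction) and evaluating the Jarlskog-type invariant; a Python sketch:

    # Monte Carlo estimate of <|jD|> under the Haar measure on U(3), to be
    # compared with the analytic value pi/105 quoted above. The phase fix
    # after QR is needed so the sample is exactly Haar-distributed.
    import numpy as np

    rng = np.random.default_rng(4)

    def haar_u3():
        z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
        q, r = np.linalg.qr(z)
        return q * (np.diag(r) / np.abs(np.diag(r)))  # column phase fix

    vals = []
    for _ in range(50_000):
        u = haar_u3()
        # Dirac rephasing (Jarlskog-type) invariant jD = Im(U11 U22 U12* U21*)
        vals.append(abs((u[0, 0] * u[1, 1]
                         * np.conj(u[0, 1]) * np.conj(u[1, 0])).imag))
    print(f"<|jD|> ~ {np.mean(vals):.4f}   (analytic: pi/105 ~ {np.pi/105:.4f})")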

  1. Simulation study on characteristics of long-range interaction in randomly asymmetric exclusion process

    NASA Astrophysics Data System (ADS)

    Zhao, Shi-Bo; Liu, Ming-Zhe; Yang, Lan-Ying

    2015-04-01

    In this paper we investigate the dynamics of an asymmetric exclusion process on a one-dimensional lattice with long-range hopping and random update via Monte Carlo simulations. Particles in the model first try to hop over successive unoccupied sites with a probability q, which differs from previous exclusion process models; the probability q may represent random access of particles. Numerical simulations for stationary particle currents, density profiles, and phase diagrams are obtained. There are three possible stationary phases: the low density (LD) phase, the high density (HD) phase, and the maximal current (MC) phase. Interestingly, the bulk density in the LD phase tends to zero, while the MC phase is governed by α, β, and q. The HD phase is nearly the same as in the normal TASEP, determined by the exit rate β. Theoretical analysis is in good agreement with the simulation results. The proposed model may provide a better understanding of random interaction dynamics in complex systems. Project supported by the National Natural Science Foundation of China (Grant Nos. 41274109 and 11104022), the Fund for Sichuan Youth Science and Technology Innovation Research Team (Grant No. 2011JTD0013), and the Creative Team Program of Chengdu University of Technology.
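
    A bare-bones simulation of this kind of model fits in a few lines; the Python sketch below uses the standard open-boundary TASEP with a simplified stand-in for the long-range rule (parameter values are arbitrary):

    # Open-boundary exclusion process with random update: entry rate alpha,
    # exit rate beta, and a hypothetical long-range move in which, with
    # probability q, a particle jumps over the whole run of empty sites
    # ahead of it instead of making the usual single-site hop.
    import numpy as np

    rng = np.random.default_rng(5)
    L, alpha, beta, q = 200, 0.3, 0.7, 0.2
    lattice = np.zeros(L, dtype=int)

    for step in range(200_000):
        i = rng.integers(-1, L)           # -1 plays the role of the entry site
        if i == -1:
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1            # injection at the left boundary
        elif i == L - 1:
            if lattice[i] == 1 and rng.random() < beta:
                lattice[i] = 0            # extraction at the right boundary
        elif lattice[i] == 1 and lattice[i + 1] == 0:
            if rng.random() < q:          # long-range: jump past the empty run
                j = i + 1
                while j + 1 < L and lattice[j + 1] == 0:
                    j += 1
                lattice[i], lattice[j] = 0, 1
            else:
                lattice[i], lattice[i + 1] = 0, 1   # normal TASEP hop

    print("bulk density ~", lattice[L // 4: 3 * L // 4].mean())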

  2. Polynomial probability distribution estimation using the method of moments

    PubMed Central

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949

  3. Polynomial probability distribution estimation using the method of moments.

    PubMed

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
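
    The core of the method is a linear system: matching the raw moments of a polynomial on a chosen support to the target moments. A Python sketch under our own choices of support, degree, and target distribution:

    # Moment matching for p(x) = sum_k c_k x^k on [a, b]: requiring
    # integral_a^b x^m p(x) dx = mu_m for m = 0..N (with mu_0 = 1) gives a
    # linear system A c = mu with A[m, k] = (b^(m+k+1) - a^(m+k+1))/(m+k+1).
    import numpy as np
    from scipy import stats

    a, b, N = 0.0, 5.0, 4                  # assumed support (covering nearly
                                           # all the mass) and degree

    # Target: raw moments of a lognormal distribution, as an example
    dist = stats.lognorm(s=0.5, scale=1.0)
    mu = np.array([dist.moment(m) for m in range(N + 1)])
    mu[0] = 1.0                            # normalization constraint

    M, K = np.meshgrid(np.arange(N + 1), np.arange(N + 1), indexing="ij")
    A = (b ** (M + K + 1) - a ** (M + K + 1)) / (M + K + 1)
    c = np.linalg.solve(A, mu)             # polynomial coefficients c_0..c_N

    x = np.linspace(a, b, 200)
    p = np.polyval(c[::-1], x)             # polynomial PDF approximation
    print("integral check:", np.trapz(p, x))   # should be close to 1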

  4. Bivariate sub-Gaussian model for stock index returns

    NASA Astrophysics Data System (ADS)

    Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka

    2017-11-01

    Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, and may not even have an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed-form densities, but instead use characteristic functions for comparison. The approach, applied to a pair of stock index returns, demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among other applications.

  5. The use of spatial dose gradients and probability density function to evaluate the effect of internal organ motion for prostate IMRT treatment planning

    NASA Astrophysics Data System (ADS)

    Jiang, Runqing; Barnett, Rob B.; Chow, James C. L.; Chen, Jeff Z. Y.

    2007-03-01

    The aim of this study is to investigate the effects of internal organ motion on IMRT treatment planning of prostate patients using a spatial dose gradient and probability density function. Spatial dose distributions were generated from a Pinnacle3 planning system using a co-planar, five-field intensity modulated radiation therapy (IMRT) technique. Five plans were created for each patient using equally spaced beams but shifting the angular displacement of the beam by 15° increments. Dose profiles taken through the isocentre in anterior-posterior (A-P), right-left (R-L) and superior-inferior (S-I) directions for IMRT plans were analysed by exporting RTOG file data from Pinnacle. The convolution of the 'static' dose distribution D0(x, y, z) and probability density function (PDF), denoted as P(x, y, z), was used to analyse the combined effect of repositioning error and internal organ motion. Organ motion leads to an enlarged beam penumbra. The amount of percentage mean dose deviation (PMDD) depends on the dose gradient and organ motion probability density function. Organ motion dose sensitivity was defined by the rate of change in PMDD with standard deviation of motion PDF and was found to increase with the maximum dose gradient in anterior, posterior, left and right directions. Due to common inferior and superior field borders of the field segments, the sharpest dose gradient will occur in the inferior or both superior and inferior penumbrae. Thus, prostate motion in the S-I direction produces the highest dose difference. The PMDD is within 2.5% when standard deviation is less than 5 mm, but the PMDD is over 2.5% in the inferior direction when standard deviation is higher than 5 mm in the inferior direction. Verification of prostate organ motion in the inferior directions is essential. The margin of the planning target volume (PTV) significantly impacts on the confidence of tumour control probability (TCP) and level of normal tissue complication probability (NTCP). Smaller margins help to reduce the dose to normal tissues, but may compromise the dose coverage of the PTV. Lower rectal NTCP can be achieved by either a smaller margin or a steeper dose gradient between PTV and rectum. With the same DVH control points, the rectum has lower complication in the seven-beam technique used in this study because of the steeper dose gradient between the target volume and rectum. The relationship between dose gradient and rectal complication can be used to evaluate IMRT treatment planning. The dose gradient analysis is a powerful tool to improve IMRT treatment plans and can be used for QA checking of treatment plans for prostate patients.

  6. The use of spatial dose gradients and probability density function to evaluate the effect of internal organ motion for prostate IMRT treatment planning.

    PubMed

    Jiang, Runqing; Barnett, Rob B; Chow, James C L; Chen, Jeff Z Y

    2007-03-07

    The aim of this study is to investigate the effects of internal organ motion on IMRT treatment planning of prostate patients using a spatial dose gradient and probability density function. Spatial dose distributions were generated from a Pinnacle3 planning system using a co-planar, five-field intensity modulated radiation therapy (IMRT) technique. Five plans were created for each patient using equally spaced beams but shifting the angular displacement of the beam by 15 degree increments. Dose profiles taken through the isocentre in anterior-posterior (A-P), right-left (R-L) and superior-inferior (S-I) directions for IMRT plans were analysed by exporting RTOG file data from Pinnacle. The convolution of the 'static' dose distribution D0(x, y, z) and probability density function (PDF), denoted as P(x, y, z), was used to analyse the combined effect of repositioning error and internal organ motion. Organ motion leads to an enlarged beam penumbra. The amount of percentage mean dose deviation (PMDD) depends on the dose gradient and organ motion probability density function. Organ motion dose sensitivity was defined by the rate of change in PMDD with standard deviation of motion PDF and was found to increase with the maximum dose gradient in anterior, posterior, left and right directions. Due to common inferior and superior field borders of the field segments, the sharpest dose gradient will occur in the inferior or both superior and inferior penumbrae. Thus, prostate motion in the S-I direction produces the highest dose difference. The PMDD is within 2.5% when standard deviation is less than 5 mm, but the PMDD is over 2.5% in the inferior direction when standard deviation is higher than 5 mm in the inferior direction. Verification of prostate organ motion in the inferior directions is essential. The margin of the planning target volume (PTV) significantly impacts on the confidence of tumour control probability (TCP) and level of normal tissue complication probability (NTCP). Smaller margins help to reduce the dose to normal tissues, but may compromise the dose coverage of the PTV. Lower rectal NTCP can be achieved by either a smaller margin or a steeper dose gradient between PTV and rectum. With the same DVH control points, the rectum has lower complication in the seven-beam technique used in this study because of the steeper dose gradient between the target volume and rectum. The relationship between dose gradient and rectal complication can be used to evaluate IMRT treatment planning. The dose gradient analysis is a powerful tool to improve IMRT treatment plans and can be used for QA checking of treatment plans for prostate patients.
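
    The convolution step is straightforward to reproduce in one dimension; the Python sketch below blurs an idealized flat dose profile with Gaussian motion PDFs of increasing width (all values invented) and reports a PMDD-like deviation:

    # Expected ("blurred") dose = static dose profile convolved with the
    # motion PDF, here a 1-D Gaussian along one axis.
    import numpy as np

    dx = 0.1                                  # cm per sample
    x = np.arange(-8, 8, dx)
    static_dose = np.where(np.abs(x) < 3, 100.0, 0.0)   # idealized flat field

    def blurred(dose, sigma_cm):
        k = np.exp(-0.5 * (x / sigma_cm) ** 2)
        k /= k.sum()                          # discrete motion PDF
        return np.convolve(dose, k, mode="same")

    target = np.abs(x) < 3                    # nominal target region
    for sigma in (0.3, 0.5, 1.0):
        d = blurred(static_dose, sigma)
        # mean absolute deviation inside the target, in % of the 100% dose
        pmdd = np.mean(np.abs(d[target] - static_dose[target]))
        print(f"sigma={sigma:.1f} cm  PMDD-like deviation ~ {pmdd:.2f}%")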

  7. Using hyperentanglement to enhance resolution, signal-to-noise ratio, and measurement time

    NASA Astrophysics Data System (ADS)

    Smith, James F.

    2017-03-01

    A hyperentanglement-based atmospheric imaging/detection system involving only a signal and an ancilla photon will be considered for optical and infrared frequencies. Only the signal photon will propagate in the atmosphere and its loss will be classical. The ancilla photon will remain within the sensor experiencing low loss. Closed form expressions for the wave function, normalization, density operator, reduced density operator, symmetrized logarithmic derivative, quantum Fisher information, quantum Cramer-Rao lower bound, coincidence probabilities, probability of detection, probability of false alarm, probability of error after M measurements, signal-to-noise ratio, quantum Chernoff bound, time-on-target expressions related to probability of error, and resolution will be provided. The effect of noise in every mode will be included as well as loss. The system will provide the basic design for an imaging/detection system functioning at optical or infrared frequencies that offers better than classical angular and range resolution. Optimization for enhanced resolution will be included. The signal-to-noise ratio will be increased by a factor equal to the number of modes employed during the hyperentanglement process. Likewise, the measurement time can be reduced by the same factor. The hyperentanglement generator will typically make use of entanglement in polarization, energy-time, orbital angular momentum and so on. Mathematical results will be provided describing the system's performance as a function of loss mechanisms and noise.

  8. New Concepts in the Evaluation of Biodegradation/Persistence of Chemical Substances Using a Microbial Inoculum

    PubMed Central

    Thouand, Gérald; Durand, Marie-José; Maul, Armand; Gancet, Christian; Blok, Han

    2011-01-01

    The European REACH Regulation (Registration, Evaluation, Authorization of CHemical substances) implies, among other things, the evaluation of the biodegradability of chemical substances produced by industry. A large set of test methods is available including detailed information on the appropriate conditions for testing. However, the inoculum used for these tests constitutes a “black box.” If biodegradation is achievable from the growth of a small group of specific microbial species with the substance as the only carbon source, the result of the test depends largely on the cell density of this group at “time zero.” If these species are relatively rare in an inoculum that is normally used, the likelihood of inoculating a test with sufficient specific cells becomes a matter of probability. Normally this probability increases with total cell density and with the diversity of species in the inoculum. Furthermore the history of the inoculum, e.g., a possible pre-exposure to the test substance or similar substances will have a significant influence on the probability. A high probability can be expected for substances that are widely used and regularly released into the environment, whereas a low probability can be expected for new xenobiotic substances that have not yet been released into the environment. Be that as it may, once the inoculum sample contains sufficient specific degraders, the performance of the biodegradation will follow a typical S shaped growth curve which depends on the specific growth rate under laboratory conditions, the so called F/M ratio (ratio between food and biomass) and the more or less toxic recalcitrant, but possible, metabolites. Normally regulators require the evaluation of the growth curve using a simple approach such as half-time. Unfortunately probability and biodegradation half-time are very often confused. As the half-time values reflect laboratory conditions which are quite different from environmental conditions (after a substance is released), these values should not be used to quantify and predict environmental behavior. The probability value could be of much greater benefit for predictions under realistic conditions. The main issue in the evaluation of probability is that the result is not based on a single inoculum from an environmental sample, but on a variety of samples. These samples can be representative of regional or local areas, climate regions, water types, and history, e.g., pristine or polluted. The above concept has provided us with a new approach, namely “Probabio.” With this approach, persistence is not only regarded as a simple intrinsic property of a substance, but also as the capability of various environmental samples to degrade a substance under realistic exposure conditions and F/M ratio. PMID:21863143

  9. VizieR Online Data Catalog: A catalog of exoplanet physical parameters (Foreman-Mackey+, 2014)

    NASA Astrophysics Data System (ADS)

    Foreman-Mackey, D.; Hogg, D. W.; Morton, T. D.

    2017-05-01

    The first ingredient for any probabilistic inference is a likelihood function, a description of the probability of observing a specific data set given a set of model parameters. In this particular project, the data set is a catalog of exoplanet measurements and the model parameters are the values that set the shape and normalization of the occurrence rate density. (2 data files).

  10. Individualized statistical learning from medical image databases: application to identification of brain lesions.

    PubMed

    Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos

    2014-04-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Individualized Statistical Learning from Medical Image Databases: Application to Identification of Brain Lesions

    PubMed Central

    Erus, Guray; Zacharaki, Evangelia I.; Davatzikos, Christos

    2014-01-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a “target-specific” feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject’s images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an “estimability” criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. PMID:24607564

  12. Muscle categorization using PDF estimation and Naive Bayes classification.

    PubMed

    Adel, Tameem M; Smith, Benn E; Stashuk, Daniel W

    2012-01-01

    The structure of motor unit potentials (MUPs) and their times of occurrence provide information about the motor units (MUs) that created them. As such, electromyographic (EMG) data can be used to categorize muscles as normal or suffering from a neuromuscular disease. Using pattern discovery (PD) allows clinicians to understand the rationale underlying a certain muscle characterization; i.e., it is transparent. Discretization is required in PD, which leads to some loss in accuracy. In this work, characterization techniques that are based on estimating probability density functions (PDFs) for each muscle category are implemented. Characterization probabilities of each motor unit potential train (MUPT) are obtained from these PDFs, and Bayes rule is then used to aggregate the MUPT characterization probabilities into muscle-level probabilities. Even though this technique is not as transparent as PD, its accuracy is higher than that of discrete PD. Ultimately, the goal is to use a technique that is based on both PDFs and PD and make it as transparent and as efficient as possible, but first it was necessary to thoroughly assess how accurate a fully continuous approach can be. Using Gaussian PDF estimation achieved improvements in muscle categorization accuracy over PD, and further improvements resulted from using feature value histograms to choose more representative PDFs; for instance, using a log-normal distribution to represent skewed histograms.
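
    The PDF-based characterization pipeline can be sketched in Python for a single feature as follows (synthetic data and made-up class statistics; the actual work uses many MUP features and also log-normal candidate PDFs):

    # Per-class Gaussian PDFs for one MUPT feature, aggregated across a
    # muscle's MUPTs with Bayes rule under a conditional-independence
    # (naive Bayes) assumption, in log space for numerical stability.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)

    # Hypothetical training data: one feature value per MUPT
    normal_train = rng.normal(10.0, 2.0, 300)
    myopathic_train = rng.normal(7.5, 2.5, 300)

    pdfs = {
        "normal": stats.norm(*stats.norm.fit(normal_train)),
        "myopathic": stats.norm(*stats.norm.fit(myopathic_train)),
    }
    prior = {"normal": 0.5, "myopathic": 0.5}

    def muscle_posterior(mupt_features):
        """Aggregate per-MUPT likelihoods into muscle-level probabilities."""
        log_post = {c: np.log(prior[c]) + pdfs[c].logpdf(mupt_features).sum()
                    for c in pdfs}
        m = max(log_post.values())
        w = {c: np.exp(v - m) for c, v in log_post.items()}
        z = sum(w.values())
        return {c: v / z for c, v in w.items()}

    test_muscle = rng.normal(8.0, 2.0, 12)    # 12 MUPTs from one test muscle
    print(muscle_posterior(test_muscle))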

  13. M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU

    NASA Astrophysics Data System (ADS)

    Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.

    2018-04-01

    Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.

  14. GPER and ERα expression in abnormal endometrial proliferations.

    PubMed

    Tica, Andrei Adrian; Tica, Oana Sorina; Georgescu, Claudia Valentina; Pirici, Daniel; Bogdan, Maria; Ciurea, Tudorel; Mogoantă, Stelian ŞtefăniŢă; Georgescu, Corneliu Cristian; Comănescu, Alexandru Cristian; Bălşeanu, Tudor Adrian; Ciurea, Raluca Niculina; Osiac, Eugen; Buga, Ana Maria; Ciurea, Marius Eugen

    2016-01-01

    G-protein coupled estrogen receptor 1 (GPER), a particular extranuclear estrogen receptor (ER), seems not to be significantly involved in normal female phenotype development but is especially associated with severe genital malignancies. This study investigated GPER expression in different types of normal and abnormal proliferative endometrium, and its correlation with the presence of ERα. GPER was much more highly expressed in the cytoplasm (than on the cell membrane), contrary to ERα, which was almost exclusively located in the nucleus. Both ERs' densities were higher in columnar epithelial than in stromal cells, in accordance with the higher estrogen sensitivity of epithelial cells. GPER and ERα density decreased as follows: complex endometrial hyperplasia (CEH) > simple endometrial hyperplasia (SHE) > normal proliferative endometrium (NPE) > atypical endometrial hyperplasia (AEH), ERα's density being constantly higher. In endometrial adenocarcinomas, both ERs were expressed at significantly lower levels and varied widely, but the GPER/ERα ratio was significantly increased in high-grade lesions. Nuclear ERα is responsible for the genomic (the most important) mechanism of action of estrogens, involved in cell growth and multiplication. In normal and benign proliferations, ERα expression is increased, as evidence of its effects on cells with conserved architecture; in atypical and especially in malignant cells, ERα's (and GPER's) density is much lower. Cytoplasmic GPER probably interferes with different tyrosine/protein kinase signaling pathways, also involved in cell growth and proliferation. In benign endometrial lesions, GPER's presence is, at least partially, the result of an inductor effect of ERα on GPER gene transcription. In high-grade lesions, the GPER/ERα ratio was increased, demonstrating that GPER is involved per se in malignant endometrial proliferations.

  15. Probability distributions for multimeric systems.

    PubMed

    Albert, Jaroslav; Rooman, Marianne

    2016-01-01

    We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method necessitates only two assumptions: the copy number of all species of molecule may be treated as continuous, and the probability density functions (pdf) are well approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments, which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package on Mathematica, we minimize a Euclidean distance function comprising a sum of the squared differences between the left- and right-hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.

  16. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    PubMed Central

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Considering that metro network expansion brings more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and, following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of routes, and the unobserved variance, respectively. Considering the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in a hierarchical Bayes formulation, and a Metropolis-Hastings sampling based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model shows good forecasting performance for route choice probability calculation and good application performance for transfer flow volume prediction. PMID:28591188

  17. Probabilities and statistics for backscatter estimates obtained by a scatterometer

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    Methods for the recovery of winds near the surface of the ocean from measurements of the normalized radar backscattering cross section must recognize and make use of the statistics (i.e., the sampling variability) of the backscatter measurements. Radar backscatter values from a scatterometer are random variables with expected values given by a model. A model relates backscatter to properties of the waves on the ocean, which are in turn generated by the winds in the atmospheric marine boundary layer. The effective wind speed and direction at a known height for a neutrally stratified atmosphere are the values to be recovered from the model. The probability density function for the backscatter values is a normal probability distribution with the notable feature that the variance is a known function of the expected value. The sources of signal variability, the effects of this variability on the wind speed estimation, and criteria for the acceptance or rejection of models are discussed. A modified maximum likelihood method for estimating wind vectors is described. Ways to make corrections for the kinds of errors found for the Seasat SASS model function are described, and applications to a new scatterometer are given.

  18. Detection of anomalous events

    DOEpatents

    Ferragut, Erik M.; Laska, Jason A.; Bridges, Robert A.

    2016-06-07

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The system can include a plurality of anomaly detectors that together implement an algorithm to identify low-probability events and detect atypical traffic patterns. The anomaly detector provides for comparability of disparate sources of data (e.g., network flow data and firewall logs). Additionally, the anomaly detector allows for regulatability, meaning that the algorithm can be user-configurable to adjust the number of false alerts. The anomaly detector can be used for a variety of probability density functions, including normal Gaussian distributions and irregular distributions, as well as functions associated with continuous or discrete variables.
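
    The scoring idea can be illustrated in Python with a single Gaussian feature (a toy sketch, not the patented system): fit a density to history, score events by negative log-probability, and let the alert threshold regulate the false-alert rate:

    # Probability-based anomaly scoring: low-probability events get high
    # scores; raising the threshold percentile reduces false alerts.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    history = rng.normal(200.0, 25.0, 5000)   # e.g. bytes/s for one host (made up)

    mu, sd = history.mean(), history.std()

    def score(x):
        return -stats.norm(mu, sd).logpdf(x)  # low probability -> high score

    threshold = np.percentile(score(history), 99.9)   # regulates alert volume
    for event in (210.0, 390.0):
        s = score(event)
        print(f"value={event}: score={s:.2f}, alert={s > threshold}")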

  19. Performance of synchronous optical receivers using atmospheric compensation techniques.

    PubMed

    Belmonte, Aniceto; Khan, Joseph

    2008-09-01

    We model the impact of atmospheric turbulence-induced phase and amplitude fluctuations on free-space optical links using synchronous detection. We derive exact expressions for the probability density function of the signal-to-noise ratio in the presence of turbulence. We consider the effects of log-normal amplitude fluctuations and Gaussian phase fluctuations, in addition to local oscillator shot noise, for both passive receivers and those employing active modal compensation of wave-front phase distortion. We compute error probabilities for M-ary phase-shift keying, and evaluate the impact of various parameters, including the ratio of receiver aperture diameter to the wave-front coherence diameter, and the number of modes compensated.

  20. Analytical modeling of electron energy loss spectroscopy of graphene: Ab initio study versus extended hydrodynamic model.

    PubMed

    Djordjević, Tijana; Radović, Ivan; Despoja, Vito; Lyon, Keenan; Borka, Duško; Mišković, Zoran L

    2018-01-01

    We present an analytical modeling of the electron energy loss (EEL) spectroscopy data for free-standing graphene obtained by scanning transmission electron microscope. The probability density for energy loss of fast electrons traversing graphene under normal incidence is evaluated using an optical approximation based on the conductivity of graphene given in the local, i.e., frequency-dependent form derived by both a two-dimensional, two-fluid extended hydrodynamic (eHD) model and an ab initio method. We compare the results for the real and imaginary parts of the optical conductivity in graphene obtained by these two methods. The calculated probability density is directly compared with the EEL spectra from three independent experiments and we find very good agreement, especially in the case of the eHD model. Furthermore, we point out that the subtraction of the zero-loss peak from the experimental EEL spectra has a strong influence on the analytical model for the EEL spectroscopy data. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Computing approximate random Delta v magnitude probability densities. [for spacecraft trajectory correction

    NASA Technical Reports Server (NTRS)

    Chadwick, C.

    1984-01-01

    This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
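
    The Monte Carlo part of the computation is compact; a Python sketch with made-up per-axis standard deviations:

    # Approximate statistics of |Delta v| when the three Cartesian components
    # are zero-mean Gaussians with unequal standard deviations.
    import numpy as np

    rng = np.random.default_rng(8)
    sigmas = np.array([1.0, 2.5, 0.7])        # m/s per axis (invented)

    dv = rng.normal(0.0, sigmas, size=(1_000_000, 3))
    mag = np.linalg.norm(dv, axis=1)

    print(f"mean |dv| = {mag.mean():.3f} m/s, std = {mag.std():.3f} m/s")
    print("percentiles (50/90/99):", np.percentile(mag, [50, 90, 99]).round(3))
    # A histogram of `mag` with density=True approximates the pdf; its
    # cumulative sum approximates the cdf used for propellant sizing.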

  2. On the probability distribution function of the mass surface density of molecular clouds. I

    NASA Astrophysics Data System (ADS)

    Fischera, Jörg

    2014-05-01

    The probability distribution function (PDF) of the mass surface density is an essential characteristic of the structure of molecular clouds or the interstellar medium in general. Observations of the PDF of molecular clouds indicate a composition of a broad distribution around the maximum and a decreasing tail at high mass surface densities. The first component is attributed to the random distribution of gas which is modeled using a log-normal function while the second component is attributed to condensed structures modeled using a simple power-law. The aim of this paper is to provide an analytical model of the PDF of condensed structures which can be used by observers to extract information about the condensations. The condensed structures are considered to be either spheres or cylinders with a truncated radial density profile at cloud radius r_cl. The assumed profile is of the form ρ(r) = ρ_c/(1 + (r/r_0)²)^(n/2) for arbitrary power n, where ρ_c and r_0 are the central density and the inner radius, respectively. An implicit function is obtained which either truncates (sphere) or has a pole (cylinder) at maximal mass surface density. The PDF of spherical condensations and the asymptotic PDF of cylinders in the limit of infinite overdensity ρ_c/ρ(r_cl) flatten for steeper density profiles and have a power-law asymptote at low and high mass surface densities and a well defined maximum. The power index of the asymptote Σ^(-γ) of the logarithmic PDF (Σ P(Σ)) in the limit of high mass surface densities is given by γ = (n + 1)/(n - 1) - 1 (spheres) or by γ = n/(n - 1) - 1 (cylinders in the limit of infinite overdensity). Appendices are available in electronic form at http://www.aanda.org

  3. The effect of incremental changes in phonotactic probability and neighborhood density on word learning by preschool children

    PubMed Central

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose: Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multilevel modeling. Results: A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density; here, word learning improved as density increased across all quartiles. Conclusion: Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005

  4. Generating log-normal mock catalog of galaxies in redshift space

    NASA Astrophysics Data System (ADS)

    Agrawal, Aniket; Makiya, Ryu; Chiang, Chi-Ting; Jeong, Donghui; Saito, Shun; Komatsu, Eiichiro

    2017-10-01

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
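
    A one-dimensional toy version of the sampling recipe (the public code works on 3-D gridded fields with a target power spectrum) can be written in Python as follows:

    # Draw a Gaussian field, exponentiate to get a log-normal density
    # contrast with zero mean, then Poisson-sample galaxy counts per cell.
    import numpy as np

    rng = np.random.default_rng(9)
    ncell, nbar, sigma_g = 4096, 2.0, 0.8  # cells, mean galaxies/cell, field std

    g = rng.normal(0.0, sigma_g, ncell)    # Gaussian field (uncorrelated here;
                                           # the real code draws it with a
                                           # target power spectrum)
    delta = np.exp(g - 0.5 * sigma_g**2) - 1   # log-normal density contrast,
                                               # with <delta> = 0 by construction
    counts = rng.poisson(nbar * (1.0 + delta))

    print("mean count:", counts.mean(), " min delta:", delta.min())  # delta > -1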

  5. A Cross-Sectional Comparison of the Effects of Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.

    2010-01-01

    Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…

  6. Large Fluctuations for Spatial Diffusion of Cold Atoms

    NASA Astrophysics Data System (ADS)

    Aghion, Erez; Kessler, David A.; Barkai, Eli

    2017-06-01

    We use a new approach to study the large fluctuations of a heavy-tailed system, where the standard large-deviations principle does not apply. Large-deviations theory deals with tails of probability distributions and the rare events of random processes, for example, spreading packets of particles. Mathematically, it concerns the exponential falloff of the density of thin-tailed systems. Here we investigate the spatial density Pt(x ) of laser-cooled atoms, where at intermediate length scales the shape is fat tailed. We focus on the rare events beyond this range, which dominate important statistical properties of the system. Through a novel friction mechanism induced by the laser fields, the density is explored with the recently proposed non-normalized infinite-covariant density approach. The small and large fluctuations give rise to a bifractal nature of the spreading packet. We derive general relations which extend our theory to a class of systems with multifractal moments.

  7. Automatically-generated rectal dose constraints in intensity-modulated radiation therapy for prostate cancer

    NASA Astrophysics Data System (ADS)

    Hwang, Taejin; Kim, Yong Nam; Kim, Soo Kon; Kang, Sei-Kwon; Cheong, Kwang-Ho; Park, Soah; Yoon, Jai-Woong; Han, Taejin; Kim, Haeyoung; Lee, Meyeon; Kim, Kyoung-Joo; Bae, Hoonsik; Suh, Tae-Suk

    2015-06-01

    The dose constraint during prostate intensity-modulated radiation therapy (IMRT) optimization should be patient-specific for better rectum sparing. The aims of this study are to suggest a novel method for automatically generating a patient-specific dose constraint by using an experience-based dose volume histogram (DVH) of the rectum and to evaluate the potential of such a dose constraint qualitatively. The normal tissue complication probabilities (NTCPs) of the rectum with respect to V_%ratio in our study were divided into three groups, where V_%ratio was defined as the percent ratio of the rectal volume overlapping the planning target volume (PTV) to the rectal volume: (1) the rectal NTCPs in the previous study (clinical data), (2) those statistically generated by using the standard normal distribution (calculated data), and (3) those generated by combining the calculated data and the clinical data (mixed data). In the calculated data, a random number whose mean value was on the fitted curve described in the clinical data and whose standard deviation was 1% was generated by using the `randn' function in the MATLAB program and was used. For each group, we validated whether the probability density function (PDF) of the rectal NTCP could be automatically generated with the density estimation method by using a Gaussian kernel. The results revealed that the rectal NTCP probability increased in proportion to V_%ratio, that the predictive rectal NTCP was patient-specific, and that the starting point of IMRT optimization for a given patient might differ. The PDF of the rectal NTCP was obtained automatically for each group, except that the smoothness of the probability distribution increased with increasing number of data and with increasing window width. We showed that during prostate IMRT optimization, patient-specific dose constraints could be automatically generated, and that our method could reduce the IMRT optimization time as well as maintain the IMRT plan quality.
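
    The two statistical ingredients, randn-style generation of NTCP values around a fitted curve and Gaussian-kernel density estimation, can be sketched in Python as follows (the trend line and value ranges are invented):

    # (1) Generate rectal-NTCP values around a hypothetical fitted trend with
    # Gaussian noise of 1% standard deviation, as described in the abstract;
    # (2) estimate the NTCP probability density with a Gaussian kernel.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(10)

    v_ratio = rng.uniform(0, 30, 150)            # % of rectum overlapping PTV
    ntcp_trend = 2.0 + 0.4 * v_ratio             # hypothetical fitted curve (%)
    ntcp = ntcp_trend + rng.normal(0.0, 1.0, v_ratio.size)   # sd = 1%

    kde = gaussian_kde(ntcp)                     # Gaussian-kernel density estimate
    grid = np.linspace(ntcp.min() - 2, ntcp.max() + 2, 200)
    pdf = kde(grid)
    print("mode of estimated NTCP pdf: %.2f%%" % grid[pdf.argmax()])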

  8. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
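
    A rough Python analogue of the procedure, under assumed distribution parameters: draw discrete random load histories from a few of the named non-Gaussian families, extract the peaks, and count exceedances of chosen load levels:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000

        # Discrete random load histories with non-Gaussian amplitude
        # distributions (a few of the families named in the abstract).
        histories = {
            "log-normal":  rng.lognormal(mean=0.0, sigma=0.5, size=n),
            "Weibull":     rng.weibull(a=1.5, size=n),
            "exponential": rng.exponential(scale=1.0, size=n),
        }

        def peaks(x):
            """Interior local maxima of a discrete load history."""
            return x[1:-1][(x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])]

        for name, x in histories.items():
            p = peaks(x)
            # Counts of peaks exceeding a set of load levels, the quantity
            # compared against maneuver-load exceedance curves.
            levels = np.quantile(p, [0.5, 0.9, 0.99])
            exceed = [int((p > L).sum()) for L in levels]
            print(name, "peaks:", p.size, "exceedances:", exceed)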

  9. Understanding the Influence of Turbulence in Imaging Fourier-Transform Spectrometry of Smokestack Plumes

    DTIC Science & Technology

    2011-03-01

    capability of FTS to estimate plume effluent concentrations by comparing intrusive measurements of aircraft engine exhaust with those from an FTS. A... turbojet engine. Temporal averaging was used to reduce SCAs in the spectra, and spatial maps of temperature and concentration were generated. The time...density function (PDF) is defined as the derivative of the CDF, and describes the probability of obtaining a given value of X. For a normally

  10. Mixture EMOS model for calibrating ensemble forecasts of wind speed.

    PubMed

    Baran, S; Lerch, S

    2016-03-01

    Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-Range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors. Environmetrics published by John Wiley & Sons Ltd.
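
    The predictive density at the heart of the model can be sketched as follows; the parameter values are invented for illustration, and the affine link to the ensemble forecasts and the scoring-rule optimization are omitted:

        import numpy as np
        from scipy.stats import truncnorm, lognorm

        def mixture_emos_pdf(x, w, mu, sigma, m, s):
            """Weighted TN/LN mixture density for wind speed (non-negative).

            w        : weight of the truncated-normal component
            mu, sigma: location/scale of the normal truncated at zero
            m, s     : scale/shape of the log-normal component
            In the full EMOS model these parameters are affine functions of
            the ensemble forecasts, fitted by optimizing a proper scoring
            rule over a rolling training period."""
            a = (0.0 - mu) / sigma              # truncation point at zero
            tn = truncnorm.pdf(x, a, np.inf, loc=mu, scale=sigma)
            ln = lognorm.pdf(x, s, scale=m)
            return w * tn + (1.0 - w) * ln

        x = np.linspace(0.0, 25.0, 6)
        print(mixture_emos_pdf(x, w=0.6, mu=6.0, sigma=2.5, m=5.0, s=0.4))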

  11. Study of sea-surface slope distribution and its effect on radar backscatter based on Global Precipitation Measurement Ku-band precipitation radar measurements

    NASA Astrophysics Data System (ADS)

    Yan, Qiushuang; Zhang, Jie; Fan, Chenqing; Wang, Jing; Meng, Junmin

    2018-01-01

    The collocated normalized radar backscattering cross-section measurements from the Global Precipitation Measurement (GPM) Ku-band precipitation radar (KuPR) and the winds from the moored buoys are used to study the effect of different sea-surface slope probability density functions (PDFs), including the Gaussian PDF, the Gram-Charlier PDF, and the Liu PDF, on the geometrical optics (GO) model predictions of the radar backscatter at low incidence angles (0 deg to 18 deg) at different sea states. First, the peakedness coefficient in the Liu distribution is determined using the collocations at the normal incidence angle, and the results indicate that the peakedness coefficient is a nonlinear function of the wind speed. Then, the performance of the modified Liu distribution (i.e., the Liu distribution using the obtained peakedness coefficient estimate), the Gaussian distribution, and the Gram-Charlier distribution is analyzed. The results show that the GO model predictions with the modified Liu distribution agree best with the KuPR measurements, followed by the predictions with the Gaussian distribution, while the predictions with the Gram-Charlier distribution show larger differences because the total or the slick-filtered, rather than the radar-filtered, probability density is included in the distribution. The best-performing distribution varies with both incidence angle and wind speed.

  12. Relationship between the column density distribution and evolutionary class of molecular clouds as viewed by ATLASGAL

    NASA Astrophysics Data System (ADS)

    Abreu-Vicente, J.; Kainulainen, J.; Stutz, A.; Henning, Th.; Beuther, H.

    2015-09-01

    We present the first study of the relationship between the column density distribution of molecular clouds within nearby Galactic spiral arms and their evolutionary status as measured from their stellar content. We analyze a sample of 195 molecular clouds located at distances below 5.5 kpc, identified from the ATLASGAL 870 μm data. We define three evolutionary classes within this sample: starless clumps, star-forming clouds with associated young stellar objects, and clouds associated with H ii regions. We find that the N(H2) probability density functions (N-PDFs) of these three classes of objects are clearly different: the N-PDFs of starless clumps are narrowest and close to log-normal in shape, while star-forming clouds and H ii regions exhibit a power-law shape over a wide range of column densities and log-normal-like components only at low column densities. We use the N-PDFs to estimate the evolutionary time-scales of the three classes of objects based on a simple analytic model from the literature. Finally, we show that the integral of the N-PDFs, the dense gas mass fraction, depends on the total mass of the regions as measured by ATLASGAL: more massive clouds contain greater relative amounts of dense gas across all evolutionary classes. Appendices are available in electronic form at http://www.aanda.org

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Kevin J.; Parke, Stephen J.

    Quantum mechanical interactions between neutrinos and matter along the path of propagation, the Wolfenstein matter effect, are of particular importance for the upcoming long-baseline neutrino oscillation experiments, specifically the Deep Underground Neutrino Experiment (DUNE). Here, we explore specifically what about the matter density profile can be measured by DUNE, considering both the shape and normalization of the profile between the neutrinos' origin and detection. Additionally, we explore the capability of a perturbative method for calculating neutrino oscillation probabilities and whether this method is suitable for DUNE. We also briefly quantitatively explore the ability of DUNE to measure the Earth's matter density, and the impact of performing this measurement on measuring standard neutrino oscillation parameters.

  14. Two Universality Properties Associated with the Monkey Model of Zipf's Law

    NASA Astrophysics Data System (ADS)

    Perline, Richard; Perline, Ron

    2016-03-01

    The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power-law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0,1]; and (2) on a logarithmic scale the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
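
    A small simulation, under an assumed alphabet size and sample length, illustrating property (1): letter probabilities taken as spacings of a random division of the unit interval, monkey-typed text, and a fitted rank-frequency exponent that should land near -1:

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(2)

        # Letter probabilities from a random division of the unit interval
        # (spacings); one extra slot plays the role of the space bar.
        n_letters = 20
        cuts = np.sort(rng.uniform(size=n_letters))
        probs = np.diff(np.concatenate(([0.0], cuts, [1.0])))
        space = n_letters                      # index of the "space" key

        # Monkey typing: words are maximal runs between spaces.
        keys = rng.choice(n_letters + 1, size=2_000_000, p=probs)
        text = "".join(chr(97 + k) if k != space else " " for k in keys)
        freq = np.array(sorted(Counter(text.split()).values(),
                               reverse=True), float)

        # Zipf exponent: slope of log frequency vs. log rank, mid ranks.
        ranks = np.arange(1, freq.size + 1)
        sel = slice(10, min(2000, freq.size))
        slope = np.polyfit(np.log(ranks[sel]), np.log(freq[sel]), 1)[0]
        print("estimated Zipf exponent: %.2f (theory: -> -1)" % slope)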

  15. Extreme Mean and Its Applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.

    1979-01-01

    Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of the p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large sample distribution are derived. The distribution of this estimate, even for very large samples, is found to be nonnormal. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. The computer program used to obtain the density and distribution functions of the standardized unbiased estimate, and the confidence intervals of the extreme mean for any data, is included for ready application. An example is included to demonstrate the usefulness of the extreme mean in applications.
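
    As a sketch of the definition (interpreting the extreme mean as the mean of the upper tail beyond the p-th quantile; the paper's unbiased estimator and its sampling distribution are not reproduced here):

        from scipy.stats import norm

        def extreme_mean(p, mu=0.0, sigma=1.0):
            """Mean of the upper tail of a normal beyond its p-th quantile.

            For the standard normal this is phi(z_p) / (1 - p), the inverse
            Mills ratio evaluated at the truncation point."""
            z = norm.ppf(p)
            return mu + sigma * norm.pdf(z) / (1.0 - p)

        # Mean of the top 5% of a N(0, 1) population:
        print(extreme_mean(0.95))   # ~2.063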

  16. Forward modeling of gravity data using geostatistically generated subsurface density variations

    USGS Publications Warehouse

    Phelps, Geoffrey

    2016-01-01

    Using geostatistical models of density variations in the subsurface, constrained by geologic data, forward models of gravity anomalies can be generated by discretizing the subsurface and calculating the cumulative effect of each cell (pixel). The results of such stochastically generated forward gravity anomalies can be compared with the observed gravity anomalies to find density models that match the observed data. These models have an advantage over forward gravity anomalies generated using polygonal bodies of homogeneous density because generating numerous realizations explores a larger region of the solution space. The stochastic modeling can be thought of as dividing the forward model into two components: that due to the shape of each geologic unit and that due to the heterogeneous distribution of density within each geologic unit. The modeling demonstrates that the internally heterogeneous distribution of density within each geologic unit can contribute significantly to the resulting calculated forward gravity anomaly. Furthermore, the stochastic models match observed statistical properties of geologic units, the solution space is more broadly explored by producing a suite of successful models, and the likelihood of a particular conceptual geologic model can be compared. The Vaca Fault near Travis Air Force Base, California, can be successfully modeled as a normal or strike-slip fault, with the normal fault model being slightly more probable. It can also be modeled as a reverse fault, although this structural geologic configuration is highly unlikely given the realizations we explored.
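
    A toy version of the forward calculation, with each cell approximated by a point mass rather than the prism formulas a production code would use; the grid geometry, cell size, and the density realization are all assumptions:

        import numpy as np

        G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

        def forward_gravity(cells, rho, stations, cell_size=10.0):
            """Vertical gravity anomaly from a discretized subsurface.

            cells    : (n, 3) cell-center coordinates (m), z positive down
            rho      : (n,) density contrast of each cell (kg/m^3)
            stations : (m, 3) observation points at the surface"""
            vol = cell_size ** 3
            gz = np.empty(len(stations))
            for i, s in enumerate(stations):
                d = cells - s
                r3 = np.sum(d * d, axis=1) ** 1.5
                gz[i] = G * vol * np.sum(rho * d[:, 2] / r3)
            return gz   # m/s^2; multiply by 1e5 for mGal

        # One geostatistical realization of density contrasts (white noise
        # here, standing in for a geologically constrained simulation):
        rng = np.random.default_rng(3)
        ax = np.arange(5.0, 100.0, 10.0)          # cell centers every 10 m
        X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
        cells = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
        rho = rng.normal(0.0, 50.0, len(cells))
        stations = np.array([[50.0, 50.0, 0.0]])
        print(forward_gravity(cells, rho, stations) * 1e5, "mGal")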

  17. A spatially explicit model for an Allee effect: why wolves recolonize so slowly in Greater Yellowstone.

    PubMed

    Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A

    2006-11-01

    A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability in finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow recolonization rate.

  18. On the origin of heavy-tail statistics in equations of the Nonlinear Schrödinger type

    NASA Astrophysics Data System (ADS)

    Onorato, Miguel; Proment, Davide; El, Gennady; Randoux, Stephane; Suret, Pierre

    2016-09-01

    We study the formation of extreme events in incoherent systems described by the Nonlinear Schrödinger type of equations. We consider an exact identity that relates the evolution of the normalized fourth-order moment of the probability density function of the wave envelope to the rate of change of the width of the Fourier spectrum of the wave field. We show that, given an initial condition characterized by some distribution of the wave envelope, an increase of the spectral bandwidth in the focusing/defocusing regime leads to an increase/decrease of the probability of formation of rogue waves. Extensive numerical simulations in 1D+1 and 2D+1 are also performed to confirm the results.

  19. Pretest probability of a normal echocardiography: validation of a simple and practical algorithm for routine use.

    PubMed

    Hammoudi, Nadjib; Duprey, Matthieu; Régnier, Philippe; Achkar, Marc; Boubrit, Lila; Preud'homme, Gisèle; Healy-Brucker, Aude; Vignalou, Jean-Baptiste; Pousset, Françoise; Komajda, Michel; Isnard, Richard

    2014-02-01

    Management of increased referrals for transthoracic echocardiography (TTE) examinations is a challenge. Patients with normal TTE examinations take less time to explore than those with heart abnormalities. A reliable method for assessing the pretest probability of a normal TTE may optimize management of requests. To establish and validate, based on requests for examinations, a simple algorithm for defining the pretest probability of a normal TTE. In a retrospective phase, factors associated with normality were investigated and an algorithm was designed. In a prospective phase, patients were classified in accordance with the algorithm as being at high or low probability of having a normal TTE. In the retrospective phase, 42% of 618 examinations were normal. In multivariable analysis, age and absence of cardiac history were associated with normality. Low pretest probability of a normal TTE was defined by known cardiac history or, in case of doubt about cardiac history, by age >70 years. In the prospective phase, the prevalences of normality were 72% and 25% in the high (n=167) and low (n=241) pretest probability of normality groups, respectively. The mean duration of normal examinations was significantly shorter than that of abnormal examinations (13.8 ± 9.2 min vs 17.6 ± 11.1 min; P=0.0003). A simple algorithm can classify patients referred for TTE as being at high or low pretest probability of having a normal examination. This algorithm might help to optimize management of requests in routine practice. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
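
    The decision rule reduces to a few lines; this sketch follows the wording of the abstract, with the high/low labels and the uncertain-history branch as described:

        def pretest_probability_normal_tte(cardiac_history, age,
                                           history_uncertain=False):
            """Classify a TTE request as high or low pretest probability of
            a normal examination: low if there is a known cardiac history,
            or if age > 70 years when the history is in doubt; high
            otherwise (per the abstract's rule)."""
            if cardiac_history or (history_uncertain and age > 70):
                return "low"
            return "high"

        print(pretest_probability_normal_tte(False, 45))    # high
        print(pretest_probability_normal_tte(False, 75,
                                             history_uncertain=True))  # low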

  20. Of pacemakers and statistics: the actuarial method extended.

    PubMed

    Dussel, J; Wolbarst, A B; Scott-Millar, R N; Obel, I W

    1980-01-01

    Pacemakers cease functioning because of either natural battery exhaustion (nbe) or component failure (cf). A study of four series of pacemakers shows that a simple extension of the actuarial method, so as to incorporate Normal statistics, makes possible a quantitative differentiation between the two modes of failure. This involves the separation of the overall failure probability density function PDF(t) into constituent parts pdf_nbe(t) and pdf_cf(t). The approach should allow a meaningful comparison of the characteristics of different pacemaker types.
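
    A hedged sketch of the decomposition PDF(t) = pdf_nbe(t) + pdf_cf(t): the Normal form for battery exhaustion follows the abstract, while the exponential form for component failure, the mixture weight, and all parameter values are illustrative assumptions:

        import numpy as np
        from scipy.stats import norm, expon

        def failure_pdf(t, w_nbe=0.8, mu=60.0, sd=8.0, mean_cf=120.0):
            """Overall failure density split into its two constituent parts.

            Battery exhaustion: Normal(mu, sd) in months (per the paper's
            Normal statistics); component failure: constant-hazard
            exponential (an assumption, not the paper's fit)."""
            pdf_nbe = w_nbe * norm.pdf(t, mu, sd)
            pdf_cf = (1.0 - w_nbe) * expon.pdf(t, scale=mean_cf)
            return pdf_nbe + pdf_cf, pdf_nbe, pdf_cf

        t = np.array([12.0, 36.0, 60.0, 84.0])
        total, nbe, cf = failure_pdf(t)
        print(np.round(total, 5))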

  1. Bivariate normal, conditional and rectangular probabilities: A computer program with applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.; Ashworth, G. R.; Winter, W. R.

    1980-01-01

    Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given: routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
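
    The rectangular-region probability can be reproduced with modern libraries via inclusion-exclusion on the joint CDF; this is a sketch of the idea, not the original FORTRAN routines:

        import numpy as np
        from scipy.stats import multivariate_normal

        def rect_prob(a, b, mean, cov):
            """P(a1 < X < b1, a2 < Y < b2) for a bivariate normal, from the
            inclusion-exclusion identity on the joint CDF."""
            F = lambda x, y: multivariate_normal.cdf([x, y],
                                                     mean=mean, cov=cov)
            return (F(b[0], b[1]) - F(a[0], b[1])
                    - F(b[0], a[1]) + F(a[0], a[1]))

        mean = [0.0, 0.0]
        cov = [[1.0, 0.5], [0.5, 1.0]]   # unit variances, correlation 0.5
        print(rect_prob([-1.0, -1.0], [1.0, 1.0], mean, cov))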

  2. Generating log-normal mock catalog of galaxies in redshift space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agrawal, Aniket; Makiya, Ryu; Saito, Shun

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of the galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check the fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
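
    A stripped-down sketch of the first two steps (log-normal field, Poisson sampling); the Gaussian field here is white, whereas the actual code imposes a target power spectrum in Fourier space and also derives velocities from the linearised continuity equation:

        import numpy as np

        rng = np.random.default_rng(4)
        n, nbar, sigma_g = 64, 2.0, 0.6   # grid per side, mean galaxies/cell,
                                          # std of the underlying Gaussian field

        # Gaussian random field G (white noise here, as an assumption).
        G = sigma_g * rng.standard_normal((n, n, n))

        # Log-normal density contrast: delta = exp(G - sigma^2/2) - 1 >= -1,
        # so the mean of 1 + delta is unity by construction.
        delta = np.exp(G - 0.5 * sigma_g**2) - 1.0

        # Poisson-sample galaxies cell by cell.
        counts = rng.poisson(nbar * (1.0 + delta))
        print("mean galaxies/cell:", counts.mean(), " max:", counts.max())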

  3. Force Density Function Relationships in 2-D Granular Media

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.

    2004-01-01

    An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
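
    The quoted correspondence (exponential force-magnitude distribution mapping to a modified Bessel function of the second kind for the Cartesian density) can be checked numerically; assuming a unit-mean exponential magnitude and isotropic directions in 2-D, the Cartesian density is K_0(|f_x|)/π:

        import numpy as np
        from scipy.special import k0

        rng = np.random.default_rng(5)

        # 2-D contact forces: exponential magnitudes, isotropic directions.
        f = rng.exponential(1.0, 500_000)
        theta = rng.uniform(0.0, 2.0 * np.pi, f.size)
        fx = f * np.cos(theta)

        # Monte Carlo estimate of the Cartesian density vs. the Bessel form
        # P(fx) = K0(|fx|) / pi implied by the integral transform.
        hist, edges = np.histogram(fx, bins=np.linspace(0.2, 4.0, 20),
                                   density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        print(np.max(np.abs(hist - k0(np.abs(centers)) / np.pi)))  # small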

  4. Estimating Isometric Tension of Finger Muscle Using Needle EMG Signals and the Twitch Contraction Model

    NASA Astrophysics Data System (ADS)

    Tachibana, Hideyuki; Suzuki, Takafumi; Mabuchi, Kunihiko

    We address an estimation method of the isometric muscle tension of fingers, as fundamental research for a neural signal-based prosthesis of fingers. We utilize needle electromyogram (EMG) signals, which carry approximately equivalent information to peripheral neural signals. The estimating algorithm comprises two convolution operations. The first convolution is between a normal distribution and a spike array, which is detected from the needle EMG signals. This convolution estimates the probability density of spike-invoking time in the muscle; here we hypothesize that each motor unit in a muscle generates spikes independently according to the same probability density function. The second convolution is between the result of the first convolution and the isometric twitch, viz., the impulse response of the motor unit. The result of the calculation is the sum of the estimated tensions of all muscle fibers, i.e., the muscle tension. We confirmed a good correlation between the estimated and actual muscle tension, with correlation coefficients >0.9 in 59% and >0.8 in 89% of all trials.
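
    A schematic of the two convolutions in Python, with a made-up spike train, kernel width, and twitch shape (all hypothetical; amplitudes are in arbitrary units):

        import numpy as np

        fs = 10_000                    # sampling rate in Hz (assumed)
        t = np.arange(0, 0.5, 1.0 / fs)

        # Hypothetical spike array detected from the needle EMG signal.
        spikes = np.zeros_like(t)
        spike_times = np.array([0.05, 0.12, 0.18, 0.22, 0.31])   # seconds
        spikes[(spike_times * fs).astype(int)] = 1.0

        # First convolution: a normal kernel turns the spike array into an
        # estimate (up to scale) of the spike-invoking time density, under
        # the hypothesis that all motor units share one density function.
        sigma = 0.01                   # assumed 10 ms kernel width
        k = np.arange(-4 * sigma, 4 * sigma, 1.0 / fs)
        gauss = np.exp(-0.5 * (k / sigma) ** 2)
        rate = np.convolve(spikes, gauss, mode="same")

        # Second convolution: the isometric twitch, i.e. the impulse
        # response of a motor unit, modeled as a critically damped pulse.
        tau = 0.04                     # assumed twitch time-to-peak (s)
        twitch = (t / tau) * np.exp(1.0 - t / tau)
        tension = np.convolve(rate, twitch)[: t.size]

        print("peak estimated tension (arbitrary units): %.1f"
              % tension.max())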

  5. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).

  6. Measurements of scalar released from point sources in a turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Talluru, K. M.; Hernandez-Silva, C.; Philip, J.; Chauhan, K. A.

    2017-04-01

    Measurements of velocity and concentration fluctuations for a horizontal plume released at several wall-normal locations in a turbulent boundary layer (TBL) are discussed in this paper. The primary objective of this study is to establish a systematic procedure to acquire accurate single-point concentration measurements for a substantially long time so as to obtain converged statistics of the long tails of the probability density functions of concentration. Details of the calibration procedure implemented for long measurements are presented, which include sensor drift compensation to eliminate the increase in average background concentration with time. While most previous studies reported measurements where the source height is limited to s_z/δ ≤ 0.2, where s_z is the wall-normal source height and δ is the boundary layer thickness, here results of concentration fluctuations when the plume is released in the outer layer are emphasised. Results of mean and root-mean-square (r.m.s.) profiles of concentration for elevated sources agree with the well-accepted reflected Gaussian model (Fackrell and Robins 1982, J. Fluid Mech. 117). However, there is clear deviation from the reflected Gaussian model for a source in the intermittent region of the TBL, particularly at locations higher than the source itself. Further, we find that the plume half-widths are different for the mean and r.m.s. concentration profiles. Long sampling times enabled us to calculate converged probability density functions at high concentrations, and these are found to exhibit an exponential distribution.

  7. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
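
    The prediction step of such a model is a plain logistic function of the chosen covariates; the coefficients below are invented for illustration, not the fitted values of the paper:

        import numpy as np

        def p_regeneration(density, b):
            """Probability of obtaining at least `density` trees/ha of
            shortleaf pine regeneration under a logistic model; `b` are
            fitted coefficients (hypothetical values below). In fitting,
            the dependent variable is 1 when the observed regeneration
            density meets the specified level, else 0."""
            x = np.array([1.0, density / 1000.0])   # intercept + scaled density
            return 1.0 / (1.0 + np.exp(-x @ b))

        b = np.array([2.0, -1.5])                   # illustrative coefficients
        for d in (250, 500, 1000, 2000):
            print(d, "trees/ha ->", round(p_regeneration(d, b), 3))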

  8. Application of the coral health chart to determine bleaching status of Acropora downingi in a subtropical coral reef

    NASA Astrophysics Data System (ADS)

    Oladi, Mahshid; Shokri, Mohammad Reza; Rajabi-Maham, Hassan

    2017-06-01

    The 'Coral Health Chart' has become a popular tool for monitoring coral bleaching worldwide. The scleractinian coral Acropora downingi (Wallace 1999) is highly vulnerable to temperature anomalies in the Persian Gulf. Our study tested the reliability of Coral Health Chart scores for the assessment of bleaching-related changes in the mitotic index (MI) and density of zooxanthellae cells in A. downingi in Qeshm Island, the Persian Gulf. The results revealed that, at least under severe conditions, it can be used as an effective proxy for detecting changes in the density of normal, transparent, or degraded zooxanthellae and MI. However, its ability to discern changes in pigment concentration and total zooxanthellae density should be viewed with some caution in the Gulf region, probably because the high levels of environmental variability in this region result in inherent variations in the characteristics of zooxanthellae among "healthy" looking corals.

  9. Preantral follicle density in ovarian biopsy fragments and effects of mare age.

    PubMed

    Alves, K A; Alves, B G; Gastal, G D A; Haag, K T; Gastal, M O; Figueiredo, J R; Gambarini, M L; Gastal, E L

    2017-04-01

    The aims of the present study were to: (1) evaluate preantral follicle density in ovarian biopsy fragments within and among mares; (2) assess the effects of mare age on the density and quality of preantral follicles; and (3) determine the minimum number of ovarian fragments and histological sections needed to estimate equine follicle density using a mathematical model. The ovarian biopsy pick-up method was used in three groups of mares separated according to age (5-6, 7-10 and 11-16 years). Overall, 336 preantral follicles were recorded, with a mean follicle density of 3.7 follicles per cm^2. Follicle density differed (P<0.05) among animals, ovarian fragments from the same animal, histological sections and age groups. More (P<0.05) normal follicles were observed in the 5-6 years (97%) than the 11-16 years (84%) age group. Monte Carlo simulations showed a higher probability (90%; P<0.05) of detecting follicle density using two experimental designs with 65 histological sections and three to four ovarian fragments. In summary, equine follicle density differed among animals and within ovarian fragments from the same animal, and follicle density and morphology were negatively affected by aging. Moreover, three to four ovarian fragments with 65 histological sections were required to accurately estimate follicle density in equine ovarian biopsy fragments.

  10. Series approximation to probability densities

    NASA Astrophysics Data System (ADS)

    Cohen, L.

    2018-04-01

    One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
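
    The positivity failure is easy to exhibit: truncating the Gram-Charlier A series after the skewness and kurtosis corrections can push the "density" below zero, as this small check (with assumed moment values) shows:

        import numpy as np

        def gram_charlier(x, skew, exkurt):
            """Gram-Charlier A series truncated after the fourth cumulant:
            a Gaussian "corrected" by Hermite terms for skewness and excess
            kurtosis. Truncation can drive the density negative."""
            phi = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
            he3 = x**3 - 3.0 * x
            he4 = x**4 - 6.0 * x**2 + 3.0
            return phi * (1.0 + skew / 6.0 * he3 + exkurt / 24.0 * he4)

        x = np.linspace(-5, 5, 2001)
        f = gram_charlier(x, skew=1.2, exkurt=0.5)
        print("min of truncated series: %.4f at x = %.2f"
              % (f.min(), x[f.argmin()]))
        # A negative minimum demonstrates the failure the paper addresses.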

  11. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.

  12. Polyad breaking phenomenon associated with a local-to-normal mode transition and suitability to estimate force constants

    NASA Astrophysics Data System (ADS)

    Bermúdez-Montaña, M.; Lemus, R.; Castaños, O.

    2017-12-01

    In a system of two interacting harmonic oscillators a local-to-normal mode transition is manifested as a polyad breaking phenomenon. This phenomenon is associated with the suitability to estimate zeroth-order force constants in the framework of a local mode description. This transition is also exhibited in two interacting Morse oscillators. To study this case, an appropriate parameterisation going from a molecule with local mode behaviour (H2O) to a molecule presenting a normal mode behaviour (CO2) is introduced. Concepts from quantum mechanics like fidelity, entropy and probability density, as well from nonlinear classical mechanics like Poincaré sections are used to detect the transition region. It is found that fidelity and entropy are sensitive complementary properties to detect the local-to-normal transition. Poincaré sections allow the local-to-normal transition to be detected through the appearance of chaos as a consequence of the polyad breaking phenomenon. In addition, two kinds of avoided energy crossings are identified in accordance with the different regions of the spectrum.

  13. Statistical Characteristics of the Gaussian-Noise Spikes Exceeding the Specified Threshold as Applied to Discharges in a Thundercloud

    NASA Astrophysics Data System (ADS)

    Klimenko, V. V.

    2017-12-01

    We obtain expressions for the probabilities of the normal-noise spikes with the Gaussian correlation function and for the probability density of the inter-spike intervals. As distinct from the delta-correlated noise, in which the intervals are distributed by the exponential law, the probability of the subsequent spike depends on the previous spike and the interval-distribution law deviates from the exponential one for a finite noise-correlation time (frequency-bandwidth restriction). This deviation is the most pronounced for a low detection threshold. Similarity of the behaviors of the distributions of the inter-discharge intervals in a thundercloud and the noise spikes for the varying repetition rate of the discharges/spikes, which is determined by the ratio of the detection threshold to the root-mean-square value of noise, is observed. The results of this work can be useful for the quantitative description of the statistical characteristics of the noise spikes and studying the role of fluctuations for the discharge emergence in a thundercloud.

  14. Multiwavelength Studies of Rotating Radio Transients

    NASA Astrophysics Data System (ADS)

    Miller, Joshua J.

    Seven years ago, a new class of pulsars called the Rotating Radio Transients (RRATs) was discovered with the Parkes radio telescope in Australia (McLaughlin et al., 2006). These neutron stars are characterized by strong radio bursts at repeatable dispersion measures, but are not detectable using standard periodicity-search algorithms. We now know of roughly 100 of these objects, discovered in new surveys and in re-analysis of archival survey data. They generally have longer periods than those of the normal pulsar population, and several have high magnetic fields, similar to those of other neutron star populations like the X-ray-bright magnetars. However, some of the RRATs have spin-down properties very similar to those of normal pulsars, making it difficult to determine the cause of their unusual emission and possible evolutionary relationships between them and other classes of neutron stars. We have calculated single-pulse flux densities for eight RRAT sources observed using the Parkes radio telescope. Like normal pulsars, the pulse amplitude distributions are well described by log-normal probability distribution functions, though two show evidence for an additional power-law tail. Spectral indices are calculated for the seven RRATs which were detected at multiple frequencies. These RRATs have a mean spectral index of -3.2(7), or -3.1(1) when using mean flux densities derived from fitting log-normal probability distribution functions to the pulse amplitude distributions, suggesting that the RRATs have steeper spectra than normal pulsars. When only considering the three RRATs for which we have a wide range of observing frequencies, however, these two means become -1.7(1) and -2.0(1), respectively, and are roughly consistent with those measured for normal pulsars. In all cases, these spectral indices exclude magnetar-like flat spectra. For PSR J1819-1458, the RRAT with the highest bursting rate, pulses were detected at 685 and 3029 MHz in simultaneous observations and have a spectral index consistent with our other analysis. We also present the results of simultaneous radio and X-ray observations of PSR J1819-1458. Our 94-ks XMM-Newton observation of this high-magnetic-field (~5×10^9 T) pulsar reveals a blackbody spectrum (kT ~ 130 eV) with a broad absorption feature, possibly composed of two lines at ~1.0 and ~1.3 keV. We performed a correlation analysis of the X-ray photons with radio pulses detected in 16.2 hours of simultaneous observations at 1-2 GHz with the Green Bank, Effelsberg, and Parkes telescopes. Both the detected X-ray photons and radio pulses appear to be randomly distributed in time. We find tentative evidence for a correlation between the detected radio pulses and X-ray photons on timescales of less than 10 pulsar spin periods, with the probability of this occurring by chance being 0.46%. This suggests that the physical process producing the radio pulses may also heat the polar cap.

  15. Multiscale Characterization of the Probability Density Functions of Velocity and Temperature Increment Fields

    NASA Astrophysics Data System (ADS)

    DeMarco, Adam Ward

    The turbulent motions within the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, Δu and ΔT, respectively, this work investigates their multiscale behavior to uncover unique traits that have yet to be thoroughly studied. Utilizing diverse datasets, including idealized wind tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small- and larger-scale components of the increment probability density functions. This comprehensive analysis also utilizes a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increments' probability density functions (pdfs) across multiscale atmospheric motions. An approach is proposed for estimating these pdfs utilizing the maximum likelihood estimation (MLE) technique, which has not previously been applied to atmospheric data of this kind. Using this technique, we reveal the ability to estimate higher-order moments accurately with a limited sample size, which has been a persistent concern for atmospheric turbulence research. With robust goodness-of-fit (GoF) metrics, we quantitatively assess the accuracy of the candidate distributions across the diverse datasets. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate for estimating the increment pdfs. Therefore, using the NIG model and its parameters, we display the variations in the increments over a range of scales, revealing some unique scale-dependent qualities under various stability and flow conditions. This novel approach can characterize increment fields with the sole use of four pdf parameters. Also, we investigate the capability of current state-of-the-art mesoscale atmospheric models to predict these features and highlight the potential for use in future model development. With the knowledge gained in this study, a number of applications can benefit from our methodology, including the wind energy and optical wave propagation fields.
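
    As a sketch of the MLE step using the NIG family, with surrogate data standing in for measured increments (SciPy's generic fit is used here; the dissertation's datasets and any tailored estimation details are not reproduced):

        import numpy as np
        from scipy import stats

        # Surrogate velocity-increment sample: heavy-tailed, mildly skewed
        # (a NIG draw stands in for measured Δu at one separation scale).
        data = stats.norminvgauss.rvs(a=1.0, b=0.3, loc=0.0, scale=0.5,
                                      size=20_000, random_state=42)

        # Maximum likelihood fit of the four NIG parameters.
        a, b, loc, scale = stats.norminvgauss.fit(data)
        print("fitted NIG parameters:", np.round([a, b, loc, scale], 3))

        # Higher-order moments from the fitted model rather than raw sample
        # moments, the stability argument made in the dissertation.
        print("model skew/kurtosis:",
              stats.norminvgauss.stats(a, b, loc, scale, moments="sk"))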

  16. Estimation of Characteristics of Echo Envelope Using RF Echo Signal from the Liver

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Tadashi; Hachiya, Hiroyuki; Kamiyama, Naohisa; Ikeda, Kazuki; Moriyasu, Norifumi

    2001-05-01

    To realize quantitative diagnosis of liver cirrhosis, we have been analyzing the probability density function (PDF) of echo amplitude using B-mode images. However, the B-mode image is affected by the various signal and image processing techniques used in the diagnosis equipment, so a detailed and quantitative analysis is very difficult. In this paper, we analyze the PDF of echo amplitude using RF echo signal and B-mode images of normal and cirrhotic livers, and compare both results to examine the validity of the RF echo signal.

  17. Giant increase of critical current density and vortex pinning in Mn doped K_xFe_{2-y}Se_2 single crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Mingtao; Zhang, Jincang, E-mail: jczhang@staff.shu.edu.cn; Materials Genome Institute, Shanghai University, Shanghai 200444

    2014-11-10

    We report a comparative study of the critical current density (J_c) and vortex pinning among pure and Mn doped K_xFe_{2-y}Se_2 single crystals. It is found that the J_c values can be greatly improved by Mn doping and post-quenching treatment when compared to the pristine pure sample. In contrast to pure samples, an anomalous second magnetization peak (SMP) effect is observed in both 1% and 2% Mn doped samples at T = 3 K for H∥ab but not for H∥c. Referring to the Dew-Hughes and Kramer models, we performed scaling analyses of the vortex pinning force density vs magnetic field in 1% Mn doped and quenched pristine crystals. The results show that normal point defects are the dominant pinning sources, which probably originate from variations of the intercalated K atoms. We propose that large nonsuperconducting K-Mn-Se inclusions may contribute to partial normal surface pinning and give rise to the anomalous SMP effect for H∥ab in Mn doped crystals. These results may facilitate further understanding of the superconductivity and vortex pinning in intercalated iron-selenide superconductors.

  18. Understanding star formation in molecular clouds. II. Signatures of gravitational collapse of IRDCs

    NASA Astrophysics Data System (ADS)

    Schneider, N.; Csengeri, T.; Klessen, R. S.; Tremblin, P.; Ossenkopf, V.; Peretto, N.; Simon, R.; Bontemps, S.; Federrath, C.

    2015-06-01

    We analyse column density and temperature maps derived from Herschel dust continuum observations of a sample of prominent, massive infrared dark clouds (IRDCs), i.e. G11.11-0.12, G18.82-0.28, G28.37+0.07, and G28.53-0.25. We disentangle the velocity structure of the clouds using 13CO 1→0 and 12CO 3→2 data, showing that these IRDCs are the densest regions in massive giant molecular clouds (GMCs) and not isolated features. The probability distribution functions (PDFs) of column densities for all clouds have a power-law distribution over all (high) column densities, regardless of the evolutionary stage of the cloud: G11.11-0.12, G18.82-0.28, and G28.37+0.07 contain (proto)-stars, while G28.53-0.25 shows no signs of star formation. This is in contrast to the purely log-normal PDFs reported for near- and/or mid-IR extinction maps. We only find a log-normal distribution for lower column densities if we construct PDFs of the column density maps of the whole GMC in which the IRDCs are embedded. By comparing the PDF slope and the radial column density profile of three of our clouds, we attribute the power law to the effect of large-scale gravitational collapse and to local free-fall collapse of pre- and protostellar cores for the highest column densities. A significant impact on the cloud properties from radiative feedback is unlikely because the clouds are mostly devoid of star formation. Independent of the PDF analysis, we find infall signatures in the spectral profiles of 12CO for G28.37+0.07 and G11.11-0.12, supporting the scenario of gravitational collapse. Our results are in line with earlier interpretations that see massive IRDCs as the densest regions within GMCs, which may be the progenitors of massive stars or clusters. At least some of the IRDCs are probably the same features as ridges (high column density regions with N > 10^23 cm^-2 over small areas), which were defined for nearby IR-bright GMCs. Because IRDCs are only confined to the densest (gravity dominated) cloud regions, the PDF constructed from this kind of clipped image does not represent the (turbulence dominated) low column density regime of the cloud. The column density maps (FITS files) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/578/A29

  19. Ocular Effects of Exposure to 40, 75, and 95 GHz Millimeter Waves

    NASA Astrophysics Data System (ADS)

    Kojima, Masami; Suzuki, Yukihisa; Sasaki, Kensuke; Taki, Masao; Wake, Kanako; Watanabe, Soichi; Mizuno, Maya; Tasaki, Takafumi; Sasaki, Hiroshi

    2018-05-01

    The objective of this study was to develop a model of ocular damage induced by 40, 75, and 95 GHz continuous millimeter waves (MMW), thereby allowing assessment of the clinical course of ocular damage resulting from exposure to thermal damage-inducing MMW. This study also examined the dependence of ocular damage on incident power density. Pigmented rabbit eyes were exposed to 40, 75, and 95 GHz MMW from a spot-focus-type lens antenna. Slight ocular damage was observed 10 min after MMW exposure, including reduced cornea thickness and reduced transparency. Diffuse fluorescein staining around the pupillary area indicated corneal epithelial injury. Slit-lamp examination 1 day after MMW exposure revealed a round area of opacity, accompanied by fluorescence staining, in the central pupillary zone. Corneal edema, indicative of corneal stromal damage, peaked 1 day after MMW exposure, with thickness gradually subsiding to normal. Three days after exposure, ocular conditions had almost normalized, though corneal thickness was slightly greater than that before exposure. The 50% probability of ocular damage (DD50) was in the order 40 > 95 ≈ 75 GHz at the same incident power densities.

  20. Electrochemical oxidation of ampicillin antibiotic at boron-doped diamond electrodes and process optimization using response surface methodology.

    PubMed

    Körbahti, Bahadır K; Taşyürek, Selin

    2015-03-01

    Electrochemical oxidation and process optimization of the ampicillin antibiotic at boron-doped diamond (BDD) electrodes were investigated in a batch electrochemical reactor. The influence of operating parameters, such as ampicillin concentration, electrolyte concentration, current density, and reaction temperature, on ampicillin removal, COD removal, and energy consumption was analyzed in order to optimize the electrochemical oxidation process under specified cost-driven constraints using response surface methodology. Quadratic models for the responses satisfied the assumptions of the analysis of variance well according to normal probability, studentized residual, and outlier t residual plots. Residual plots followed a normal distribution, and outlier t values indicated that the approximations of the fitted models to the quadratic response surfaces were very good. Optimum operating conditions were determined as 618 mg/L ampicillin concentration, 3.6 g/L electrolyte concentration, 13.4 mA/cm^2 current density, and 36 °C reaction temperature. Under response-surface-optimized conditions, ampicillin removal, COD removal, and energy consumption were obtained as 97.1%, 92.5%, and 71.7 kWh/kg CODr, respectively.

  1. The precise time course of lexical activation: MEG measurements of the effects of frequency, probability, and density in lexical decision.

    PubMed

    Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec

    2004-01-01

    Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.

  2. Estimating loblolly pine size-density trajectories across a range of planting densities

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2013-01-01

    Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...

  3. The shapes of column density PDFs. The importance of the last closed contour

    NASA Astrophysics Data System (ADS)

    Alves, João; Lombardi, Marco; Lada, Charles J.

    2017-10-01

    The probability distribution function of column density (PDF) has become the tool of choice for cloud structure analysis and star formation studies. Its simplicity is attractive, and the PDF could offer access to cloud physical parameters otherwise difficult to measure, but there has been some confusion in the literature on the definition of its completeness limit and shape at the low column density end. In this letter we use the natural definition of the completeness limit of a column density PDF, the last closed column density contour inside a surveyed region, and apply it to a set of large-scale maps of nearby molecular clouds. We conclude that there is no observational evidence for log-normal PDFs in these objects. We find that all studied molecular clouds have PDFs well described by power laws, including the diffuse cloud Polaris. Our results call for a new physical interpretation of the shape of the column density PDFs. We find that the slope of a cloud PDF is invariant to distance but not to the spatial arrangement of cloud material, and as such it is still a useful tool for investigating cloud structure.

  4. The Effect of Incremental Changes in Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…

  5. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
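
    The L1-median itself can be computed with Weiszfeld's fixed-point iteration; this sketch shows only the robust location step, not the full PDF estimator of the paper:

        import numpy as np

        def l1_median(X, tol=1e-8, max_iter=500):
            """L1-median (geometric median) via Weiszfeld's algorithm: the
            point minimizing the sum of Euclidean distances to the samples,
            used as a robust substitute for the sample mean."""
            y = X.mean(axis=0)
            for _ in range(max_iter):
                d = np.linalg.norm(X - y, axis=1)
                d = np.where(d < 1e-12, 1e-12, d)   # guard zero distances
                w = 1.0 / d
                y_new = (w[:, None] * X).sum(axis=0) / w.sum()
                if np.linalg.norm(y_new - y) < tol:
                    break
                y = y_new
            return y

        rng = np.random.default_rng(7)
        X = rng.normal(0.0, 1.0, (200, 2))
        X[:20] += 8.0                               # 10% outliers
        print("sample mean:", X.mean(axis=0), " L1-median:", l1_median(X))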

  6. 2MASS wide-field extinction maps. V. Corona Australis

    NASA Astrophysics Data System (ADS)

    Alves, João; Lombardi, Marco; Lada, Charles J.

    2014-05-01

    We present a near-infrared extinction map of a large region (~870 deg^2) covering the isolated Corona Australis complex of molecular clouds. We reach a 1-σ error of 0.02 mag in the K-band extinction with a resolution of 3 arcmin over the entire map. We find that the Corona Australis cloud is about three times as large as revealed by previous CO and dust emission surveys. The cloud consists of a 45 pc long complex of filamentary structure from the well-known star-forming Western end (the head, N ≥ 10^23 cm^-2) to the diffuse Eastern end (the tail, N ≤ 10^21 cm^-2). Remarkably, about two thirds of the complex, both in size and mass, lie beneath A_V ~ 1 mag. We find that the probability density function (PDF) of the cloud cannot be described by a single log-normal function. Similar to prior studies, we found a significant excess at high column densities, but a log-normal + power-law tail fit does not work well at low column densities. We show that at low column densities near the peak of the observed PDF, both the amplitude and shape of the PDF are dominated by noise in the extinction measurements, making it impractical to derive the intrinsic cloud PDF below A_K < 0.15 mag. Above A_K ~ 0.15 mag, essentially the molecular component of the cloud, the PDF appears to be best described by a power law with index -3, but it could also be described as the tail of a broad and relatively low amplitude log-normal PDF that peaks at very low column densities. FITS files of the extinction maps are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/565/A18

  7. SU-G-BRC-08: Evaluation of Dose Mass Histogram as a More Representative Dose Description Method Than Dose Volume Histogram in Lung Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, J; Eldib, A; Ma, C

    2016-06-15

    Purpose: Dose-volume-histogram (DVH) is widely used for plan evaluation in radiation treatment. The concept of dose-mass-histogram (DMH) is expected to provide a more representative description as it accounts for heterogeneity in tissue density. This study is intended to assess the difference between DVH and DMH for evaluating treatment planning quality. Methods: 12 lung cancer treatment plans were exported from the treatment planning system. DVHs for the planning target volume (PTV), the normal lung and other structures of interest were calculated. DMHs were calculated in a similar way as DVHs except that the voxel density converted from the CT number was used in tallying the dose histogram bins. The equivalent uniform dose (EUD) was calculated based on voxel volume and mass, respectively. The normal tissue complication probability (NTCP) in relation to the EUD was calculated for the normal lung to provide a quantitative comparison of DVHs and DMHs for evaluating the radiobiological effect. Results: Large differences were observed between DVHs and DMHs for lungs and PTVs. For PTVs with dense tumor cores, DMHs are higher than DVHs due to larger mass weighting in the high-dose conformal core regions. For the normal lungs, DMHs can either be higher or lower than DVHs depending on the target location within the lung. When the target is close to the lower lung, DMHs show higher values than DVHs because the lower lung has higher density than the central portion or the upper lung. DMHs are lower than DVHs for targets in the upper lung. The calculated NTCPs showed a large range of difference between DVHs and DMHs. Conclusion: The heterogeneity of the lung can be well considered using DMH for evaluating target coverage and normal lung pneumonitis. Further studies are warranted to quantify the benefits of DMH over DVH for plan quality evaluation.
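
    The two histograms differ only in the per-voxel weight: volume for the DVH, density times volume (i.e. mass) for the DMH. A toy cumulative version, with invented dose, density, and voxel size:

        import numpy as np

        def cumulative_histogram(dose, weight, bins):
            """Fraction of total weight receiving at least each dose level;
            result[i] is the fraction with dose >= bins[i]."""
            idx = np.digitize(dose.ravel(), bins)
            w = np.bincount(idx, weights=weight.ravel(),
                            minlength=bins.size + 1)
            return np.cumsum(w[::-1])[::-1][1:] / weight.sum()

        rng = np.random.default_rng(8)
        dose = rng.uniform(0.0, 70.0, (40, 40, 40))    # Gy, toy dose grid
        density = rng.uniform(0.2, 1.1, dose.shape)    # g/cm^3 from CT numbers
        voxel_vol = 0.12 ** 3                          # cm^3, assumed voxel

        bins = np.linspace(0.0, 70.0, 71)
        dvh = cumulative_histogram(dose, np.full(dose.shape, voxel_vol), bins)
        dmh = cumulative_histogram(dose, density * voxel_vol, bins)  # mass
        print("V20 (volume) = %.3f   M20 (mass) = %.3f" % (dvh[20], dmh[20]))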

  8. Effects of a mixture of chloromethylisothiazolinone and methylisothiazolinone on peripheral airway dysfunction in children

    PubMed Central

    Cho, Hyun-Ju; Park, Dong-Uk; Yoon, Jisun; Lee, Eun; Yang, Song-I; Kim, Young-Ho; Lee, So-Yeon

    2017-01-01

    Background Children who were only exposed to a mixture of chloromethylisothiazolinone (CMIT) and methylisothiazolinone (MIT) as humidifier disinfectant (HD) components were evaluated for humidifier disinfectant-associated lung injury (HDLI) from 2012. This study aimed to evaluate pulmonary function, using impulse oscillometry (IOS), in children exposed to a mixture of CMIT/MIT from HD. Methods Twenty-four children who were only exposed to a mixture of CMIT/MIT, with no previous underlying disease, were assessed by IOS. Diagnostic criteria for HDLI were categorized as definite, probable, possible, or unlikely. Home visits and administration of a standardized questionnaire were arranged to assess exposure characteristics. Results Definite and probable cases showed higher airborne disinfectant exposure intensity during sleep (32.4 ± 8.7 μg/m^3) and younger age at initial exposure (3.5 ± 3.3 months) compared with unlikely cases (17.3 ± 11.0 μg/m^3, p = 0.026; 22.5 ± 26.2 months, p = 0.039, respectively). Reactance at 5 Hz was significantly more negative in those with high-density exposure during sleep (mean, -0.463 kPa/L/s vs. low density, -0.296, p = 0.001). The reactance area was also higher with high-density exposure during sleep (mean, 3.240 kPa/L vs. low density, 1.922, p = 0.039). The mean bronchodilator response with high-density exposure was within the normal range for reactance. Conclusions Significant peripheral airway dysfunction was found in children with high levels of inhalation exposure to a mixture of CMIT/MIT during sleep. Strict regulation of exposure to a mixture of CMIT/MIT was associated with positive effects on the lung function of children. PMID:28453578

  9. Normal probability plots with confidence.

    PubMed

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
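
    A sketch of the plot's ingredients: order statistics against normal plotting positions, with an interval per point. The bands below are pointwise, derived from the Beta(i, n-i+1) law of uniform order statistics; the paper's simultaneous 1-α intervals are wider and would need calibration, e.g. by simulation:

        import numpy as np
        from scipy import stats

        def normal_plot_bands(x, alpha=0.05):
            """Normal probability plot coordinates plus an interval per
            point (pointwise bands; simultaneous bands would be wider)."""
            xs = np.sort(np.asarray(x))
            n = xs.size
            i = np.arange(1, n + 1)
            theo = stats.norm.ppf((i - 0.375) / (n + 0.25))  # plot positions
            lo = stats.norm.ppf(stats.beta.ppf(alpha / 2, i, n - i + 1))
            hi = stats.norm.ppf(stats.beta.ppf(1 - alpha / 2, i, n - i + 1))
            z = (xs - xs.mean()) / xs.std(ddof=1)   # standardized sample
            return theo, z, lo, hi, bool(np.all((z >= lo) & (z <= hi)))

        rng = np.random.default_rng(9)
        *_, all_inside = normal_plot_bands(rng.normal(size=100))
        print("all points inside the pointwise bands:", all_inside)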

  10. A wave function for stock market returns

    NASA Astrophysics Data System (ADS)

    Ataullah, Ali; Davidson, Ian; Tippett, Mark

    2009-02-01

    The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well but where there is a non-trivial probability of the particle tunneling into the well’s retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and how the wave function it provides leads to a probability density which exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent - in contrast to many of the probability density functions on which the received theory of finance is based.

  11. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions

    PubMed Central

    Storkel, Holly L.; Lee, Jaehoon; Cox, Casey

    2016-01-01

    Purpose Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Method Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. Results The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. Conclusions As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise. PMID:27788276

  12. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions.

    PubMed

    Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey

    2016-11-01

    Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.

  13. On the probability distribution function of the mass surface density of molecular clouds. II.

    NASA Astrophysics Data System (ADS)

    Fischera, Jörg

    2014-11-01

    The probability distribution function (PDF) of the mass surface density of molecular clouds provides essential information about the structure of molecular cloud gas and condensed structures out of which stars may form. In general, the PDF shows two basic components: a broad distribution around the maximum with resemblance to a log-normal function, and a tail at high mass surface densities attributed to turbulence and self-gravity. In a previous paper, the PDF of condensed structures has been analyzed and an analytical formula presented based on a truncated radial density profile, ρ(r) = ρ_c/(1 + (r/r_0)^2)^(n/2), with central density ρ_c and inner radius r_0, widely used in astrophysics as a generalization of physical density profiles. In this paper, the results are applied to analyze the PDF of self-gravitating, isothermal, pressurized, spherical (Bonnor-Ebert spheres) and cylindrical condensed structures with emphasis on the dependence of the PDF on the external pressure p_ext and on the overpressure q^-1 = p_c/p_ext, where p_c is the central pressure. Apart from individual clouds, we also consider ensembles of spheres or cylinders, where effects caused by a variation of pressure ratio, a distribution of condensed cores within a turbulent gas, and (in case of cylinders) a distribution of inclination angles on the mean PDF are analyzed. The probability distribution of pressure ratios q^-1 is assumed to be given by P(q^-1) ∝ q^-k_1/(1 + (q_0/q)^γ)^((k_1 + k_2)/γ), where k_1, γ, k_2, and q_0 are fixed parameters. The PDF of individual spheres with overpressures below ~100 is well represented by the PDF of a sphere with an analytical density profile with n = 3. At higher pressure ratios, the PDF at mass surface densities Σ ≪ Σ(0), where Σ(0) is the central mass surface density, asymptotically approaches the PDF of a sphere with n = 2. Consequently, the power-law asymptote at mass surface densities above the peak steepens from P_sph(Σ) ∝ Σ^-2 to P_sph(Σ) ∝ Σ^-3. The corresponding asymptote of the PDF of cylinders for large q^-1 is approximately given by P_cyl(Σ) ∝ Σ^(-4/3) (1 - (Σ/Σ(0))^(2/3))^(-1/2). The distribution of overpressures q^-1 produces a power-law asymptote at high mass surface densities given by ∝ Σ^(-2k_2 - 1) (spheres) or ∝ Σ^(-2k_2) (cylinders). Appendices are available in electronic form at http://www.aanda.org
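    For readers who want to reproduce the basic ingredient numerically, the Python sketch below (function names are illustrative) evaluates the truncated radial profile quoted above and integrates it along a line of sight to obtain the mass surface density Σ(b) at impact parameter b; histogramming Σ over impact parameters sampled uniformly over the projected area then approximates the PDF that the paper derives analytically.

        import numpy as np
        from scipy.integrate import quad

        def rho(r, rho_c=1.0, r0=1.0, n=3):
            # Truncated radial profile rho(r) = rho_c / (1 + (r/r0)^2)^(n/2).
            return rho_c / (1.0 + (r / r0) ** 2) ** (n / 2.0)

        def surface_density(b, R, **profile):
            # Integrate the profile through a cloud truncated at radius R
            # along the line of sight at impact parameter b (b < R).
            zmax = np.sqrt(max(R ** 2 - b ** 2, 0.0))
            val, _ = quad(lambda z: rho(np.sqrt(b ** 2 + z ** 2), **profile), -zmax, zmax)
            return val

        # Example: Sigma(b) for an n = 3 profile in a cloud of radius 5*r0,
        # with b drawn uniformly over the projected disk (b ~ sqrt(u)*R).
        sigmas = [surface_density(b, 5.0, n=3) for b in np.sqrt(np.random.rand(1000)) * 5.0]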

  14. Analysis of the nonlinearity of Asian summer monsoon intraseasonal variability using spherical PDFs

    NASA Astrophysics Data System (ADS)

    Jajcay, Nikola; Hannachi, Abdel

    2013-04-01

    The Asian summer monsoon (ASM) is a high-dimensional and highly complex phenomenon affecting more than one fifth of the world population. The intraseasonal component of the ASM undergoes periods of active and break phases associated respectively with enhanced and reduced rainfall over the Indian subcontinent and surroundings. In this paper the nonlinear nature of the intraseasonal monsoon variability is investigated using the leading EOFs of the ERA-40 sea level pressure reanalysis field over the ASM region. The probability density function is then computed in spherical coordinates using an Epanechnikov kernel method. Three significant modes are identified. They represent respectively (i) an East-West mode with above-normal sea level pressure over the East China Sea and below-normal pressure over the Himalayas, (ii) a mode with above-normal sea level pressure over the East China Sea (without a compensating centre of opposite sign as in (i)), and (iii) a mode with below-normal sea level pressure over the East China Sea (same as (ii) but with opposite sign). Relationships to the large-scale flow are also investigated and discussed.
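    A minimal sketch of the kernel ingredient, in Python; this one-dimensional version only shows the Epanechnikov kernel itself, whereas the paper estimates the PDF on a sphere spanned by the leading EOFs.

        import numpy as np

        def epanechnikov_kde(grid, samples, h):
            # K(u) = 0.75 * (1 - u^2) on |u| <= 1, zero elsewhere.
            u = (np.asarray(grid)[:, None] - np.asarray(samples)[None, :]) / h
            k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
            return k.mean(axis=1) / h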

  15. Probability function of breaking-limited surface elevation. [wind generated waves of ocean

    NASA Technical Reports Server (NTRS)

    Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.

    1989-01-01

    The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, ζ_b(t), is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζ_b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.

  16. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  17. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  18. Noise-induced transitions in a double-well oscillator with nonlinear dissipation.

    PubMed

    Semenov, Vladimir V; Neiman, Alexander B; Vadivasova, Tatyana E; Anishchenko, Vadim S

    2016-05-01

    We develop a model of bistable oscillator with nonlinear dissipation. Using a numerical simulation and an electronic circuit realization of this system we study its response to additive noise excitations. We show that depending on noise intensity the system undergoes multiple qualitative changes in the structure of its steady-state probability density function (PDF). In particular, the PDF exhibits two pitchfork bifurcations versus noise intensity, which we describe using an effective potential and corresponding normal form of the bifurcation. These stochastic effects are explained by the partition of the phase space by the nullclines of the deterministic oscillator.
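    A minimal sketch of the kind of simulation behind such results, in Python. The dissipation here is linear (-gamma*v), unlike the nonlinear dissipation of the paper, so this only illustrates how a steady-state PDF is estimated by histogramming a long Euler-Maruyama path in a double well V(x) = x^4/4 - x^2/2.

        import numpy as np

        def simulate_double_well(steps=200_000, dt=1e-3, gamma=0.5, D=0.05, seed=0):
            rng = np.random.default_rng(seed)
            x, v = 1.0, 0.0
            xs = np.empty(steps)
            for i in range(steps):
                # Langevin update: dv = (-V'(x) - gamma*v) dt + sqrt(2 D dt) dW
                v += (x - x ** 3 - gamma * v) * dt \
                     + np.sqrt(2.0 * D * dt) * rng.standard_normal()
                x += v * dt
                xs[i] = x
            return xs

        # np.histogram(simulate_double_well(), bins=100, density=True)
        # approximates the steady-state PDF; sweeping D probes how its
        # structure changes with noise intensity.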

  19. A product Pearson-type VII density distribution

    NASA Astrophysics Data System (ADS)

    Nadarajah, Saralees; Kotz, Samuel

    2008-01-01

    The Pearson-type VII distributions (containing the Student's t distributions) are becoming increasingly prominent and are being considered as competitors to the normal distribution. Motivated by real examples in decision sciences, Bayesian statistics, probability theory and physics, a new Pearson-type VII distribution is introduced by taking the product of two Pearson-type VII pdfs. Various structural properties of this distribution are derived, including its cdf, moments, mean deviation about the mean, mean deviation about the median, entropy, asymptotic distribution of the extreme order statistics, maximum likelihood estimates and the Fisher information matrix. Finally, an application to a Bayesian testing problem is illustrated.

  20. A Cellular Automaton model for pedestrian counterflow with swapping

    NASA Astrophysics Data System (ADS)

    Tao, Y. Z.; Dong, L. Y.

    2017-06-01

    In this paper, we propose a new floor field Cellular Automaton (CA) model that considers the swapping behavior of pedestrians. Neighboring pedestrians moving in opposite directions swap positions with a probability determined by the linear density of the pedestrian flow. The swapping, which happens simultaneously with the normal movement, is introduced to eliminate gridlock in the low-density region. Numerical results show that the fundamental diagram is in good agreement with the measured data. The model is then applied to investigate counterflow, and four typical states are found: free flow, lane, intermediate, and congestion states. More attention is paid to the intermediate state, in which lane formation and local congestion alternate in an irregular manner. The swapping plays a vital role in reducing gridlock. Furthermore, the influence of the corridor size and individuals' eyesight on counterflow is discussed in detail.

  1. PFOS induced lipid metabolism disturbances in BALB/c mice through inhibition of low density lipoproteins excretion

    NASA Astrophysics Data System (ADS)

    Wang, Ling; Wang, Yu; Liang, Yong; Li, Jia; Liu, Yuchen; Zhang, Jie; Zhang, Aiqian; Fu, Jianjie; Jiang, Guibin

    2014-04-01

    Male BALB/c mice fed either a regular or a high fat diet were exposed to 0, 5 or 20 mg/kg perfluorooctane sulfonate (PFOS) for 14 days. Increased body weight, serum glucose, cholesterol and lipoprotein levels were observed in mice given a high fat diet. However, all PFOS-treated mice showed reduced levels of serum lipids and lipoproteins. Decreased liver glycogen content was also observed, accompanied by reduced serum glucose levels. Histological and ultrastructural examination detected more lipid droplets accumulated in hepatocytes after PFOS exposure. Moreover, the transcriptional activity of lipid metabolism related genes suggests that PFOS toxicity is probably unrelated to PPARα transcription. The present study demonstrates a lipid disturbance caused by PFOS and thus points to its role in inhibiting the secretion and normal function of low density lipoproteins.

  2. Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyż, W.; Zalewski, K.

    2005-10-01

    It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.

  3. Use of uninformative priors to initialize state estimation for dynamical systems

    NASA Astrophysics Data System (ADS)

    Worthy, Johnny L.; Holzinger, Marcus J.

    2017-10-01

    The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.

  4. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.

  5. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
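    A sketch of one generalized step of the procedure for a univariate normal mixture, in Python (array layout and names are ours). The M-step targets below are the fixed point of the classical successive-approximations (EM) map; step = 1 recovers that procedure, and the papers establish local convergence for step sizes between 0 and 2.

        import numpy as np
        from scipy.stats import norm

        def em_step(x, w, mu, sigma, step=1.0):
            x = np.asarray(x, dtype=float)
            w, mu, sigma = (np.asarray(a, dtype=float) for a in (w, mu, sigma))
            # E-step: posterior responsibility of each component for each point.
            dens = w[None, :] * norm.pdf(x[:, None], mu[None, :], sigma[None, :])
            r = dens / dens.sum(axis=1, keepdims=True)
            nk = r.sum(axis=0)
            # M-step targets (the ordinary EM update).
            w_t = nk / len(x)
            mu_t = (r * x[:, None]).sum(axis=0) / nk
            var_t = (r * (x[:, None] - mu_t[None, :]) ** 2).sum(axis=0) / nk
            # Deflected step: x_new = x_old + step * (M(x_old) - x_old).
            # For step > 1 a practical code should guard against
            # non-positive variances caused by overshooting.
            w_n = w + step * (w_t - w)
            mu_n = mu + step * (mu_t - mu)
            var_n = sigma ** 2 + step * (var_t - sigma ** 2)
            return w_n / w_n.sum(), mu_n, np.sqrt(var_n)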

  6. Principles of Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Landé, Alfred

    2013-10-01

    Preface; Introduction: 1. Observation and interpretation; 2. Difficulties of the classical theories; 3. The purpose of quantum theory; Part I. Elementary Theory of Observation (Principle of Complementarity): 4. Refraction in inhomogeneous media (force fields); 5. Scattering of charged rays; 6. Refraction and reflection at a plane; 7. Absolute values of momentum and wave length; 8. Double ray of matter diffracting light waves; 9. Double ray of matter diffracting photons; 10. Microscopic observation of ρ (x) and σ (p); 11. Complementarity; 12. Mathematical relation between ρ (x) and σ (p) for free particles; 13. General relation between ρ (q) and σ (p); 14. Crystals; 15. Transition density and transition probability; 16. Resultant values of physical functions; matrix elements; 17. Pulsating density; 18. General relation between ρ (t) and σ (ε); 19. Transition density; matrix elements; Part II. The Principle of Uncertainty: 20. Optical observation of density in matter packets; 21. Distribution of momenta in matter packets; 22. Mathematical relation between ρ and σ; 23. Causality; 24. Uncertainty; 25. Uncertainty due to optical observation; 26. Dissipation of matter packets; rays in Wilson Chamber; 27. Density maximum in time; 28. Uncertainty of energy and time; 29. Compton effect; 30. Bothe-Geiger and Compton-Simon experiments; 31. Doppler effect; Raman effect; 32. Elementary bundles of rays; 33. Jeans' number of degrees of freedom; 34. Uncertainty of electromagnetic field components; Part III. The Principle of Interference and Schrödinger's equation: 35. Physical functions; 36. Interference of probabilities for p and q; 37. General interference of probabilities; 38. Differential equations for Ψp (q) and Xq (p); 39. Differential equation for φβ (q); 40. The general probability amplitude Φβ' (Q); 41. Point transformations; 42. General theorem of interference; 43. Conjugate variables; 44. Schrödinger's equation for conservative systems; 45. Schrödinger's equation for non-conservative systems; 46. Perturbation theory; 47. Orthogonality, normalization and Hermitian conjugacy; 48. General matrix elements; Part IV. The Principle of Correspondence: 49. Contact transformations in classical mechanics; 50. Point transformations; 51. Contact transformations in quantum mechanics; 52. Constants of motion and angular co-ordinates; 53. Periodic orbits; 54. De Broglie and Schrödinger function; correspondence to classical mechanics; 55. Packets of probability; 56. Correspondence to hydrodynamics; 57. Motion and scattering of wave packets; 58. Formal correspondence between classical and quantum mechanics; Part V. Mathematical Appendix: Principle of Invariance: 59. The general theorem of transformation; 60. Operator calculus; 61. Exchange relations; three criteria for conjugacy; 62. First method of canonical transformation; 63. Second method of canonical transformation; 64. Proof of the transformation theorem; 65. Invariance of the matrix elements against unitary transformations; 66. Matrix mechanics; Index of literature; Index of names and subjects.

  7. Quantum fluctuation theorems and generalized measurements during the force protocol.

    PubMed

    Watanabe, Gentaro; Venkatesh, B Prasanna; Talkner, Peter; Campisi, Michele; Hänggi, Peter

    2014-03-01

    Generalized measurements of an observable performed on a quantum system during a force protocol are investigated and conditions that guarantee the validity of the Jarzynski equality and the Crooks relation are formulated. In agreement with previous studies by M. Campisi, P. Talkner, and P. Hänggi [Phys. Rev. Lett. 105, 140601 (2010); Phys. Rev. E 83, 041114 (2011)], we find that these fluctuation relations are satisfied for projective measurements; however, for generalized measurements special conditions on the operators determining the measurements need to be met. For the Jarzynski equality to hold, the measurement operators of the forward protocol must be normalized in a particular way. The Crooks relation additionally entails that the backward and forward measurement operators depend on each other. Yet, quite some freedom is left as to how the two sets of operators are interrelated. This ambiguity is removed if one considers selective measurements, which are specified by a joint probability density function of work and measurement results of the considered observable. We find that the respective forward and backward joint probabilities satisfy the Crooks relation only if the measurement operators of the forward and backward protocols are the time-reversed adjoints of each other. In this case, the work probability density function conditioned on the measurement result satisfies a modified Crooks relation. The modification appears as a protocol-dependent factor that can be expressed by the information gained by the measurements during the forward and backward protocols. Finally, detailed fluctuation theorems with an arbitrary number of intervening measurements are obtained.

  8. Log-Linear Models for Gene Association

    PubMed Central

    Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.

    2009-01-01

    We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032

  9. The bingo model of survivorship: 1. probabilistic aspects.

    PubMed

    Murphy, E A; Trojak, J E; Hou, W; Rohde, C A

    1981-01-01

    A "bingo" model is one in which the pattern of survival of a system is determined by whichever of several components, each with its own particular distribution for survival, fails first. The model is motivated by the study of lifespan in animals. A number of properties of such systems are discussed in general. They include the use of a special criterion of skewness that probably corresponds more closely than traditional measures to what the eye observes in casually inspecting data. This criterion is the ratio, r(h), of the probability density at a point an arbitrary distance, h, above the mode to that an equal distance below the mode. If this ratio exceeds unity for all positive arguments, the distribution is considered positively asymmetrical, and conversely. Details of the bingo model are worked out for several types of base distributions: the rectangular, the triangular, the logistic, and by numerical methods, the normal, lognormal, and gamma.

  10. Two proposed convergence criteria for Monte Carlo solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Pederson, S.P.; Booth, T.E.

    1992-01-01

    The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) The random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two proposed methods are discussed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf).
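    A sketch of the first check, in Python; this is the common sample-moment form of the relative variance of the variance and may differ in detail from the estimator the paper proposes. Small values (a frequently quoted rule of thumb is below about 0.1) support, but do not prove, that N is large enough for a normal-based CI.

        import numpy as np

        def relative_vov(x):
            # VOV = sum (x_i - xbar)^4 / (sum (x_i - xbar)^2)^2 - 1/N
            x = np.asarray(x, dtype=float)
            d = x - x.mean()
            return (d ** 4).sum() / (d ** 2).sum() ** 2 - 1.0 / len(x)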

  11. Critical spreading dynamics of parity conserving annihilating random walks with power-law branching

    NASA Astrophysics Data System (ADS)

    Laise, T.; dos Anjos, F. C.; Argolo, C.; Lyra, M. L.

    2018-09-01

    We investigate the critical spreading of the parity conserving annihilating random walks model with Lévy-like branching. The random walks are considered to perform normal diffusion with probability p on the sites of a one-dimensional lattice, annihilating in pairs by contact. With probability 1 - p, each particle can also produce two offspring which are placed at a distance r from the original site following a power-law Lévy-like distribution P(r) ∝ 1/r^α. We perform numerical simulations starting from a single particle. A finite-time scaling analysis is employed to locate the critical diffusion probability p_c below which a finite density of particles develops in the long-time limit. Further, we estimate the spreading dynamical exponents related to the increase of the average number of particles at the critical point and its respective fluctuations. The critical exponents deviate from those of the counterpart model with short-range branching for small values of α. The numerical data suggest that continuously varying spreading exponents set in while the branching process still results in diffusive-like spreading.

  12. A statistical treatment of bioassay pour fractions

    NASA Astrophysics Data System (ADS)

    Barengoltz, Jack; Hughes, David

    A bioassay is a method for estimating the number of bacterial spores on a spacecraft surface for the purpose of demonstrating compliance with planetary protection (PP) requirements (Ref. 1). The details of the process may be seen in the appropriate PP document (e.g., for NASA, Ref. 2). In general, the surface is mechanically sampled with a damp sterile swab or wipe. The completion of the process is colony formation in a growth medium in a plate (Petri dish); the colonies are counted. Consider a set of samples from randomly selected, known areas of one spacecraft surface, for simplicity. One may calculate the mean and standard deviation of the bioburden density, which is the ratio of counts to area sampled. The standard deviation represents an estimate of the variation from place to place of the true bioburden density commingled with the precision of the individual sample counts. The accuracy of individual sample results depends on the equipment used, the collection method, and the culturing method. One aspect that greatly influences the result is the pour fraction, which is the quantity of fluid added to the plates divided by the total fluid used in extracting spores from the sampling equipment. In an analysis of a single sample’s counts due to the pour fraction, one seeks to answer the question: if a certain number of spores is counted with a known pour fraction, what is the probability that some additional number of spores remains in the part of the rinse not poured? This is given for specific values by the binomial distribution density, where detection (of culturable spores) is success and the probability of success is the pour fraction. A special summation over the binomial distribution, equivalent to adding over all possible values of the true total number of spores, is performed. This distribution, when normalized, will almost yield the desired quantity. It is the probability that the additional number of spores does not exceed a certain value. Of course, for a desired value of uncertainty, one must invert the calculation. However, this probability of finding exactly the number of spores in the poured part is correct only in the case where all values of the true number of spores greater than or equal to the adjusted count are equally probable. This is not realistic, of course, but the result can only overestimate the uncertainty, so it is useful. In probability terms, one has the conditional probability given any true total number of spores; therefore one must multiply it by the probability of each possible true count before the summation. If the counts for a sample set (of which this is one sample) are available, one may use the calculated variance and the normal probability distribution. In this approach, one assumes a normal distribution and neglects the contribution from spatial variation. The former is a common assumption. The latter can only add to the conservatism (overestimate the number of spores at some level of confidence). A more straightforward approach is to assume a Poisson probability distribution for the measured total sample set counts, and use the product of the number of samples and the mean number of counts per sample as the mean of the Poisson distribution. It is necessary to set the total count to 1 in the Poisson distribution when the actual total count is zero.
Finally, even when the planetary protection requirements for spore burden refer only to the mean values, they require an adjustment for pour fraction and method efficiency (a PP specification based on independent data). The adjusted mean values are a 50/50 proposition (i.e., the probability of the true total counts in the sample set exceeding the estimate is 0.50). However, this is highly unconservative when the total counts are zero, since no adjustment to the mean values occurs for either pour fraction or efficiency. The recommended approach is once again to set the total counts to 1, but now applied to the mean values. Then one may apply the corrections to the revised counts. It can be shown by the methods developed in this work that this change is usually conservative enough to increase the level of confidence in the estimate to 0.5. 1. NASA. (2005) Planetary protection provisions for robotic extraterrestrial missions. NPR 8020.12C, April 2005, National Aeronautics and Space Administration, Washington, DC. 2. NASA. (2010) Handbook for the Microbiological Examination of Space Hardware, NASA-HDBK-6022, National Aeronautics and Space Administration, Washington, DC.
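    The summation described above can be coded directly. The Python sketch below (function name is ours) adopts the flat prior over the true total count that the abstract uses for its conditional result, under which the normalized summation reduces to a negative-binomial distribution for the number of missed spores j, with term C(k + j, j) f^(k+1) (1 - f)^j. Finding the smallest m for which the returned probability reaches 1 - alpha inverts the calculation to give an upper bound at that confidence.

        from math import comb

        def prob_missed_at_most(k, f, m):
            # P(spores left in the unpoured fraction <= m | k counted),
            # pour fraction f, flat prior over the true total count.
            return sum(comb(k + j, j) * f ** (k + 1) * (1 - f) ** j
                       for j in range(m + 1))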

  13. Investigation of estimators of probability density functions

    NASA Technical Reports Server (NTRS)

    Speed, F. M.

    1972-01-01

    Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.

  14. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum... particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output... an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for

  15. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    PubMed

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
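    A sketch of the simplest of the six constructions, the homoscedastic normal, in Python using statsmodels for the exposure model (variable names are ours): the stabilized weight is the marginal exposure density divided by the density conditional on covariates.

        import numpy as np
        import statsmodels.api as sm
        from scipy.stats import norm

        def stabilized_weights(exposure, covariates):
            a = np.asarray(exposure, dtype=float)
            X = sm.add_constant(np.asarray(covariates, dtype=float))
            fit = sm.OLS(a, X).fit()
            sd = np.sqrt(fit.scale)                 # residual standard deviation
            f_cond = norm.pdf(a, loc=fit.fittedvalues, scale=sd)
            f_marg = norm.pdf(a, loc=a.mean(), scale=a.std(ddof=1))
            return f_marg / f_cond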

  16. Comparison of amyloid plaque contrast generated by T2-, T2*-, and susceptibility-weighted imaging methods in transgenic mouse models of Alzheimer’s disease

    PubMed Central

    Chamberlain, Ryan; Reyes, Denise; Curran, Geoffrey L.; Marjanska, Malgorzata; Wengenack, Thomas M.; Poduslo, Joseph F.; Garwood, Michael; Jack, Clifford R.

    2009-01-01

    One of the hallmark pathologies of Alzheimer’s disease (AD) is amyloid plaque deposition. Plaques appear hypointense on T2- and T2*-weighted MR images probably due to the presence of endogenous iron, but no quantitative comparison of various imaging techniques has been reported. We estimated the T1, T2, T2*, and proton density values of cortical plaques and normal cortical tissue and analyzed the plaque contrast generated by a collection of T2-, T2*-, and susceptibility-weighted imaging (SWI) methods in ex vivo transgenic mouse specimens. The proton density and T1 values were similar for both cortical plaques and normal cortical tissue. The T2 and T2* values were similar in cortical plaques, which indicates that the iron content of cortical plaques may not be as large as previously thought. Ex vivo plaque contrast was increased compared to a previously reported spin echo sequence by summing multiple echoes and by performing SWI; however, gradient echo and susceptibility weighted imaging were found to be impractical for in vivo imaging due to susceptibility interface-related signal loss in the cortex. PMID:19253386

  17. Evaluating detection probabilities for American marten in the Black Hills, South Dakota

    USGS Publications Warehouse

    Smith, Joshua B.; Jenks, Jonathan A.; Klaver, Robert W.

    2007-01-01

    Assessing the effectiveness of monitoring techniques designed to determine presence of forest carnivores, such as American marten (Martes americana), is crucial for validation of survey results. Although comparisons between techniques have been made, little attention has been paid to the issue of detection probabilities (p). Thus, the underlying assumption has been that detection probabilities equal 1.0. We used presence-absence data obtained from a track-plate survey in conjunction with results from a saturation-trapping study to derive detection probabilities when marten occurred at high (>2 marten/10.2 km²) and low (≤1 marten/10.2 km²) densities within eight 10.2-km² quadrats. Estimated probability of detecting marten in high-density quadrats was p = 0.952 (SE = 0.047), whereas the detection probability for low-density quadrats was considerably lower (p = 0.333, SE = 0.136). Our results indicated that failure to account for imperfect detection could lead to an underestimation of marten presence in 15-52% of low-density quadrats in the Black Hills, South Dakota, USA. We recommend that repeated site-survey data be analyzed to assess detection probabilities when documenting carnivore survey results.

  18. A theory of stationarity and asymptotic approach in dissipative systems

    NASA Astrophysics Data System (ADS)

    Rubel, Michael Thomas

    2007-05-01

    The approximate dynamics of many physical phenomena, including turbulence, can be represented by dissipative systems of ordinary differential equations. One often turns to numerical integration to solve them. There is an incompatibility, however, between the answers it can produce (i.e., specific solution trajectories) and the questions one might wish to ask (e.g., what behavior would be typical in the laboratory?). To determine its outcome, numerical integration requires more detailed initial conditions than a laboratory could normally provide. In place of initial conditions, experiments stipulate how tests should be carried out: only under statistically stationary conditions, for example, or only during asymptotic approach to a final state. Stipulations such as these, rather than initial conditions, are what determine outcomes in the laboratory. This theoretical study examines whether the points of view can be reconciled: What is the relationship between one's statistical stipulations for how an experiment should be carried out--stationarity or asymptotic approach--and the expected results? How might those results be determined without invoking initial conditions explicitly? To answer these questions, stationarity and asymptotic approach conditions are analyzed in detail. Each condition is treated as a statistical constraint on the system--a restriction on the probability density of states that might be occupied when measurements take place. For stationarity, this reasoning leads to a singular, invariant probability density which is already familiar from dynamical systems theory. For asymptotic approach, it leads to a new, more regular probability density field. A conjecture regarding what appears to be a limit relationship between the two densities is presented. By making use of the new probability densities, one can derive output statistics directly, avoiding the need to create or manipulate initial data, and thereby avoiding the conceptual incompatibility mentioned above. This approach also provides a clean way to derive reduced-order models, complete with local and global error estimates, as well as a way to compare existing reduced-order models objectively. The new approach is explored in the context of five separate test problems: a trivial one-dimensional linear system, a damped unforced linear oscillator in two dimensions, the isothermal Rayleigh-Plesset equation, Lorenz's equations, and the Stokes limit of Burgers' equation in one space dimension. In each case, various output statistics are deduced without recourse to initial conditions. Further, reduced-order models are constructed for asymptotic approach of the damped unforced linear oscillator, the isothermal Rayleigh-Plesset system, and Lorenz's equations, and for stationarity of Lorenz's equations.

  19. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation are retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probability that each candidate is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
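    A sketch of the model-probability step, in Python. The AIC-based Akaike weights below are an illustrative stand-in for the paper's Kullback-Leibler multimodel weights, and the Bayesian parameter densities and importance-sampling propagation are omitted; positive data are assumed for the gamma and lognormal fits.

        import numpy as np
        from scipy import stats

        def model_weights(data, candidates=(stats.norm, stats.gamma, stats.lognorm)):
            data = np.asarray(data, dtype=float)
            aic = []
            for dist in candidates:
                params = dist.fit(data)             # maximum-likelihood fit
                loglik = dist.logpdf(data, *params).sum()
                aic.append(2 * len(params) - 2 * loglik)
            d = np.asarray(aic) - min(aic)
            w = np.exp(-0.5 * d)
            return w / w.sum()                      # probability each model is best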

  20. Nonstationary envelope process and first excursion probability.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1972-01-01

    The definition of the stationary random envelope proposed by Cramér and Leadbetter is extended to the envelope of a nonstationary random process possessing an evolutionary power spectral density. The density function, the joint density function, the moment function, and the level-crossing rate of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.

  1. The force distribution probability function for simple fluids by density functional theory.

    PubMed

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-A F^2), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low a density) for the two potentials, but with a smaller value for the constant A than that predicted by the DFT theory.
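    Normalizing the quoted form over the real line, and assuming F stands for a scalar force component, gives P(F) = sqrt(A/pi) exp(-A F^2); a minimal Python sketch:

        import numpy as np
        from scipy.special import erf

        def force_pdf(F, A):
            # Normalized Gaussian force density (scalar-component assumption).
            return np.sqrt(A / np.pi) * np.exp(-A * F ** 2)

        def force_cdf(F, A):
            # Corresponding distribution function W(F).
            return 0.5 * (1.0 + erf(np.sqrt(A) * F))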

  2. Mycobacterial Cultures Contain Cell Size and Density Specific Sub-populations of Cells with Significant Differential Susceptibility to Antibiotics, Oxidative and Nitrite Stress

    PubMed Central

    Vijay, Srinivasan; Nair, Rashmi Ravindran; Sharan, Deepti; Jakkala, Kishor; Mukkayyan, Nagaraja; Swaminath, Sharmada; Pradhan, Atul; Joshi, Niranjan V.; Ajitkumar, Parthasarathi

    2017-01-01

    The present study shows the existence of two specific sub-populations of Mycobacterium smegmatis and Mycobacterium tuberculosis cells differing in size and density, in the mid-log phase (MLP) cultures, with significant differential susceptibility to antibiotic, oxidative, and nitrite stress. One of these sub-populations (~10% of the total population) contained short-sized cells (SCs) generated through highly-deviated asymmetric cell division (ACD) of normal/long-sized mother cells and symmetric cell divisions (SCD) of short-sized mother cells. The other sub-population (~90% of the total population) contained normal/long-sized cells (NCs). The SCs were acid-fast stainable and heat-susceptible, and contained a high density of membrane vesicles (MVs, known to be lipid-rich) on their surface, while the NCs possessed a negligible density of MVs on the surface, as revealed by scanning and transmission electron microscopy. Percoll density gradient fractionation of MLP cultures showed the SCs-enriched fraction (SCF) at lower density (probably indicating lipid-richness) and the NCs-enriched fraction (NCF) at higher density of Percoll fractions. While live cell imaging showed that the SCs and the NCs could grow and divide to form colonies on agarose pads, the SCF and NCF cells could independently regenerate MLP populations in liquid and solid media, indicating their full genomic content and population regeneration potential. CFU-based assays showed the SCF cells to be significantly more susceptible than NCF cells to a range of concentrations of rifampicin and isoniazid (antibiotic stress), H2O2 (oxidative stress), and acidified NaNO2 (nitrite stress). Live cell imaging showed significantly higher susceptibility of the SCs of SC-NC sister daughter cell pairs, formed from highly-deviated ACD of normal/long-sized mother cells, to rifampicin and H2O2, as compared to the sister daughter NCs, irrespective of their comparable growth rates. The SC-SC sister daughter cell pairs, formed from the SCDs of short-sized mother cells and having comparable growth rates, always showed comparable stress-susceptibility. These observations and the presence of M. tuberculosis SCs and NCs in pulmonary tuberculosis patients' sputum earlier reported by us imply a physiological role for the SCs and the NCs under the stress conditions. The plausible reasons for the higher stress susceptibility of SCs and lower stress susceptibility of NCs are discussed. PMID:28377757

  3. Current distribution in tissues with conducted electrical weapons operated in drive-stun mode.

    PubMed

    Panescu, Dorin; Kroll, Mark W; Brave, Michael

    2016-08-01

    The TASER® conducted electrical weapon (CEW) is best known for delivering electrical pulses that can temporarily incapacitate subjects by overriding normal motor control. The alternative drive-stun mode is less understood, and the goal of this paper is to analyze the distribution of currents in tissues when the CEW is operated in this mode. Finite element modeling (FEM) was used to approximate current density in tissues with boundary electrical sources placed 40 mm apart. This separation was equivalent to the distance between drive-stun mode TASER X26™, X26P, and X2 CEW electrodes located on the device itself and between those located on the expended CEW cartridge. The FEMs estimated the amount of current flowing through various body tissues located underneath the electrodes. The FEM simulated the attenuating effects of both a thin and a normal layer of fat. The resulting current density distributions were used to compute the residual amount of current flowing through deeper layers of tissue. Numerical modeling estimated that the skin, fat and skeletal muscle layers passed at least 86% or 91% of total CEW current, assuming a thin or normal fat layer thickness, respectively. The current density and electric field strength exceeded the thresholds associated with an increased probability of ventricular fibrillation (VFT_J) or cardiac capture (CCT_E) only in the skin and the subdermal fat layers. The fat layer provided significant attenuation of drive-stun CEW currents. Beyond the skeletal muscle layer, only fractional amounts of the total CEW current were estimated to flow. The regions presenting a risk of VF induction or cardiac capture were well away from the typical heart depth.

  4. Scanning laser densitometry and color perimetry demonstrate reduced photopigment density and sensitivity in two patients with retinal degeneration.

    PubMed

    Tornow, R P; Stilling, R; Zrenner, E

    1999-10-01

    To test the feasibility of scanning laser densitometry with a modified Rodenstock scanning laser ophthalmoscope (SLO) to measure the rod and cone photopigment distribution in patients with retinal diseases. Scanning laser densitometry was performed using a modified Rodenstock scanning laser ophthalmoscope. The distribution of the photopigments was calculated from dark adapted and bleached images taken with the 514 nm laser of the SLO. This wavelength is absorbed by rod and cone photopigments. Discrimination is possible due to their different spatial distribution. Additionally, to measure retinal sensitivity profiles, dark adapted two color static perimetry with a Tübinger manual perimeter was performed along the horizontal meridian with 1 degree spacing. A patient with retinitis pigmentosa had slightly reduced photopigment density within the central +/- 5 degrees but no detectable photopigment for eccentricities beyond 5 degrees. A patient with cone dystrophy had nearly normal pigment density beyond +/- 5 degrees, but considerably reduced photopigment density within the central +/- 5 degrees. Within the central +/- 5 degrees, the patient with retinitis pigmentosa had normal sensitivity for the red stimulus and reduced sensitivity for the green stimulus. There was no measurable function beyond 7 degrees. The patient with cone dystrophy had normal sensitivity for the green stimulus outside the foveal center and reduced sensitivity for the red stimulus at the foveal center. The results of color perimetry for this patient with a central scotoma were probably influenced by eccentric fixation. Scanning laser densitometry with a modified Rodenstock SLO is a useful method to assess the human photopigment distribution. Densitometry results were confirmed by dark adapted two color static perimetry. Photopigment distribution and retinal sensitivity profiles can be measured with high spatial resolution. This may help to measure exactly the temporal development of retinal diseases and to test the success of different therapeutic treatments. Both methods have limitations at the present state of development. However, some of these limitations can be overcome by further improving the instruments.

  5. In Search of Sleep Biomarkers of Alzheimer’s Disease: K-Complexes Do Not Discriminate between Patients with Mild Cognitive Impairment and Healthy Controls

    PubMed Central

    Reda, Flaminia; Gorgoni, Maurizio; Lauri, Giulia; Truglia, Ilaria; Cordone, Susanna; Scarpelli, Serena; Mangiaruga, Anastasia; D’Atri, Aurora; Ferrara, Michele; Lacidogna, Giordano; Marra, Camillo; Rossini, Paolo Maria; De Gennaro, Luigi

    2017-01-01

    The K-complex (KC) is one of the hallmarks of Non-Rapid Eye Movement (NREM) sleep. Recent observations point to a drastic decrease of spontaneous KCs in Alzheimer’s disease (AD). However, no study has investigated when, in the development of AD, this phenomenon starts. The assessment of KC density in mild cognitive impairment (MCI), a clinical condition considered a possible transitional stage between normal cognitive function and probable AD, is still lacking. The aim of the present study was to compare KC density in AD/MCI patients and healthy controls (HCs), also assessing the relationship between KC density and cognitive decline. Twenty amnesic MCI patients underwent a polysomnographic recording of a nocturnal sleep. Their data were compared to those of previously recorded 20 HCs and 20 AD patients. KCs during stage 2 NREM sleep were visually identified and KC densities of the three groups were compared. AD patients showed a significant KC density decrease compared with MCI patients and HCs, while no differences were observed between MCI patients and HCs. KC density was positively correlated with Mini-Mental State Examination (MMSE) scores. Our results point to the existence of an alteration of KC density only in a full-blown phase of AD, which was not observable in the early stage of the pathology (MCI), but linked with cognitive deterioration. PMID:28468235

  6. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  7. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhang; Chen, Wei

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many thin, independent slices with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
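
    A minimal sketch of the profile-slicing idea described above (not the authors' code; the layer densities, interface position, width and skewness below are illustrative assumptions), using a skew-normal CDF as the asymmetric interface profile and discretizing it into thin constant-density slices ready for a Parratt-style slab calculation:

      import numpy as np
      from scipy.stats import skewnorm

      rho_top, rho_bottom = 0.0, 1.0      # densities of the adjacent media (assumed units)
      z0, width, alpha = 50.0, 8.0, 4.0   # interface position, width, skewness (assumed)

      z = np.linspace(0.0, 100.0, 1001)   # depth grid
      # The skew-normal CDF interpolates between the two densities; alpha != 0
      # lets the density penetrate farther into one layer than the other.
      profile = rho_top + (rho_bottom - rho_top) * skewnorm.cdf(z, alpha, loc=z0, scale=width)

      # Discretize into thin slices with constant density and sharp interfaces,
      # the form expected by slab-model reflectivity routines such as Parratt's recursion.
      slice_thickness = np.diff(z)
      slice_density = 0.5 * (profile[:-1] + profile[1:])
      print(slice_density[::200])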

  8. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE PAGES

    Jiang, Zhang; Chen, Wei

    2017-11-03

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many thin, independent slices with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.

  9. Modeling of Dissipation Element Statistics in Turbulent Non-Premixed Jet Flames

    NASA Astrophysics Data System (ADS)

    Denker, Dominik; Attili, Antonio; Boschung, Jonas; Hennig, Fabian; Pitsch, Heinz

    2017-11-01

    The dissipation element (DE) analysis is a method for analyzing and compartmentalizing turbulent scalar fields. DEs can be described by two parameters, namely the Euclidean distance l between their extremal points and the scalar difference in the respective points Δϕ . The joint probability density function (jPDF) of these two parameters P(Δϕ , l) is expected to suffice for a statistical reconstruction of the scalar field. In addition, reacting scalars show a strong correlation with these DE parameters in both premixed and non-premixed flames. Normalized DE statistics show a remarkable invariance towards changes in Reynolds numbers. This feature of DE statistics was exploited in a Boltzmann-type evolution equation based model for the probability density function (PDF) of the distance between the extremal points P(l) in isotropic turbulence. Later, this model was extended for the jPDF P(Δϕ , l) and then adapted for the use in free shear flows. The effect of heat release on the scalar scales and DE statistics is investigated and an extended model for non-premixed jet flames is introduced, which accounts for the presence of chemical reactions. This new model is validated against a series of DNS of temporally evolving jet flames. European Research Council Project ``Milestone''.

  10. Slowdown of Interhelical Motions Induces a Glass Transition in RNA

    PubMed Central

    Frank, Aaron T.; Zhang, Qi; Al-Hashimi, Hashim M.; Andricioaei, Ioan

    2015-01-01

    RNA function depends crucially on the details of its dynamics. The simplest RNA dynamical unit is a two-way interhelical junction. Here, for such a unit—the transactivation response RNA element—we present evidence from molecular dynamics simulations, supported by nuclear magnetic resonance relaxation experiments, for a dynamical transition near 230 K. This glass transition arises from the freezing out of collective interhelical motional modes. The motions, resolved with site-specificity, are dynamically heterogeneous and exhibit non-Arrhenius relaxation. The microscopic origin of the glass transition is a low-dimensional, slow manifold consisting largely of the Euler angles describing interhelical reorientation. Principal component analysis over a range of temperatures covering the glass transition shows that the abrupt slowdown of motion finds its explanation in a localization transition that traps probability density into several disconnected conformational pools over the low-dimensional energy landscape. Upon temperature increase, the probability density pools then flood a larger basin, akin to a lakes-to-sea transition. Simulations on transactivation response RNA are also used to backcalculate inelastic neutron scattering data that match previous inelastic neutron scattering measurements on larger and more complex RNA structures and which, upon normalization, give temperature-dependent fluctuation profiles that overlap onto a glass transition curve that is quasi-universal over a range of systems and techniques. PMID:26083927

  11. Wave turbulence in shallow water models.

    PubMed

    Clark di Leoni, P; Cobelli, P J; Mininni, P D

    2014-06-01

    We study wave turbulence in shallow water flows in numerical simulations using two different approximations: the shallow water model and the Boussinesq model with weak dispersion. The equations for both models were solved using periodic grids with up to 2048^{2} points. In all simulations, the Froude number varies between 0.015 and 0.05, while the Reynolds number and level of dispersion are varied in a broader range to span different regimes. In all cases, most of the energy in the system remains in the waves, even after integrating the system for very long times. For shallow flows, nonlinear waves are nondispersive and the spectrum of potential energy is compatible with ∼k^{-2} scaling. For deeper (Boussinesq) flows, the nonlinear dispersion relation as directly measured from the wave and frequency spectrum (calculated independently) shows signatures of dispersion, and the spectrum of potential energy is compatible with predictions of weak turbulence theory, ∼k^{-4/3}. In this latter case, the nonlinear dispersion relation differs from the linear one and has two branches, which we explain with a simple qualitative argument. Finally, we study probability density functions of the surface height and find that in all cases the distributions are asymmetric. The probability density function can be approximated by a skewed normal distribution as well as by a Tayfun distribution.
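
    As a minimal illustration of the last point (synthetic samples standing in for measured surface heights; not the paper's data or code), one can fit a skewed normal distribution to height samples and read off the asymmetry:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Synthetic, asymmetric "surface height" samples.
      heights = stats.skewnorm.rvs(a=3.0, loc=0.0, scale=1.0, size=20000, random_state=rng)

      a, loc, scale = stats.skewnorm.fit(heights)   # maximum-likelihood fit
      print(f"fitted shape a = {a:.2f}, sample skewness = {stats.skew(heights):.2f}")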

  12. The Southampton-York Natural Scenes (SYNS) dataset: Statistics of surface attitude

    PubMed Central

    Adams, Wendy J.; Elder, James H.; Graf, Erich W.; Leyland, Julian; Lugtigheid, Arthur J.; Muryy, Alexander

    2016-01-01

    Recovering 3D scenes from 2D images is an under-constrained task; optimal estimation depends upon knowledge of the underlying scene statistics. Here we introduce the Southampton-York Natural Scenes dataset (SYNS: https://syns.soton.ac.uk), which provides comprehensive scene statistics useful for understanding biological vision and for improving machine vision systems. In order to capture the diversity of environments that humans encounter, scenes were surveyed at random locations within 25 indoor and outdoor categories. Each survey includes (i) spherical LiDAR range data (ii) high-dynamic range spherical imagery and (iii) a panorama of stereo image pairs. We envisage many uses for the dataset and present one example: an analysis of surface attitude statistics, conditioned on scene category and viewing elevation. Surface normals were estimated using a novel adaptive scale selection algorithm. Across categories, surface attitude below the horizon is dominated by the ground plane (0° tilt). Near the horizon, probability density is elevated at 90°/270° tilt due to vertical surfaces (trees, walls). Above the horizon, probability density is elevated near 0° slant due to overhead structure such as ceilings and leaf canopies. These structural regularities represent potentially useful prior assumptions for human and machine observers, and may predict human biases in perceived surface attitude. PMID:27782103

  13. Tangled nature model of evolutionary dynamics reconsidered: Structural and dynamical effects of trait inheritance

    NASA Astrophysics Data System (ADS)

    Andersen, Christian Walther; Sibani, Paolo

    2016-05-01

    Based on the stochastic dynamics of interacting agents which reproduce, mutate, and die, the tangled nature model (TNM) describes key emergent features of biological and cultural ecosystems' evolution. While trait inheritance is not included in many applications, i.e., the interactions of an agent and those of its mutated offspring are taken to be uncorrelated, in the family of TNMs introduced in this work correlations of varying strength are parametrized by a positive integer K. We first show that the interactions generated by our rule are nearly independent of K. Consequently, the structural and dynamical effects of trait inheritance can be studied independently of effects related to the form of the interactions. We then show that changing K strengthens the core structure of the ecology, leads to population abundance distributions better approximated by log-normal probability densities, and increases the probability that a species extant at time t_w also survives at t > t_w. Finally, survival probabilities of species are shown to decay as powers of the ratio t/t_w, a so-called pure aging behavior usually seen in glassy systems of physical origin. We find a quantitative dynamical effect of trait inheritance, namely, that increasing the value of K numerically decreases the decay exponent of the species survival probability.

  14. Tangled nature model of evolutionary dynamics reconsidered: Structural and dynamical effects of trait inheritance.

    PubMed

    Andersen, Christian Walther; Sibani, Paolo

    2016-05-01

    Based on the stochastic dynamics of interacting agents which reproduce, mutate, and die, the tangled nature model (TNM) describes key emergent features of biological and cultural ecosystems' evolution. While trait inheritance is not included in many applications, i.e., the interactions of an agent and those of its mutated offspring are taken to be uncorrelated, in the family of TNMs introduced in this work correlations of varying strength are parametrized by a positive integer K. We first show that the interactions generated by our rule are nearly independent of K. Consequently, the structural and dynamical effects of trait inheritance can be studied independently of effects related to the form of the interactions. We then show that changing K strengthens the core structure of the ecology, leads to population abundance distributions better approximated by log-normal probability densities, and increases the probability that a species extant at time t_{w} also survives at t>t_{w}. Finally, survival probabilities of species are shown to decay as powers of the ratio t/t_{w}, a so-called pure aging behavior usually seen in glassy systems of physical origin. We find a quantitative dynamical effect of trait inheritance, namely, that increasing the value of K numerically decreases the decay exponent of the species survival probability.

  15. Disjunctive Normal Shape and Appearance Priors with Applications to Image Segmentation.

    PubMed

    Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga

    2015-10-01

    The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. Active shape and appearance models require landmark points and assume unimodal shape and appearance distributions. Level set based shape priors are limited to global shape similarity. In this paper, we present novel shape and appearance priors for image segmentation based on an implicit parametric shape representation called the disjunctive normal shape model (DNSM). The DNSM is formed by a disjunction of conjunctions of half-spaces defined by discriminants. We learn shape and appearance statistics at varying spatial scales using nonparametric density estimation. Our method can generate a rich set of shape variations by locally combining training shapes. Additionally, by studying the intensity and texture statistics around each discriminant of our shape model, we construct a local appearance probability map. Experiments carried out on both medical and natural image datasets show the potential of the proposed method.
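
    A minimal sketch of the disjunctive-normal construction itself (illustrative half-space weights, not a trained model from the paper): a shape is the union (disjunction) of convex polytopes, each the intersection (conjunction) of half-spaces, with sigmoids giving a differentiable approximation of the indicator function:

      import numpy as np

      def sigmoid(t):
          return 1.0 / (1.0 + np.exp(-t))

      # Each polytope: rows of W and entries of b define half-spaces w.x + b > 0.
      # These particular numbers are made up for illustration.
      polytopes = [
          (np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]),
           np.array([1.0, 1.0, 1.0, 1.0])),                     # unit square
          (np.array([[1.0, 1.0], [-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0]]),
           np.array([2.5, -0.5, 2.0, 2.0])),                    # shifted diamond
      ]

      def dnsm(x, sharpness=10.0):
          """Approximate indicator of the union of the polytopes at point x."""
          outside = 1.0
          for W, b in polytopes:
              conjunction = np.prod(sigmoid(sharpness * (x @ W.T + b)), axis=-1)
              outside = outside * (1.0 - conjunction)   # NOT (AND of half-spaces)
          return 1.0 - outside                          # OR over polytopes

      print(dnsm(np.array([0.0, 0.0])), dnsm(np.array([5.0, 5.0])))  # ~1 inside, ~0 outside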

  16. Vibrational modes of thin oblate clouds of charge

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Spencer, Ross L.

    2002-07-01

    A numerical method is presented for finding the eigenfunctions (normal modes) and mode frequencies of azimuthally symmetric non-neutral plasmas confined in a Penning trap whose axial thickness is much smaller than their radial size. The plasma may be approximated as a charged disk in this limit; the normal modes and frequencies can be found if the surface charge density profile σ(r) of the disk and the trap bounce frequency profile ωz(r) are known. The dependence of the eigenfunctions and equilibrium plasma shapes on nonideal components of the confining Penning trap fields is discussed. The results of the calculation are compared with the experimental data of Weimer et al. [Phys. Rev. A 49, 3842 (1994)] and it is shown that the plasma in this experiment was probably hollow and had mode displacement functions that were concentrated near the center of the plasma.

  17. Probability and Quantum Paradigms: the Interplay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kracklauer, A. F.

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  18. Probability and Quantum Paradigms: the Interplay

    NASA Astrophysics Data System (ADS)

    Kracklauer, A. F.

    2007-12-01

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  19. Different subcellular localization of neurotensin-receptor and neurotensin-acceptor sites in the rat brain dopaminergic system.

    PubMed

    Schotte, A; Rostène, W; Laduron, P M

    1988-04-01

    The subcellular localization of neurotensin-receptor sites (NT2 sites) and neurotensin-acceptor sites (NT1 sites) was studied in rat caudate-putamen by isopycnic centrifugation in sucrose density gradients. [3H]Neurotensin binding to NT2 sites occurred as a major peak at higher sucrose densities, colocalized with [3H]dopamine uptake, and as a small peak at a lower density; whereas binding to NT1 sites occurred as a single large peak at an intermediate density. 6-Hydroxydopamine lesions of the median forebrain bundle resulted in a total loss of NT2 sites in the caudate-putamen but did not affect NT2 sites in the nucleus accumbens and the olfactory tubercle. NT1 sites were not affected. Kainic acid injections into the rat caudate-putamen led to a partial decrease of NT1 sites in this region 5 days later. After a few weeks they returned to normal. Therefore NT2 sites are probably associated with presynaptic nigrostriatal dopaminergic terminals in the caudate-putamen but not in the nucleus accumbens and the olfactory tubercle. A possible association of NT1 sites with glial cells is suggested.

  20. Some Exact Results for the Schroedinger Wave Equation with a Time Dependent Potential

    NASA Technical Reports Server (NTRS)

    Campbell, Joel

    2009-01-01

    The time dependent Schroedinger equation with a time dependent delta function potential is solved exactly for many special cases. In all other cases the problem can be reduced to an integral equation of the Volterra type. It is shown that by knowing the wave function at the origin, one may derive the wave function everywhere. Thus, the problem is reduced from a PDE in two variables to an integral equation in one. These results are used to compare adiabatic versus sudden changes in the potential. It is shown that adiabatic changes in the potential lead to conservation of the normalization of the probability density.

  1. Changes in recruitment of Rhesus soleus and gastrocnemius muscles following a 14 day spaceflight

    NASA Technical Reports Server (NTRS)

    Hodgson, J. A.; Bodine-Fowler, S. C.; Roy, R. R.; De Leon, R. D.; De Guzman, C. P.; Koslovskaia, I.; Sirota, M.; Edgerton, V. R.

    1991-01-01

    The effect of microgravity on the recruitment patterns of the soleus, gastrocnemius, and tibialis anterior muscles was investigated by comparing electromyograms (EMGs) of these muscles in Rhesus monkeys implanted with EMG electrodes, recorded before and after a 14-day flight on board Cosmos 2044. It was found that the EMG amplitude values in the soleus muscle decreased after the spaceflight but returned to normal values over the 2-wk recovery period. The EMG amplitudes of the medial gastrocnemius and tibialis anterior were not changed by flight. Joint probability density distributions displayed changes after flight in both the soleus and gastrocnemius muscles, but not in the tibialis anterior.

  2. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    ERIC Educational Resources Information Center

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…
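
    For example, the momentum probability density of an eigenstate of an "infinite" well is the squared modulus of the Fourier transform of the position wave function, and is a continuous density rather than two spikes at the classically expected momenta. A minimal numerical sketch (natural units with hbar = 1 and a well on [0, L] assumed):

      import numpy as np

      hbar, L, n = 1.0, 1.0, 3
      x = np.linspace(0.0, L, 4001)
      dx = x[1] - x[0]
      psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)   # energy eigenstate

      p = np.linspace(-60.0, 60.0, 1201)
      dp = p[1] - p[0]
      # phi(p) = (2*pi*hbar)^(-1/2) * integral of psi(x) * exp(-i p x / hbar) dx
      phase = np.exp(-1j * np.outer(p, x) / hbar)
      phi = (phase * psi).sum(axis=1) * dx / np.sqrt(2.0 * np.pi * hbar)
      pdf = np.abs(phi) ** 2

      print("normalization:", round(pdf.sum() * dp, 4))   # ~1 for a wide enough p-range
      print("peaks near +/- n*pi*hbar/L =", n * np.pi * hbar / L)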

  3. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
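
    A minimal numerical sketch of that pipeline (a hypothetical die-mean constraint, not the authors' worked example): solve the classic MaxEnt problem for an exact constraint value, then propagate a Gaussian over the constraint value through the same map to get a distribution over MaxEnt distributions:

      import numpy as np
      from scipy.optimize import brentq

      vals = np.arange(1, 7)   # outcomes of a six-sided die

      def maxent_probs(mean):
          """Classic MaxEnt pmf on 1..6 with E[X] = mean: p_i proportional to exp(-lam*i)."""
          def mean_gap(lam):
              w = np.exp(-lam * vals)
              return (vals * w).sum() / w.sum() - mean
          lam = brentq(mean_gap, -50.0, 50.0)
          w = np.exp(-lam * vals)
          return w / w.sum()

      print(maxent_probs(4.5).round(4))   # exact constraint: point probabilities

      # Uncertain constraint: the mean is only known as N(4.5, 0.1^2).
      rng = np.random.default_rng(1)
      draws = np.array([maxent_probs(m) for m in rng.normal(4.5, 0.1, 2000)])
      print("posterior mean pmf:", draws.mean(axis=0).round(4))
      print("posterior std pmf: ", draws.std(axis=0).round(4))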

  4. Quantum fluctuation theorems and generalized measurements during the force protocol

    NASA Astrophysics Data System (ADS)

    Watanabe, Gentaro; Venkatesh, B. Prasanna; Talkner, Peter; Campisi, Michele; Hänggi, Peter

    2014-03-01

    Generalized measurements of an observable performed on a quantum system during a force protocol are investigated and conditions that guarantee the validity of the Jarzynski equality and the Crooks relation are formulated. In agreement with previous studies by M. Campisi, P. Talkner, and P. Hänggi [Phys. Rev. Lett. 105, 140601 (2010), 10.1103/PhysRevLett.105.140601; Phys. Rev. E 83, 041114 (2011), 10.1103/PhysRevE.83.041114], we find that these fluctuation relations are satisfied for projective measurements; however, for generalized measurements special conditions on the operators determining the measurements need to be met. For the Jarzynski equality to hold, the measurement operators of the forward protocol must be normalized in a particular way. The Crooks relation additionally entails that the backward and forward measurement operators depend on each other. Yet, quite some freedom is left as to how the two sets of operators are interrelated. This ambiguity is removed if one considers selective measurements, which are specified by a joint probability density function of work and measurement results of the considered observable. We find that the respective forward and backward joint probabilities satisfy the Crooks relation only if the measurement operators of the forward and backward protocols are the time-reversed adjoints of each other. In this case, the work probability density function conditioned on the measurement result satisfies a modified Crooks relation. The modification appears as a protocol-dependent factor that can be expressed by the information gained by the measurements during the forward and backward protocols. Finally, detailed fluctuation theorems with an arbitrary number of intervening measurements are obtained.

  5. Switching probability of all-perpendicular spin valve nanopillars

    NASA Astrophysics Data System (ADS)

    Tzoufras, M.

    2018-05-01

    In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.
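
    A minimal sketch of the expansion step (an illustrative toy density, not the paper's dynamics): expand an axisymmetric PDF of mu = cos(theta) in Legendre polynomials and read the switching probability off as the integral over the reversed hemisphere (taken here, by assumption, as mu < 0):

      import numpy as np
      from numpy.polynomial import legendre as leg

      mu = np.linspace(-1.0, 1.0, 2001)
      dmu = mu[1] - mu[0]
      pdf = np.exp(10.0 * mu)    # toy density concentrated near mu = +1 ("not switched")
      pdf /= pdf.sum() * dmu     # normalize to a probability density

      coef = leg.legfit(mu, pdf, deg=24)   # Legendre expansion of the PDF
      anti = leg.legint(coef)              # antiderivative of the series
      p_switch = leg.legval(0.0, anti) - leg.legval(-1.0, anti)

      print(f"switching probability (mu < 0): {p_switch:.6f}")
      print(f"direct check: {pdf[mu < 0].sum() * dmu:.6f}")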

  6. Density- and wavefunction-normalized Cartesian spherical harmonics for l ≤ 20

    DOE PAGES

    Michael, J. Robert; Volkov, Anatoliy

    2015-03-01

    The widely used pseudoatom formalism in experimental X-ray charge-density studies makes use of real spherical harmonics when describing the angular component of aspherical deformations of the atomic electron density in molecules and crystals. The analytical form of the density-normalized Cartesian spherical harmonic functions for up to l ≤ 7 and the corresponding normalization coefficients were reported previously by Paturle & Coppens. It was shown that the analytical form for the normalization coefficients is available primarily for l ≤ 4. Only in very special cases is it possible to derive an analytical representation of the normalization coefficients for 4 < l ≤ 7. In most cases for l > 4 the density normalization coefficients were calculated numerically to within seven significant figures. In this study we review the literature on the density-normalized spherical harmonics, clarify the existing notations, use the Paturle–Coppens method in the Wolfram Mathematica software to derive the Cartesian spherical harmonics for l ≤ 20 and determine the density normalization coefficients to 35 significant figures, and computer-generate a Fortran90 code. The article primarily targets researchers who work in the field of experimental X-ray electron density, but may be of some use to all who are interested in Cartesian spherical harmonics.

  7. The Influence of Phonotactic Probability and Neighborhood Density on Children's Production of Newly Learned Words

    ERIC Educational Resources Information Center

    Heisler, Lori; Goffman, Lisa

    2016-01-01

    A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…

  8. Simple gain probability functions for large reflector antennas of JPL/NASA

    NASA Technical Reports Server (NTRS)

    Jamnejad, V.

    2003-01-01

    Simple models for the antenna patterns of the Deep Space Network, as well as their cumulative gain probability and probability density functions, are developed. These are needed for the study and evaluation of interference from unwanted sources, such as the emerging terrestrial High Density Fixed Service, with the Ka-band receiving antenna systems at the Goldstone station of the Deep Space Network.

  9. Description of atomic burials in compact globular proteins by Fermi-Dirac probability distributions.

    PubMed

    Gomes, Antonio L C; de Rezende, Júlia R; Pereira de Araújo, Antônio F; Shakhnovich, Eugene I

    2007-02-01

    We perform a statistical analysis of atomic distributions as a function of the distance R from the molecular geometrical center in a nonredundant set of compact globular proteins. The number of atoms increases quadratically for small R, indicating a constant average density inside the core, reaches a maximum at a size-dependent distance R_max, and falls rapidly for larger R. The empirical curves turn out to be consistent with the volume increase of spherical concentric solid shells and a Fermi-Dirac distribution in which the distance R plays the role of an effective atomic energy ε(R) = R. The effective chemical potential μ governing the distribution increases with the number of residues, reflecting the size of the protein globule, while the temperature parameter β decreases. Interestingly, βμ is not as strongly dependent on protein size and appears to be tuned to maintain approximately half of the atoms in the high-density interior and the other half in the exterior region of rapidly decreasing density. A normalized size-independent distribution was obtained for the atomic probability as a function of the reduced distance, r = R/R_g, where R_g is the radius of gyration. The global normalized Fermi distribution, F(r), can be reasonably decomposed into Fermi-like subdistributions for different atomic types τ, F_τ(r), with Σ_τ F_τ(r) = F(r), which depend on two additional parameters μ_τ and h_τ. The chemical potential μ_τ affects a scaling prefactor and depends on the overall frequency of the corresponding atomic type, while the maximum position of the subdistribution is determined by h_τ, which appears in a type-dependent atomic effective energy, ε_τ(r) = h_τ r, and is strongly correlated to available hydrophobicity scales. Better adjustments are obtained when the effective energy is not assumed to be necessarily linear, ε*_τ(r) = h*_τ r^{α_τ}, in which case a correlation with hydrophobicity scales is found for the product α_τ h*_τ. These results indicate that compact globular proteins are consistent with a thermodynamic system governed by hydrophobic-like energy functions, with reduced distances from the geometrical center reflecting atomic burials, and provide a conceptual framework for the eventual prediction from sequence of a few parameters from which whole atomic probability distributions and potentials of mean force can be reconstructed. Copyright 2006 Wiley-Liss, Inc.
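
    A minimal sketch of fitting the quadratic-times-Fermi form described above to shell-resolved atom counts, dN/dR ∝ R² / (exp(β(R − μ)) + 1), with synthetic counts standing in for a real protein:

      import numpy as np
      from scipy.optimize import curve_fit

      def shell_counts(R, A, beta, mu):
          return A * R**2 / (np.exp(beta * (R - mu)) + 1.0)

      R = np.linspace(0.5, 30.0, 60)     # distance from the geometrical center
      rng = np.random.default_rng(2)
      counts = rng.poisson(shell_counts(R, 12.0, 0.8, 15.0))   # noisy "observed" counts

      (A, beta, mu), _ = curve_fit(shell_counts, R, counts, p0=(10.0, 1.0, 10.0))
      print(f"beta = {beta:.2f} (temperature parameter), mu = {mu:.2f} (chemical potential)")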

  10. Multi-level biological responses in Ucides cordatus (Linnaeus, 1763) (Brachyura, Ucididae) as indicators of conservation status in mangrove areas from the western atlantic.

    PubMed

    Duarte, Luis Felipe de Almeida; Souza, Caroline Araújo de; Nobre, Caio Rodrigues; Pereira, Camilo Dias Seabra; Pinheiro, Marcelo Antonio Amaro

    2016-11-01

    There is a global lack of knowledge on tropical ecotoxicology, particularly in terms of mangrove areas. These areas often serve as nurseries or homes for several animal species, including Ucides cordatus (the uçá crab). This species is widely distributed, is part of the diet of human coastal communities, and is considered to be a sentinel species due to its sensitivity to toxic xenobiotics in natural environments. Sublethal damages to benthic populations reveal pre-pathological conditions, but discussions of the implications are scarce in the literature. In Brazil, the state of São Paulo offers an interesting scenario for ecotoxicology and population studies: it is easy to distinguish between mangroves that are well preserved and those which are significantly impacted by human activity. The objectives of this study were to provide the normal baseline values for the frequency of Micronucleated cells (MN‰) and for neutral red retention time (NRRT) in U. cordatus at pristine locations, as well to indicate the conservation status of different mangrove areas using a multi-level biological response approach in which these biomarkers and population indicators (condition factor and crab density) are applied in relation to environmental quality indicators (determined via information in the literature and solid waste volume). A mangrove area with no effects of impact (areas of reference or pristine areas) presented a mean value of MN‰<3 and NRRT>120min, values which were assumed as baseline values representing genetic and physiological normality. A significant correlation was found between NRRT and MN, with both showing similar and effective results for distinguishing between different mangrove areas according to conservation status. Furthermore, crab density was lower in more impacted mangrove areas, a finding which also reflects the effects of sublethal damage; this finding was not determined by condition factor measurements. Multi-level biological responses were able to reflect the conservation status of the mangrove areas studied using information on guideline values of MN‰, NRRT, and density of the uçá crab in order to categorize three levels of human impacts in mangrove areas: PNI (probable null impact); PLI (probable low impact); and PHI (probable high impact). Results confirm the success of U. cordatus species' multi-level biological responses in diagnosing threats to mangrove areas. Therefore, this species represents an effective tool in studies on mangrove conservation statuses in the Western Atlantic. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    PubMed

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal-mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: an exponential decline phase, a linear decline phase, an equilibrium phase, and a postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.

  12. Cyclic steps and superimposed antidune deposits: important elements of coarse-grained deepwater channel-levée complexes

    NASA Astrophysics Data System (ADS)

    Lang, Joerg; Brandes, Christian; Winsemann, Jutta

    2017-04-01

    The facies distribution and architecture of submarine fans can be strongly impacted by erosion and deposition by supercritical density flows. We present field examples from the Sandino Forearc Basin (southern Central America), where cyclic-step and antidune deposits represent important sedimentary facies of coarse-grained channel-levée complexes. These bedforms occur in all sub-environments of the depositional systems and relate to the different stages of avulsion, bypass, levée construction and channel backfilling. Large-scale scours (18 to 29 m deep, 18 to 25 m wide, 60 to >120 m long) with an amalgamated infill, comprising massive, normally coarse-tail graded or spaced subhorizontally stratified conglomerates and pebbly sandstones, are interpreted as deposits of the hydraulic-jump zone of cyclic steps. These cyclic steps probably formed during avulsion, when high-density flows were routed into the evolving channel. The large-scale scour fills can be distinguished from small-scale channel fills based on the preservation of a steep upper margin and a coarse-grained infill comprising mainly amalgamated hydraulic-jump deposits. Channel fills include repetitive successions deposited by cyclic steps with superimposed antidunes. The hydraulic-jump zone of cyclic-step deposits comprises regularly spaced scours (0.2 to 2.6 m deep, 0.8 to 23 m wide), which are infilled by intraclast-rich conglomerates or pebbly sandstones and display normal coarse-tail grading or backsets. Laterally and vertically these deposits are associated with subhorizontally stratified, low-angle cross-stratified or sinusoidal stratified pebbly sandstones and sandstones (wavelength 0.5 to 18 m), interpreted as representing antidune deposits formed on the stoss-side of the cyclic steps during flow re-acceleration. The field examples indicate that so-called crudely or spaced stratified deposits may commonly represent antidune deposits with varying stratification styles controlled by the aggradation rate, grain-size distribution and amalgamation. The deposits of small-scale cyclic steps with superimposed antidunes form fining upwards successions with decreasing antidune wavelengths. Such cyclic step-antidune successions are the characteristic basal infill of channels, probably related to supercritical high-density turbidity flows triggered by retrogressive slope failures.

  13. Normalization of T2W-MRI prostate images using Rician a priori

    NASA Astrophysics Data System (ADS)

    Lemaître, Guillaume; Rastgoo, Mojdeh; Massich, Joan; Vilanova, Joan C.; Walker, Paul M.; Freixenet, Jordi; Meyer-Baese, Anke; Mériaudeau, Fabrice; Martí, Robert

    2016-03-01

    Prostate cancer is reported to be the second most frequently diagnosed cancer in men worldwide. In practice, diagnosis can be affected by multiple factors which reduce the chance of detecting potential lesions. In recent decades, new imaging techniques mainly based on MRI have been developed in conjunction with Computer-Aided Diagnosis (CAD) systems to help radiologists with such diagnosis. CAD systems are usually designed as a sequential process consisting of four stages: pre-processing, segmentation, registration and classification. As a pre-processing step, image normalization is a critical and important stage of the chain in order to design a robust classifier and overcome inter-patient intensity variations. However, little attention has been dedicated to the normalization of T2W-Magnetic Resonance Imaging (MRI) prostate images. In this paper, we propose two methods to normalize T2W-MRI prostate images: (i) based on a Rician a priori and (ii) based on a Square-Root Slope Function (SRSF) representation which does not make any assumption regarding the Probability Density Function (PDF) of the data. A comparison with state-of-the-art methods is also provided. The normalization of the data is assessed by comparing the alignment of the patient PDFs in both qualitative and quantitative manners. In both evaluations, the normalization using the Rician a priori outperforms the other state-of-the-art methods.
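
    One plausible reading of a Rician-prior normalization (a sketch only, not the authors' exact procedure): fit a Rice distribution to the intensities, then map each intensity through the fitted CDF and a standard normal quantile so that all patients share a common intensity scale:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      # Synthetic stand-in for one patient's T2W intensities.
      intensities = stats.rice.rvs(b=2.5, scale=120.0, size=50000, random_state=rng)

      b, loc, scale = stats.rice.fit(intensities, floc=0.0)   # Rician MLE, loc fixed at 0
      u = stats.rice.cdf(intensities, b, loc=loc, scale=scale)
      u = np.clip(u, 1e-6, 1.0 - 1e-6)                        # guard the quantile below
      normalized = stats.norm.ppf(u)                          # ~N(0,1) if the fit is good

      print(round(normalized.mean(), 3), round(normalized.std(), 3))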

  14. Comparison of methods for estimating density of forest songbirds from point counts

    Treesearch

    Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey

    2011-01-01

    New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...

  15. Massive, wide binaries as tracers of massive star formation

    NASA Astrophysics Data System (ADS)

    Griffiths, Daniel W.; Goodwin, Simon P.; Caballero-Nieves, Saida M.

    2018-05-01

    Massive stars can be found in wide (hundreds to thousands au) binaries with other massive stars. We use N-body simulations to show that any bound cluster should always have approximately one massive wide binary: one will probably form if none are present initially, and probably only one will survive if more than one is present initially. Therefore, any region that contains many massive wide binaries must have been composed of many individual subregions. Observations of Cyg OB2 show that the massive wide binary fraction is at least a half (38/74), which suggests that Cyg OB2 had at least 30 distinct massive star formation sites. This is further evidence that Cyg OB2 has always been a large, low-density association. That Cyg OB2 has a normal high-mass initial mass function (IMF) for its total mass suggests that however massive stars form, they `randomly sample' the IMF (as the massive stars did not `know' about each other).

  16. Principle of maximum entropy for reliability analysis in the design of machine components

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.

  17. Spatial correlations and probability density function of the phase difference in a developed speckle-field: numerical and natural experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mysina, N Yu; Maksimova, L A; Ryabukho, V P

    Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments.

  18. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
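
    A minimal sketch of the two scores for the simplest case of a single-Gaussian predictive PDF (standard textbook formulas, not the DCMDN code; mixture PDFs can be scored by sampling): the CRPS has a closed form, and the PIT is the predictive CDF evaluated at the true value, with a flat PIT histogram indicating calibrated PDFs:

      import numpy as np
      from scipy.stats import norm

      def crps_gaussian(y, mu, sigma):
          z = (y - mu) / sigma
          return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                          + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

      rng = np.random.default_rng(4)
      z_true = rng.normal(0.5, 0.1, 1000)            # synthetic "true redshifts"
      mu = z_true + rng.normal(0.0, 0.05, 1000)      # predictive means
      sigma = np.full(1000, 0.05)                    # predictive widths

      print("mean CRPS:", round(crps_gaussian(z_true, mu, sigma).mean(), 4))
      pit = norm.cdf(z_true, loc=mu, scale=sigma)    # ~Uniform(0,1) if calibrated
      print("PIT deciles:", np.histogram(pit, bins=10, range=(0.0, 1.0))[0])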

  19. Spectral dimension controlling the decay of the quantum first-detection probability

    NASA Astrophysics Data System (ADS)

    Thiel, Felix; Kessler, David A.; Barkai, Eli

    2018-06-01

    We consider a quantum system that is initially localized at x_in and that is repeatedly projectively probed with a fixed period τ at position x_d. We ask for the probability F_n that the system is detected at x_d for the very first time, where n is the number of detection attempts. We relate the asymptotic decay and oscillations of F_n with the system's energy spectrum, which is assumed to be absolutely continuous. In particular, F_n is determined by the Hamiltonian's measurement spectral density of states (MSDOS) f(E) that is closely related to the density of energy states (DOS). We find that F_n decays like a power law whose exponent is determined by the power-law exponent d_S of f(E) around its singularities E*. Our findings are analogous to the classical first passage theory of random walks. In contrast to the classical case, the decay of F_n is accompanied by oscillations with frequencies that are determined by the singularities E*. This gives rise to critical detection periods τ_c at which the oscillations disappear. In the ordinary case d_S can be identified with the spectral dimension associated with the DOS. Furthermore, the singularities E* are the van Hove singularities of the DOS in this case. We find that the asymptotic statistics of F_n depend crucially on the initial and detection state and can be wildly different for out-of-the-ordinary states, which is in sharp contrast to the classical theory. The properties of the first-detection probabilities can alternatively be derived from the transition amplitudes. All our results are confirmed by numerical simulations of the tight-binding model, and of a free particle in continuous space both with a normal and with an anomalous dispersion relation. We provide explicit asymptotic formulas for the first-detection probability in these models.
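
    A minimal simulation sketch of the protocol (a tight-binding ring with illustrative size, sites and period; one standard way to compute F_n, not necessarily the authors' code): evolve for time τ, record the probability at the detector as F_n, then project the detector amplitude out and continue; the remaining norm is the survival probability:

      import numpy as np
      from scipy.linalg import expm

      Nsites, x_in, x_d, tau = 64, 0, 10, 0.25     # assumed parameters
      H = -(np.eye(Nsites, k=1) + np.eye(Nsites, k=-1))
      H[0, -1] = H[-1, 0] = -1.0                   # periodic boundary, hopping = 1
      U = expm(-1j * H * tau)                      # one-period evolution operator

      psi = np.zeros(Nsites, dtype=complex)
      psi[x_in] = 1.0
      F = []
      for n in range(2000):
          psi = U @ psi
          F.append(abs(psi[x_d]) ** 2)   # first-detection probability at attempt n+1
          psi[x_d] = 0.0                 # null outcome: project the amplitude out

      F = np.array(F)
      print("total detection probability:", round(F.sum(), 4))   # < 1 in general
      print("late-time F_n (power-law decay with oscillations):", F[-3:])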

  20. Body fat assessed from body density and estimated from skinfold thickness in normal children and children with cystic fibrosis.

    PubMed

    Johnston, J L; Leong, M S; Checkland, E G; Zuberbuhler, P C; Conger, P R; Quinney, H A

    1988-12-01

    Body density and skinfold thickness at four sites were measured in 140 normal boys, 168 normal girls, and 6 boys and 7 girls with cystic fibrosis, all aged 8-14 y. Prediction equations for the normal boys and girls for the estimation of body-fat content from skinfold measurements were derived from linear regression of body density vs the log of the sum of the skinfold thicknesses. The relationship between body density and the log of the sum of the skinfold measurements differed from normal for the boys and girls with cystic fibrosis because of their high body density, even though their large residual volume was corrected for. However, the sum of skinfold measurements in the children with cystic fibrosis did not differ from normal. Thus the body fat percentage of these children with cystic fibrosis was underestimated when calculated from body density and invalid when calculated from skinfold thickness.
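
    For reference, the standard route from the measured quantities to percent fat (the Siri conversion is a textbook formula; the regression coefficients below are illustrative placeholders, not the equations derived in this study):

      import numpy as np

      def siri_percent_fat(density_g_per_ml):
          """Siri (1961): %fat = 495 / density - 450."""
          return 495.0 / density_g_per_ml - 450.0

      def density_from_skinfolds(skinfold_sum_mm, a=1.1690, b=0.0788):
          """Linear in log10 of the skinfold sum: density = a - b*log10(sum).
          The coefficients a, b here are placeholders for the derived equations."""
          return a - b * np.log10(skinfold_sum_mm)

      d = density_from_skinfolds(40.0)
      print(f"density = {d:.4f} g/ml  ->  fat = {siri_percent_fat(d):.1f}%")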

  1. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    ERIC Educational Resources Information Center

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.

  2. Coincidence probability as a measure of the average phase-space density at freeze-out

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Zalewski, K.

    2006-02-01

    It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.

  3. Quantitative consensus of supervised learners for diffuse lung parenchymal HRCT patterns

    NASA Astrophysics Data System (ADS)

    Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2013-03-01

    Automated lung parenchymal classification usually relies on supervised learning of expert-chosen regions representative of the visually differentiable HRCT patterns specific to different pathologies (e.g. emphysema, ground glass, honeycombing, reticular and normal). Considering the elusiveness of a single most discriminating similarity measure, a plurality of weak learners can be combined to improve machine learnability. Though a number of quantitative combination strategies exist, their efficacy is data and domain dependent. In this paper, we investigate multiple (N=12) quantitative consensus approaches to combine the clusters obtained with multiple (n=33) probability density-based similarity measures. Our study shows that hypergraph-based meta-clustering and probabilistic clustering provide optimal expert-metric agreement.

  4. Optimal detection pinhole for lowering speckle noise while maintaining adequate optical sectioning in confocal reflectance microscopes

    PubMed Central

    Rajadhyaksha, Milind

    2012-01-01

    Coherent speckle influences the resulting image when narrow spectral line-width and single spatial mode illumination are used, though these are the same light-source properties that provide the best radiance-to-cost ratio. However, a suitable size of the detection pinhole can be chosen to maintain adequate optical sectioning while making the probability density of the speckle noise more normal and reducing its effect. The result is a qualitatively better image with improved contrast, which is easier to read. With theoretical statistics and experimental results, we show that the detection pinhole size is a fundamental parameter for designing imaging systems for use in turbid media. PMID:23224184

  5. Bayes estimation on parameters of the single-class classifier. [for remotely sensed crop data

    NASA Technical Reports Server (NTRS)

    Lin, G. C.; Minter, T. C.

    1976-01-01

    Normal procedures used for designing a Bayes classifier to classify wheat as the major crop of interest require not only training samples of wheat but also those of nonwheat. Therefore, ground truth must be available for the class of interest plus all confusion classes. The single-class Bayes classifier classifies data into the class of interest or the class 'other' but requires training samples only from the class of interest. This paper will present a procedure for Bayes estimation on the mean vector, covariance matrix, and a priori probability of the single-class classifier using labeled samples from the class of interest and unlabeled samples drawn from the mixture density function.
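
    A minimal sketch of the single-class idea on synthetic one-dimensional data (not the paper's estimator): fit the class-conditional density from labeled samples of the class of interest, estimate the overall mixture density from unlabeled samples, and threshold the resulting posterior:

      import numpy as np
      from scipy.stats import norm, gaussian_kde

      rng = np.random.default_rng(5)
      wheat = rng.normal(3.0, 0.6, 300)                        # labeled class of interest
      unlabeled = np.concatenate([rng.normal(3.0, 0.6, 400),   # hidden mixture of wheat
                                  rng.normal(0.0, 1.0, 600)])  # and "other"

      mu, sd = wheat.mean(), wheat.std(ddof=1)   # class-conditional density estimate
      prior = 0.4                                # assumed a priori probability of wheat
      mixture = gaussian_kde(unlabeled)          # density of the full mixture

      def posterior_wheat(x):
          return prior * norm.pdf(x, mu, sd) / mixture(x)

      print(posterior_wheat(np.array([3.1, -0.5])))   # high for wheat-like, low for "other"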

  6. The Hypercalciurias CAUSES, PARATHYROID FUNCTIONS, AND DIAGNOSTIC CRITERIA

    PubMed Central

    Pak, Charles Y. C.; Ohata, Masahiro; Lawrence, E. Clint; Snyder, W.

    1974-01-01

    The causes for the hypercalciuria and diagnostic criteria for the various forms of hypercalciuria were sought in 56 patients with hypercalcemia or nephrolithiasis (Ca stones), by a careful assessment of parathyroid function and calcium metabolism. A study protocol for the evaluation of hypercalciuria, based on a constant liquid synthetic diet, was developed. In 26 cases of primary hyperparathyroidism, characteristic features were: hypercalcemia, high urinary cyclic AMP (cAMP, 8.58±3.63 SD μmol/g creatinine; normal, 4.02±0.70 μmol/g creatinine), high immunoreactive serum parathyroid hormone (PTH), hypercalciuria, the urinary Ca exceeding absorbed Ca from intestinal tract (CaA), high fasting urinary Ca (0.2 mg/mg creatinine or greater), and low bone density by 125I photon absorption. The results suggest that hypercalciuria is partly secondary to an excessive skeletal resorption (resorptive hypercalciuria). The 22 cases with renal stones had normocalcemia, hypercalciuria, intestinal hyperabsorption of calcium, normal or low serum PTH and urinary cAMP, normal fasting urinary Ca, and normal bone density. Since their CaA exceeded urinary Ca, the hypercalciuria probably resulted from an intestinal hyperabsorption of Ca (absorptive hypercalciuria). The primacy of intestinal Ca hyperabsorption was confirmed by responses to Ca load and deprivation under a metabolic dietary regimen. During a Ca load of 1,700 mg/day, there was an exaggerated increase in the renal excretion of Ca and a suppression of cAMP excretion. The urinary Ca of 453±154 SD mg/day was significantly higher than the control group's 211±42 mg/day. The urinary cAMP of 2.26±0.56 μmol/g creatinine was significantly lower than in the control group. In contrast, when the intestinal absorption of calcium was limited by cellulose phosphate, the hypercalciuria was corrected and the suppressed renal excretion of cAMP returned towards normal. Two cases with renal stones had normocalcemia, hypercalciuria, and high urinary cAMP or serum PTH. Since CaA was less than urinary Ca, the hypercalciuria may have been secondary to an impaired renal tubular reabsorption of Ca (renal hypercalciuria). Six cases with renal stones had normal values of serum Ca, urinary Ca, urinary cAMP, and serum PTH (normocalciuric nephrolithiasis). Their CaA exceeded urinary Ca, and fasting urinary Ca and bone density were normal. The results support the proposed mechanisms for the hypercalciuria and provide reliable diagnostic criteria for the various forms of hypercalciuria. PMID:4367891

  7. Zealotry effects on opinion dynamics in the adaptive voter model

    NASA Astrophysics Data System (ADS)

    Klamser, Pascal P.; Wiedermann, Marc; Donges, Jonathan F.; Donner, Reik V.

    2017-11-01

    The adaptive voter model has been widely studied as a conceptual model for opinion formation processes on time-evolving social networks. Past studies on the effect of zealots, i.e., nodes aiming to spread their fixed opinion throughout the system, only considered the voter model on a static network. Here we extend the study of zealotry to the case of an adaptive network topology co-evolving with the state of the nodes and investigate opinion spreading induced by zealots depending on their initial density and connectedness. Numerical simulations reveal that below the fragmentation threshold a low density of zealots is sufficient to spread their opinion to the whole network. Beyond the transition point, zealots must exhibit an increased degree as compared to ordinary nodes for an efficient spreading of their opinion. We verify the numerical findings using a mean-field approximation of the model yielding a low-dimensional set of coupled ordinary differential equations. Our results imply that the spreading of the zealots' opinion in the adaptive voter model is strongly dependent on the link rewiring probability and the average degree of normal nodes in comparison with that of the zealots. In order to avoid a complete dominance of the zealots' opinion, there are two possible strategies for the remaining nodes: adjusting the probability of rewiring and/or the number of connections with other nodes, respectively.
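
    A minimal simulation sketch of this model class (illustrative parameters; a toy version, not the paper's implementation or its mean-field equations): an update either rewires the link away from a disagreeing neighbor or adopts that neighbor's opinion, and zealots never change opinion:

      import random

      random.seed(7)
      N, M, phi, steps = 200, 600, 0.3, 50_000    # nodes, edges, rewiring prob., updates
      zealots = set(range(10))                    # zealots hold opinion 1 forever

      opinion = {i: (1 if i in zealots else random.choice([0, 1])) for i in range(N)}
      nbrs = {i: set() for i in range(N)}
      while sum(len(s) for s in nbrs.values()) < 2 * M:   # random graph with M edges
          a, b = random.sample(range(N), 2)
          nbrs[a].add(b); nbrs[b].add(a)

      for _ in range(steps):
          i = random.randrange(N)
          disagree = [j for j in nbrs[i] if opinion[j] != opinion[i]]
          if not disagree:
              continue
          j = random.choice(disagree)
          if random.random() < phi:     # rewire: keep opinion, change topology
              candidates = [k for k in range(N)
                            if k != i and opinion[k] == opinion[i] and k not in nbrs[i]]
              if candidates:
                  k = random.choice(candidates)
                  nbrs[i].discard(j); nbrs[j].discard(i)
                  nbrs[i].add(k); nbrs[k].add(i)
          elif i not in zealots:        # adopt: zealots never switch
              opinion[i] = opinion[j]

      print("fraction holding the zealots' opinion:", sum(opinion.values()) / N)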

  8. Novel density-based and hierarchical density-based clustering algorithms for uncertain data.

    PubMed

    Zhang, Xianchao; Liu, Han; Zhang, Xiaotong

    2017-09-01

    Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
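
    A minimal sketch of one ingredient above (a Monte Carlo stand-in, not the paper's more accurate computation): the probability that the distance between two uncertain objects, each modeled as a Gaussian over positions, is less than or equal to a boundary value eps:

      import numpy as np

      rng = np.random.default_rng(9)

      def prob_dist_le(mu_a, cov_a, mu_b, cov_b, eps, n=100_000):
          """P(||A - B|| <= eps) for A ~ N(mu_a, cov_a), B ~ N(mu_b, cov_b)."""
          a = rng.multivariate_normal(mu_a, cov_a, n)
          b = rng.multivariate_normal(mu_b, cov_b, n)
          return float((np.linalg.norm(a - b, axis=1) <= eps).mean())

      p = prob_dist_le([0.0, 0.0], 0.2 * np.eye(2), [1.0, 1.0], 0.2 * np.eye(2), eps=1.5)
      print("P(dist <= eps) ~", round(p, 3))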

  9. A hybrid probabilistic/spectral model of scalar mixing

    NASA Astrophysics Data System (ADS)

    Vaithianathan, T.; Collins, Lance

    2002-11-01

    In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentration are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model in predicting even the closed chemical source terms rests heavily on the mixing model. We will present a new closure for the mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent "transfer" while scalar exchanges between particles represent "mixing." The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (or EDQNM) theory. The model correctly predicts the evolution of an initial double delta function PDF into a Gaussian, as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts that the scalar gradient distribution (which is available in this representation) approaches log-normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.

  10. In vitro quantification of the size distribution of intrasaccular voids left after endovascular coiling of cerebral aneurysms.

    PubMed

    Sadasivan, Chander; Brownstein, Jeremy; Patel, Bhumika; Dholakia, Ronak; Santore, Joseph; Al-Mufti, Fawaz; Puig, Enrique; Rakian, Audrey; Fernandez-Prada, Kenneth D; Elhammady, Mohamed S; Farhat, Hamad; Fiorella, David J; Woo, Henry H; Aziz-Sultan, Mohammad A; Lieber, Baruch B

    2013-03-01

    Endovascular coiling of cerebral aneurysms remains limited by coil compaction and associated recanalization. Recent coil designs which effect higher packing densities may be far from optimal because hemodynamic forces causing compaction are not well understood since detailed data regarding the location and distribution of coil masses are unavailable. We present an in vitro methodology to characterize coil masses deployed within aneurysms by quantifying intra-aneurysmal void spaces. Eight identical aneurysms were packed with coils by both balloon- and stent-assist techniques. The samples were embedded, sequentially sectioned and imaged. Empty spaces between the coils were numerically filled with circles (2D) in the planar images and with spheres (3D) in the three-dimensional composite images. The 2D and 3D void size histograms were analyzed for local variations and by fitting theoretical probability distribution functions. Balloon-assist packing densities (31±2%) were lower (p = 0.04) than the stent-assist group (40±7%). The maximum and average 2D and 3D void sizes were higher (p = 0.03 to 0.05) in the balloon-assist group as compared to the stent-assist group. None of the void size histograms were normally distributed; theoretical probability distribution fits suggest that the histograms are most probably exponentially distributed with decay constants of 6-10 mm. Significant (p ≤ 0.001 to p = 0.03) spatial trends were noted with the void sizes but correlation coefficients were generally low (absolute r ≤ 0.35). The methodology we present can provide valuable input data for numerical calculations of hemodynamic forces impinging on intra-aneurysmal coil masses and be used to compare and optimize coil configurations as well as coiling techniques.

  11. Identification of Stochastically Perturbed Autonomous Systems from Temporal Sequences of Probability Density Functions

    NASA Astrophysics Data System (ADS)

    Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing

    2018-03-01

    The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.

  12. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
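
    A common way to realize the two-step recipe in this record (spectral coloring of white Gaussian noise, then a pointwise transform to the target marginal) is sketched below; the Gaussian-shaped power spectrum and exponential target marginal are illustrative assumptions, not the paper's examples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 256

# Step 1: color white Gaussian noise with the desired power spectrum.
white = rng.standard_normal((n, n))
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
psd = np.exp(-(kx**2 + ky**2) / 0.005)      # assumed Gaussian-shaped PSD
colored = np.real(np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)))
colored /= colored.std()

# Step 2: map the colored Gaussian field to the target marginal via its
# CDF followed by the target inverse CDF (here: exponential amplitudes).
u = stats.norm.cdf(colored)
field = stats.expon.ppf(u)
print(field.shape, field.mean())
```

    Note that the pointwise transform slightly distorts the prescribed spectrum, which is consistent with the abstract's characterization of the method as an engineering approach.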

  13. Adiabatic elimination of inertia of the stochastic microswimmer driven by α -stable noise

    NASA Astrophysics Data System (ADS)

    Noetel, Joerg; Sokolov, Igor M.; Schimansky-Geier, Lutz

    2017-10-01

    We consider a microswimmer that moves in two dimensions at a constant speed and changes the direction of its motion due to a torque consisting of a constant and a fluctuating component. The latter will be modeled by a symmetric Lévy-stable (α -stable) noise. The purpose is to develop a kinetic approach to eliminate the angular component of the dynamics to find a coarse-grained description in the coordinate space. By defining the joint probability density function of the position and of the orientation of the particle through the Fokker-Planck equation, we derive transport equations for the position-dependent marginal density, the particle's mean velocity, and the velocity's variance. At time scales larger than the relaxation time of the torque τϕ, the two higher moments follow the marginal density and can be adiabatically eliminated. As a result, a closed equation for the marginal density follows. This equation, which gives a coarse-grained description of the microswimmer's positions at time scales t ≫τϕ , is a diffusion equation with a constant diffusion coefficient depending on the properties of the noise. Hence, the long-time dynamics of a microswimmer can be described as a normal, diffusive, Brownian motion with Gaussian increments.

  14. Adiabatic elimination of inertia of the stochastic microswimmer driven by α-stable noise.

    PubMed

    Noetel, Joerg; Sokolov, Igor M; Schimansky-Geier, Lutz

    2017-10-01

    We consider a microswimmer that moves in two dimensions at a constant speed and changes the direction of its motion due to a torque consisting of a constant and a fluctuating component. The latter will be modeled by a symmetric Lévy-stable (α-stable) noise. The purpose is to develop a kinetic approach to eliminate the angular component of the dynamics to find a coarse-grained description in the coordinate space. By defining the joint probability density function of the position and of the orientation of the particle through the Fokker-Planck equation, we derive transport equations for the position-dependent marginal density, the particle's mean velocity, and the velocity's variance. At time scales larger than the relaxation time of the torque τ_{ϕ}, the two higher moments follow the marginal density and can be adiabatically eliminated. As a result, a closed equation for the marginal density follows. This equation, which gives a coarse-grained description of the microswimmer's positions at time scales t≫τ_{ϕ}, is a diffusion equation with a constant diffusion coefficient depending on the properties of the noise. Hence, the long-time dynamics of a microswimmer can be described as a normal, diffusive, Brownian motion with Gaussian increments.
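
    To see the advertised long-time behaviour numerically, one can integrate the heading angle with symmetric α-stable increments and check that the mean-squared displacement grows linearly at late times. The parameter values in this sketch are arbitrary choices, not those of the paper.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(2)
alpha, dt, v0, omega = 1.5, 0.01, 1.0, 0.0   # stability index, step, speed, constant torque
n_steps, n_walkers = 5_000, 100

# Angular increments: constant torque plus symmetric alpha-stable noise;
# alpha-stable increments over dt scale like dt**(1/alpha).
dphi = omega * dt + levy_stable.rvs(alpha, 0.0, size=(n_steps, n_walkers),
                                    random_state=rng) * dt**(1.0 / alpha)
phi = np.cumsum(dphi, axis=0)
x = np.cumsum(v0 * np.cos(phi) * dt, axis=0)
y = np.cumsum(v0 * np.sin(phi) * dt, axis=0)

msd = (x**2 + y**2).mean(axis=1)
t = np.arange(1, n_steps + 1) * dt
# At times much longer than the angular relaxation time the MSD grows
# linearly, i.e. normal diffusion with an effective diffusion coefficient.
print("late-time MSD slope ~", (msd[-1] - msd[-2000]) / (t[-1] - t[-2000]))
```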

  15. Applying the log-normal distribution to target detection

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    1992-09-01

    Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychological data are plotted on a logarithmic scale. It has the additional advantage of being bounded to positive values, an important consideration since probability of detection is often plotted in linear coordinates. A review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.
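
    In practice a log-normal probability-of-detection curve is simply a normal CDF applied to the logarithm of the stimulus variable. The sketch below fits one to made-up response data, so the data values and variable names are purely illustrative.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical probability-of-detection data versus target contrast.
contrast = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
p_detect = np.array([0.02, 0.10, 0.35, 0.70, 0.93, 0.99])

def lognormal_pod(c, mu, sigma):
    # Log-normal detection curve: a normal CDF in log(contrast).
    return stats.norm.cdf((np.log(c) - mu) / sigma)

(mu, sigma), _ = optimize.curve_fit(lognormal_pod, contrast, p_detect,
                                    p0=(np.log(0.3), 1.0))
print(f"median threshold = {np.exp(mu):.3f}, log-slope = {sigma:.3f}")
```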

  16. A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain

    DTIC Science & Technology

    2015-05-18

    approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our... The project has two parts: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a
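
    The Kernel Density Method mentioned in this record is available off the shelf in SciPy; a minimal sketch with synthetic intensity-like samples (the data themselves are an assumption) might look like:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# Stand-in for measured intensity samples (log-normal-like fluctuations).
samples = rng.lognormal(mean=0.0, sigma=0.5, size=5_000)

kde = gaussian_kde(samples)        # Gaussian kernels, bandwidth by Scott's rule
grid = np.linspace(0.0, 5.0, 200)
pdf = kde(grid)                    # estimated probability density on the grid
print("integral over grid ~", pdf.sum() * (grid[1] - grid[0]))
```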

  17. From Fractal Trees to Deltaic Networks

    NASA Astrophysics Data System (ADS)

    Cazanacli, D.; Wolinsky, M. A.; Sylvester, Z.; Cantelli, A.; Paola, C.

    2013-12-01

    Geometric networks that capture many aspects of natural deltas can be constructed from simple concepts from graph theory and normal probability distributions. Fractal trees with symmetrical geometries are the result of replicating two simple geometric elements, line segments whose lengths decrease and bifurcation angles that are commonly held constant. Branches can also have a thickness, which in the case of natural distributary systems is the equivalent of channel width. In river- or wave-dominated natural deltas, the channel width is a function of discharge. When normal variations around the mean values for length, bifurcation angles, and discharge are applied, along with either pruning of 'clashing' branches or merging (equivalent to channel confluence), fractal trees start resembling natural deltaic networks, except that the resulting channels are unnaturally straight. Introducing a bifurcation probability yields fewer, naturally curved channels. If there is no bifurcation, the direction of each new segment depends on the direction of the previous segment upstream (correlated random walk) and, to a lesser extent, on a general direction of growth (directional bias). When bifurcation occurs, the resulting two directions also depend on the bifurcation angle and the discharge split proportions, with the dominant branch closely following the direction of the upstream parent channel. The bifurcation probability controls the channel density and, in conjunction with the variability of the directional angles, the overall curvature of the channels. The growth of the network is in effect associated with net delta progradation. The overall shape and shape evolution of the delta depend mainly on the average bifurcation angle and its variability, coupled with the degree of dominant direction dependency (bias). The proposed algorithm demonstrates how, based on only a few simple rules, a wide variety of channel networks resembling natural deltas can be replicated.
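
    A stripped-down analogue of the growth rule described here (a correlated random walk with occasional bifurcation and a directional bias) can be sketched as follows. The heading-update formula, angles, step lengths, tip cap, and probabilities are all heuristic assumptions for illustration, not the calibrated rules of the study.

```python
import numpy as np

rng = np.random.default_rng(4)
p_bif, step, bif_angle = 0.08, 1.0, np.radians(60)
persistence, bias_strength, bias_dir = 0.8, 0.1, 0.0   # growth toward +x

def grow(n_steps=60):
    """Grow one toy deltaic network as a list of line segments."""
    tips = [((0.0, 0.0), 0.0)]        # (position, heading) of active channel tips
    segments = []
    for _ in range(n_steps):
        new_tips = []
        for (x, y), theta in tips:
            if rng.random() < p_bif:  # bifurcate into two branches
                headings = (theta - bif_angle / 2, theta + bif_angle / 2)
            else:                     # correlated walk with directional bias
                wiggle = rng.normal(0.0, 0.3)
                headings = (persistence * theta + (1 - persistence) * wiggle
                            + bias_strength * (bias_dir - theta),)
            for h in headings:
                nx, ny = x + step * np.cos(h), y + step * np.sin(h)
                segments.append(((x, y), (nx, ny)))
                new_tips.append(((nx, ny), h))
        tips = new_tips[:64]          # crude cap standing in for branch pruning
    return segments

print(len(grow()), "channel segments grown")
```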

  18. Probabilistic images (PBIS): A concise image representation technique for multiple parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, L.C.; Yeh, S.H.; Chen, Z.

    1984-01-01

    Based on m parametric images (PIs) derived from a dynamic series (DS), each pixel of DS is regarded as an m-dimensional vector. Given one set of normal samples (pixels) N and another of abnormal samples A, probability density functions (pdfs) of both sets are estimated. Any unknown sample is classified into N or A by calculating the probability of its being in the abnormal set using Bayes' theorem. Instead of estimating the multivariate pdfs, a distance ratio transformation is introduced to map the m-dimensional sample space to one-dimensional Euclidean space. Consequently, the image that localizes the regional abnormalities is characterized by the probability of being abnormal. This leads to the new representation scheme of PBIs. A Tc-99m HIDA study for detecting intrahepatic lithiasis (IL) was chosen as an example of constructing a PBI from 3 parameters derived from DS, and such a PBI was compared with those 3 PIs, namely, retention ratio image (RRI), peak time image (TMAX) and excretion mean transit time image (EMTT). 32 normal subjects and 20 patients with proved IL were collected and analyzed. The resultant sensitivity and specificity of PBI were 97% and 98%, respectively. They were superior to those of any of the 3 PIs: RRI (94/97), TMAX (86/88) and EMTT (94/97). Furthermore, the contrast of PBI was much better than that of any other image. This new image formation technique, based on multiple parameters, shows functional abnormalities in a structural way. Its good contrast makes interpretation easy. This technique is powerful compared to the existing parametric image method.

  19. Derivation of a Multiparameter Gamma Model for Analyzing the Residence-Time Distribution Function for Nonideal Flow Systems as an Alternative to the Advection-Dispersion Equation

    DOE PAGES

    Embry, Irucka; Roland, Victor; Agbaje, Oluropo; ...

    2013-01-01

    A new residence-time distribution (RTD) function has been developed and applied to quantitative dye studies as an alternative to the traditional advection-dispersion equation (AdDE). The new method is based on a jointly combined four-parameter gamma probability density function (PDF). The gamma residence-time distribution (RTD) function and its first and second moments are derived from the individual two-parameter gamma distributions of randomly distributed variables, tracer travel distance, and linear velocity, which are based on their relationship with time. The gamma RTD function was used on a steady-state, nonideal system modeled as a plug-flow reactor (PFR) in the laboratory to validate the effectiveness of the model. The normalized forms of the gamma RTD and the advection-dispersion equation RTD were compared with the normalized tracer RTD. The normalized gamma RTD had a lower mean-absolute deviation (MAD) (0.16) than the normalized form of the advection-dispersion equation (0.26) when compared to the normalized tracer RTD. The gamma RTD function is tied back to the actual physical site due to its randomly distributed variables. The results validate using the gamma RTD as a suitable alternative to the advection-dispersion equation for quantitative tracer studies of nonideal flow systems.
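
    The jointly combined four-parameter form is specific to the paper, but comparing a fitted gamma RTD against tracer data follows a familiar pattern. The sketch below fits a standard two-parameter gamma PDF to synthetic residence times by moment matching (all values are assumed) and reports the mean absolute deviation used as the comparison metric above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Stand-in for a normalized tracer residence-time curve with noise.
times = np.linspace(0.1, 10.0, 60)
true_rtd = stats.gamma.pdf(times, a=3.0, scale=0.8)
observed = np.clip(true_rtd + rng.normal(0.0, 0.01, times.size), 0, None)

# Fit a gamma RTD by matching the first two moments of the observations.
dt = times[1] - times[0]
w = observed / (observed.sum() * dt)            # renormalize to a pdf
mean = (times * w * dt).sum()
var = ((times - mean) ** 2 * w * dt).sum()
a_hat, scale_hat = mean**2 / var, var / mean    # gamma moment estimators
fitted = stats.gamma.pdf(times, a=a_hat, scale=scale_hat)

mad = np.mean(np.abs(fitted - observed))
print(f"shape={a_hat:.2f}, scale={scale_hat:.2f}, MAD={mad:.4f}")
```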

  20. Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-07-01

    Although surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework in which the a priori information about the searched-for parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdfs. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. Posterior mean and covariance can also be efficiently derived. I show that the maximum posterior (MAP) can be obtained using a non-negative least-squares algorithm for the single truncated case or using the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is greatly reduced. Second, unlike the Bayesian MCMC-based approach, marginal pdfs, means, variances or covariances are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
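
    The MAP computations mentioned at the end map directly onto standard solvers: non-negative least squares for single truncation and bounded-variable least squares for double truncation. A minimal sketch on a made-up linear slip problem follows, assuming flat (untruncated) priors and unit data weights; the Green's function matrix, noise level, and bounds are random placeholders.

```python
import numpy as np
from scipy.optimize import nnls, lsq_linear

rng = np.random.default_rng(6)
G = rng.normal(size=(40, 12))          # placeholder Green's functions
slip_true = np.abs(rng.normal(1.0, 0.5, 12))
d = G @ slip_true + rng.normal(0.0, 0.05, 40)

# MAP with positivity only (single truncation): non-negative least squares.
slip_nnls, _ = nnls(G, d)

# MAP with positivity and an upper bound (double truncation):
# bounded-variable least squares.
res = lsq_linear(G, d, bounds=(0.0, 2.0))
print("NNLS MAP :", np.round(slip_nnls, 2))
print("BVLS MAP :", np.round(res.x, 2))
```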

  1. Density- and wavefunction-normalized Cartesian spherical harmonics for l ≤ 20.

    PubMed

    Michael, J Robert; Volkov, Anatoliy

    2015-03-01

    The widely used pseudoatom formalism [Stewart (1976). Acta Cryst. A32, 565-574; Hansen & Coppens (1978). Acta Cryst. A34, 909-921] in experimental X-ray charge-density studies makes use of real spherical harmonics when describing the angular component of aspherical deformations of the atomic electron density in molecules and crystals. The analytical form of the density-normalized Cartesian spherical harmonic functions for up to l ≤ 7 and the corresponding normalization coefficients were reported previously by Paturle & Coppens [Acta Cryst. (1988), A44, 6-7]. It was shown that the analytical form for the normalization coefficients is available primarily for l ≤ 4 [Hansen & Coppens, 1978; Paturle & Coppens, 1988; Coppens (1992). International Tables for Crystallography, Vol. B, Reciprocal space, 1st ed., edited by U. Shmueli, ch. 1.2. Dordrecht: Kluwer Academic Publishers; Coppens (1997). X-ray Charge Densities and Chemical Bonding. New York: Oxford University Press]. Only in very special cases is it possible to derive an analytical representation of the normalization coefficients for 4 < l ≤ 7 (Paturle & Coppens, 1988). In most cases for l > 4 the density normalization coefficients were calculated numerically to within seven significant figures. In this study we review the literature on the density-normalized spherical harmonics, clarify the existing notations, use the Paturle-Coppens (Paturle & Coppens, 1988) method in the Wolfram Mathematica software to derive the Cartesian spherical harmonics for l ≤ 20, determine the density normalization coefficients to 35 significant figures, and computer-generate Fortran90 code. The article primarily targets researchers who work in the field of experimental X-ray electron density, but may be of some use to all who are interested in Cartesian spherical harmonics.

  2. Space charge in nanostructure resonances

    NASA Astrophysics Data System (ADS)

    Price, Peter J.

    1996-10-01

    In quantum ballistic propagation of electrons through a variety of nanostructures, resonance in the energy-dependent transmission and reflection probabilities generically is associated with (1) a quasi-level with a decay lifetime, and (2) a bulge in electron density within the structure. It can be shown that, to a good approximation, a simple formula in all cases connects the density of states for the latter to the energy dependence of the phase angles of the eigenvalues of the S-matrix governing the propagation. For both the Lorentzian resonances (normal or inverted) and the Fano-type resonances, as a consequence of this eigenvalue formula, the space charge due to filled states over the energy range of a resonance is just equal (for each spin state) to one electron charge. The Coulomb interaction within this space charge is known to 'distort' the electrical characteristics of resonant nanostructures. In these systems, however, the exchange effect should effectively cancel the interaction between states with parallel spins, leaving only the anti-parallel spin contribution.

  3. Failure Maps for Rectangular 17-4PH Stainless Steel Sandwiched Foam Panels

    NASA Technical Reports Server (NTRS)

    Raj, S. V.; Ghosn, L. J.

    2007-01-01

    A new and innovative concept is proposed for designing lightweight fan blades for aircraft engines using commercially available 17-4PH precipitation hardened stainless steel. Rotating fan blades in aircraft engines experience a complex loading state consisting of combinations of centrifugal, distributed pressure and torsional loads. Theoretical plastic-collapse failure maps, showing plots of the foam relative density versus face sheet thickness, t, normalized by the fan blade span length, L, have been generated for rectangular 17-4PH sandwiched foam panels under these three loading modes, assuming three plastic-collapse failure modes. These maps show that the 17-4PH sandwiched foam panels can fail by yielding of the face sheets, yielding of the foam core, or wrinkling of the face sheets, depending on the foam relative density, the magnitude of t/L and the loading mode. The design envelope of a generic fan blade is superimposed on the maps to provide valuable insights into the probable failure modes in a sandwiched foam fan blade.

  4. Probability density and exceedance rate functions of locally Gaussian turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1989-01-01

    A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.

  5. Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?

    USGS Publications Warehouse

    Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.

    2005-01-01

    In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease more strongly increases variability in host abundance and, thus, the probability of quasi-extinction, than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. © 2005 by the Ecological Society of America.

  6. From overload to failure: what happens inside the myocyte.

    PubMed

    Harding, S E; Davia, K; Davies, C H; del Monte, F; Money-Kyrle, A R; Poole-Wilson, P A

    1998-08-01

    To determine whether there is a defect in the surviving muscle cells of the failing human heart, studies have been performed on individual myocytes isolated from normal and failing human myocardium. Myocytes from the failing ventricle contract and relax more slowly, and have a reduced contraction amplitude at physiological (but not low) stimulation frequencies. Slow relaxation is seen irrespective of the aetiology of the heart disease studied, and is more pronounced in myocytes from hypertrophied ventricles. Myocytes from hypertrophied ventricles are larger than normal, but the relaxation deficit is independent of cell size. Beta-adrenoceptor desensitization is evident in myocytes and it varies according to the severity of disease and with the age of the patient. Action potentials are longer in myocytes from failing human heart, probably because of an alteration in K+ current density. Many of the functional changes identified in failing human myocardium are seen at the level of the single cardiac myocyte, which implies that pharmacological or genetic manipulation of surviving cells is a logical therapeutic strategy.

  7. Topology in two dimensions. IV - CDM models with non-Gaussian initial conditions

    NASA Astrophysics Data System (ADS)

    Coles, Peter; Moscardini, Lauro; Plionis, Manolis; Lucchin, Francesco; Matarrese, Sabino; Messina, Antonio

    1993-02-01

    The results of N-body simulations with both Gaussian and non-Gaussian initial conditions are used here to generate projected galaxy catalogs with the same selection criteria as the Shane-Wirtanen counts of galaxies. The Euler-Poincare characteristic is used to compare the statistical nature of the projected galaxy clustering in these simulated data sets with that of the observed galaxy catalog. All the models produce a topology dominated by a meatball shift when normalized to the known small-scale clustering properties of galaxies. Models characterized by a positive skewness of the distribution of primordial density perturbations are inconsistent with the Lick data, suggesting problems in reconciling models based on cosmic textures with observations. Gaussian CDM models fit the distribution of cell counts only if they have a rather high normalization but possess too low a coherence length compared with the Lick counts. This suggests that a CDM model with extra large scale power would probably fit the available data.

  8. Quantum transport in new two-dimensional heterostructures: Thin films of topological insulators, phosphorene

    NASA Astrophysics Data System (ADS)

    Majidi, Leyla; Zare, Moslem; Asgari, Reza

    2018-06-01

    The unusual features of the charge and spin transport characteristics are investigated in new two-dimensional heterostructures. Intraband specular Andreev reflection is realized in a topological insulator thin-film normal/superconducting junction in the presence of a gate electric field. Perfect specular electron-hole conversion is shown for different excitation energy values in a wide experimentally available range of the electric field, and also for all angles of incidence when the excitation energy has a particular value. It is further demonstrated that the transmission probabilities of electrons incoming from different spin subbands to the monolayer phosphorene ferromagnetic/normal/ferromagnetic (F/N/F) hybrid structure vary differently with the angle of incidence, and that perfect transmission occurs at particular angles of incidence for different lengths of the N region and different alignments of the magnetization vectors.

  9. Pricing foreign equity option under stochastic volatility tempered stable Lévy processes

    NASA Astrophysics Data System (ADS)

    Gong, Xiaoli; Zhuang, Xintian

    2017-10-01

    Considering that financial asset returns exhibit leptokurtosis and asymmetry as well as clustering and heteroskedasticity effects, this paper substitutes the log-normal jumps in the Heston stochastic volatility model with the classical tempered stable (CTS) distribution and the normal tempered stable (NTS) distribution to construct stochastic volatility tempered stable Lévy process (TSSV) models. The TSSV model framework permits the infinite-activity jump behaviors of return dynamics and the time-varying volatility consistently observed in financial markets by subordinating the tempered stable process to the stochastic volatility process, capturing the leptokurtosis, fat-tailedness and asymmetry features of returns. By employing the analytical characteristic function and the fast Fourier transform (FFT) technique, the formula for the probability density function (PDF) of TSSV returns is derived, making an analytical formula for foreign equity option (FEO) pricing available. High-frequency financial returns data are employed to verify the effectiveness of the proposed models in reflecting the stylized facts of financial markets. Numerical analysis is performed to investigate the relationship between the corresponding parameters and the implied volatility of foreign equity options.
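
    The Fourier-inversion step that turns an analytical characteristic function into a PDF can be prototyped with simple quadrature before committing to an FFT grid. Here it is for a plain Gaussian characteristic function; the tempered stable models of the paper would substitute their own φ(u).

```python
import numpy as np

def pdf_from_cf(cf, x, u_max=50.0, n_u=4001):
    """Recover a pdf from its characteristic function by quadrature:
    f(x) = (1 / 2*pi) * integral of exp(-i*u*x) * cf(u) du."""
    u = np.linspace(-u_max, u_max, n_u)
    du = u[1] - u[0]
    vals = np.exp(-1j * np.outer(x, u)) * cf(u)
    return np.real(vals.sum(axis=1) * du) / (2 * np.pi)

gauss_cf = lambda u: np.exp(-0.5 * u**2)        # N(0,1) characteristic function
x = np.linspace(-4, 4, 9)
approx = pdf_from_cf(gauss_cf, x)
exact = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(approx - exact)))           # agreement near machine precision
```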

  10. [Characteristics of wheat powdery mildew growth along and across the longitudinal axis of a leaf under the action of exogenous zeatin].

    PubMed

    Riabchenko, A S; Avetisian, T V; Babosha, A V

    2009-01-01

    Scanning electron microscopy was used to investigate the regularities of the growth direction of infectious structures and colonies of the agent of powdery mildew of wheat, Erysiphe graminis f. sp. tritici. The growth of appressoria with normal morphology on wheat leaves occurs predominantly along the long axis of the cell. Most anomalous appressoria grow perpendicularly. Treatment with zeatin changes the ratio of the directions of growth of normal appressoria and hyphae of the colonies. The dependence of these parameters, and of the surface density of colonies, on the concentration of the phytohormone is monophasic. The hypothesis is suggested that the strategy of selecting the direction of growth of infectious structures on leaves with an anisotropic surface depends on the most probable position of the receptor cell and on the action of cytokinins on the redistribution of nutrients between the infected and noninfected cells of the host plant.

  11. Improving effectiveness of systematic conservation planning with density data.

    PubMed

    Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant

    2015-08-01

    Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts. © 2015, Society for Conservation Biology.

  12. Vector wind and vector wind shear models 0 to 27 km altitude for Cape Kennedy, Florida, and Vandenberg AFB, California

    NASA Technical Reports Server (NTRS)

    Smith, O. E.

    1976-01-01

    Techniques are presented for deriving several statistical wind models. The techniques are based on the properties of the multivariate normal probability distribution function. Assuming that the winds can be considered bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of the wind for the bivariate normal distribution. By further assuming that the winds at two altitudes are quadravariate normally distributed, the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional probability of wind component shears given a wind component is normally distributed. Examples of these and other properties of the multivariate normal probability distribution function as applied to wind data samples from Cape Kennedy, Florida, and Vandenberg AFB, California, are given. A technique to develop a synthetic vector wind profile model of interest for aerospace vehicle applications is presented.
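
    One of the listed properties, that zero-mean, equal-variance, uncorrelated bivariate normal components give a Rayleigh-distributed speed, is easy to check numerically; the component standard deviation below is an arbitrary choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sigma = 5.0                                   # assumed component std dev (m/s)
u = rng.normal(0.0, sigma, 100_000)           # zonal wind component
v = rng.normal(0.0, sigma, 100_000)           # meridional wind component
speed = np.hypot(u, v)

# Kolmogorov-Smirnov comparison against a Rayleigh(scale=sigma) law.
ks = stats.kstest(speed, stats.rayleigh(scale=sigma).cdf)
print(f"KS statistic = {ks.statistic:.4f}, p-value = {ks.pvalue:.3f}")
```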

  13. JDINAC: joint density-based non-parametric differential interaction network analysis and classification using high-dimensional sparse omics data.

    PubMed

    Ji, Jiadong; He, Di; Feng, Yang; He, Yong; Xue, Fuzhong; Xie, Lei

    2017-10-01

    A complex disease is usually driven by a number of genes interwoven into networks, rather than a single gene product. Network comparison or differential network analysis has become an important means of revealing the underlying mechanism of pathogenesis and identifying clinical biomarkers for disease classification. Most studies, however, are limited to network correlations that mainly capture the linear relationship among genes, or rely on the assumption of a parametric probability distribution of gene measurements. They are restrictive in real application. We propose a new Joint density based non-parametric Differential Interaction Network Analysis and Classification (JDINAC) method to identify differential interaction patterns of network activation between two groups. At the same time, JDINAC uses the network biomarkers to build a classification model. The novelty of JDINAC lies in its potential to capture non-linear relations between molecular interactions using high-dimensional sparse data as well as to adjust confounding factors, without the need of the assumption of a parametric probability distribution of gene measurements. Simulation studies demonstrate that JDINAC provides more accurate differential network estimation and lower classification error than that achieved by other state-of-the-art methods. We apply JDINAC to a Breast Invasive Carcinoma dataset, which includes 114 patients who have both tumor and matched normal samples. The hub genes and differential interaction patterns identified were consistent with existing experimental studies. Furthermore, JDINAC discriminated the tumor and normal sample with high accuracy by virtue of the identified biomarkers. JDINAC provides a general framework for feature selection and classification using high-dimensional sparse omics data. R scripts available at https://github.com/jijiadong/JDINAC. lxie@iscb.org. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  14. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  15. Continuous-time random-walk model for financial distributions

    NASA Astrophysics Data System (ADS)

    Masoliver, Jaume; Montero, Miquel; Weiss, George H.

    2003-02-01

    We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability density for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar-Deutsche mark futures exchange, finding good agreement between theory and the observed data.
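
    The two ingredient densities combine naturally in a direct simulation; the exponential pausing-time and Laplace jump densities below are illustrative choices, not the forms fitted to the futures data.

```python
import numpy as np

rng = np.random.default_rng(8)
n_walkers, horizon = 50_000, 10.0
lam, b = 2.0, 0.05     # pausing-time rate; Laplace jump scale (both assumed)

log_price = np.zeros(n_walkers)
t = np.zeros(n_walkers)
active = np.ones(n_walkers, dtype=bool)
while active.any():
    # Draw waiting times and jump sizes for walkers still inside the horizon.
    wait = rng.exponential(1.0 / lam, active.sum())
    jump = rng.laplace(0.0, b, active.sum())
    t[active] += wait
    still = t[active] <= horizon      # jumps past the horizon are discarded
    idx = np.flatnonzero(active)
    log_price[idx[still]] += jump[still]
    active[idx[~still]] = False

print("mean =", log_price.mean(), " excess kurtosis =",
      ((log_price - log_price.mean())**4).mean() / log_price.var()**2 - 3)
```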

  16. The Independent Effects of Phonotactic Probability and Neighbourhood Density on Lexical Acquisition by Preschool Children

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Lee, Su-Yeon

    2011-01-01

    The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighbourhood density, the number of phonologically similar words, in lexical acquisition. Two-word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability…

  17. Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers

    ERIC Educational Resources Information Center

    MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara

    2013-01-01

    Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…

  18. Monte Carlo method for computing density of states and quench probability of potential energy and enthalpy landscapes.

    PubMed

    Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth

    2007-05-21

    The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
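
    The self-consistent scheme with importance sampling and a multiplicative update factor is close in spirit to the Wang-Landau algorithm. As a concrete, deliberately simplified stand-in for the landscapes treated in the paper, the sketch below estimates the density of states of a 16-spin Ising ring; the system, flatness criterion, and all parameters are assumptions for illustration.

```python
import math
import random

rng = random.Random(9)
N = 16                                    # spins on a ring (stand-in system)
spins = [rng.choice((-1, 1)) for _ in range(N)]

def energy(s):
    return -sum(s[i] * s[(i + 1) % N] for i in range(N))

log_g = {}                                # running estimate of ln g(E)
hist = {}                                 # visit histogram for flatness check
f = 1.0                                   # ln of the multiplicative update factor
E = energy(spins)
while f > 1e-6:
    for _ in range(20_000):
        i = rng.randrange(N)
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
        E_new = E + dE
        # Accept with probability min(1, g(E)/g(E_new)) to flatten the
        # energy histogram; unseen levels default to ln g = 0.
        delta = log_g.get(E, 0.0) - log_g.get(E_new, 0.0)
        if delta >= 0 or rng.random() < math.exp(delta):
            spins[i] = -spins[i]
            E = E_new
        log_g[E] = log_g.get(E, 0.0) + f
        hist[E] = hist.get(E, 0) + 1
    if min(hist.values()) > 0.8 * sum(hist.values()) / len(hist):
        f /= 2.0                          # histogram flat enough: refine f
        hist = {}

g_min = min(log_g.values())
print({e: round(lg - g_min, 2) for e, lg in sorted(log_g.items())})
```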

  19. Understanding environmental DNA detection probabilities: A case study using a stream-dwelling char Salvelinus fontinalis

    USGS Publications Warehouse

    Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.

    2016-01-01

    Environmental DNA (eDNA) sampling has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that, using the protocols in this study, eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.

  20. Influence of distributed delays on the dynamics of a generalized immune system cancerous cells interactions model

    NASA Astrophysics Data System (ADS)

    Piotrowska, M. J.; Bodnar, M.

    2018-01-01

    We present a generalisation of the mathematical models describing the interactions between the immune system and tumour cells that takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system's reaction to the presence of tumour cells; we only postulate its general properties. We analyse basic mathematical properties of the considered model, such as the existence and uniqueness of solutions. Next, we discuss the existence of stationary solutions and analytically investigate their stability depending on the form of the considered probability densities, namely Erlang, triangular and uniform probability densities, either separated or not separated from zero. Particular instability results are obtained for a general class of probability densities. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to experimental data for the mice B-cell lymphoma, showing mean square errors at the same comparable level. For the estimated sets of parameters we discuss the possibility of stabilising the tumour dormant steady state. Instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.
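
    The linear chain trick mentioned at the end replaces an Erlang-distributed delay with a chain of intermediate ODE compartments. The toy system below (a single growing variable with Erlang-delayed negative feedback; all rates and the feedback form are assumptions) shows the mechanics.

```python
import numpy as np
from scipy.integrate import solve_ivp

k, a = 4, 2.0          # Erlang shape and rate of the delay kernel (assumed)
r, c = 1.0, 1.5        # toy growth and delayed-feedback strengths (assumed)

def rhs(t, z):
    """x' = x*(r - c*y_k); the chain y_1..y_k carries the Erlang-delayed x."""
    x, y = z[0], z[1:]
    dy = np.empty(k)
    dy[0] = a * (x - y[0])
    for j in range(1, k):
        dy[j] = a * (y[j - 1] - y[j])   # y[k-1] approximates the delayed x
    dx = x * (r - c * y[-1])
    return np.concatenate(([dx], dy))

z0 = np.array([0.1] + [0.1] * k)
sol = solve_ivp(rhs, (0.0, 40.0), z0, max_step=0.05)
print("final state:", np.round(sol.y[:, -1], 3))
```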

  1. MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.

    PubMed

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-21

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  2. MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes

    NASA Astrophysics Data System (ADS)

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-01

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  3. Competition between harvester ants and rodents in the cold desert

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landeen, D.S.; Jorgensen, C.D.; Smith, H.D.

    1979-09-30

    Local distribution patterns of three rodent species (Perognathus parvus, Peromyscus maniculatus, Reithrodontomys megalotis) were studied in areas of high and low densities of harvester ants (Pogonomyrmex owyheei) in Raft River Valley, Idaho. Numbers of rodents were greatest in areas of high ant-density during May, but partially reduced in August, whereas the trend was reversed in areas of low ant-density. Seed abundance was probably not the factor limiting changes in rodent populations, because seed densities of annual plants were always greater in areas of high ant-density. Differences in seasonal population distributions of rodents between areas of high and low ant-densities were probably due to interactions of seed availability, rodent energetics, and predation.

  4. Joint constraints on galaxy bias and σ{sub 8} through the N-pdf of the galaxy number density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnalte-Mur, Pablo; Martínez, Vicent J.; Vielva, Patricio

    We present a full description of the N-probability density function of the galaxy number density fluctuations. This N-pdf is given in terms, on the one hand, of the cold dark matter correlations and, on the other hand, of the galaxy bias parameter. The method relies on the commonly adopted assumption that the dark matter density fluctuations follow a local non-linear transformation of the initial energy density perturbations. The N-pdf of the galaxy number density fluctuations allows for an optimal estimation of the bias parameter (e.g., via maximum-likelihood estimation, or Bayesian inference if there exists any a priori information on the bias parameter), and of those parameters defining the dark matter correlations, in particular its amplitude (σ{sub 8}). It also provides the proper framework to perform model selection between two competing hypotheses. The parameter estimation capabilities of the N-pdf are demonstrated using SDSS-like simulations (both ideal log-normal simulations and mocks obtained from the Las Damas simulations), showing that our estimator is unbiased. We apply our formalism to the 7th release of the SDSS main sample (for a volume-limited subset with absolute magnitudes M{sub r} ≤ −20). We obtain b̂ = 1.193 ± 0.074 and σ̄{sub 8} = 0.862 ± 0.080, for galaxy number density fluctuations in cells of the size of 30h{sup −1}Mpc. Different model selection criteria show that galaxy biasing is clearly favoured.

  5. Evaluating Approaches to Rendering Braille Text on a High-Density Pin Display.

    PubMed

    Morash, Valerie S; Russomanno, Alexander; Gillespie, R Brent; O'Modhrain, Sile

    2017-10-13

    Refreshable displays for tactile graphics are typically composed of pins that have smaller diameters and spacing than standard braille dots. We investigated configurations of high-density pins to form braille text on such displays using non-refreshable stimuli produced with a 3D printer. Normal dot braille (diameter 1.5 mm) was compared to high-density dot braille (diameter 0.75 mm) wherein each normal dot was rendered by high-density simulated pins alone or in a cluster of pins configured in a diamond, X, or square; and to "blobs" that could result from covering normal braille and high-density multi-pin configurations with a thin membrane. Twelve blind participants read MNREAD sentences displayed in these conditions. For high-density simulated pins, single pins were as quickly and easily read as normal braille, but diamond, X, and square multi-pin configurations were slower and/or harder to read than normal braille. We therefore conclude that as long as center-to-center dot spacing and dot placement is maintained, the dot diameter may be open to variability for rendering braille on a high density tactile display.

  6. Time-evolution of uniform momentum zones in a turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Laskari, Angeliki; Hearst, R. Jason; de Kat, Roeland; Ganapathisubramani, Bharathram

    2016-11-01

    Time-resolved planar particle image velocimetry (PIV) is used to analyse the organisation and evolution of uniform momentum zones (UMZs) in a turbulent boundary layer. Experiments were performed in a recirculating water tunnel on a streamwise-wall-normal plane extending approximately 0.5δ × 1.8δ in x and y, respectively. In total 400,000 images were captured, and for each of the resulting velocity fields local peaks in the probability density distribution of the streamwise velocity were detected, indicating the instantaneous presence of UMZs throughout the boundary layer. The main characteristics of these zones are outlined, more specifically their velocity range and wall-normal extent. The variation of these characteristics with wall-normal distance and total number of zones is also discussed. Exploiting the time information available, time-scales of zones that have a substantial coherence in time are analysed, and results show that the zones' lifetime depends on both their momentum deficit level and the total number of zones present. Conditional averaging of the flow statistics further indicates that a large number of zones is the result of a wall-dominant mechanism, while the opposite implies an outer-layer dominance.

  7. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE PAGES

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...

    2017-08-25

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  8. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  9. Statistics of partially-polarized fields: beyond the Stokes vector and coherence matrix

    NASA Astrophysics Data System (ADS)

    Charnotskii, Mikhail

    2017-08-01

    Traditionally, partially-polarized light is characterized by the four Stokes parameters. An equivalent description is provided by the correlation tensor of the optical field. These statistics specify only the second moments of the complex amplitudes of the narrow-band two-dimensional electric field of the optical wave. The electric field vector of a random quasi-monochromatic wave is a nonstationary, oscillating two-dimensional real random variable. We introduce a novel statistical description of these partially polarized waves: the Period-Averaged Probability Density Function (PA-PDF) of the field. The PA-PDF contains more information on the polarization state of the field than the Stokes vector. In particular, in addition to the conventional distinction between the polarized and depolarized components of the field, the PA-PDF allows the coherent and fluctuating components of the field to be separated. We present several model examples of fields with identical Stokes vectors and very distinct shapes of the PA-PDF. In the simplest case of a nonstationary, oscillating normal 2-D probability distribution of the real electric field and a stationary 4-D probability distribution of the complex amplitudes, the newly-introduced PA-PDF is determined by 13 parameters that include the first moments and covariance matrix of the quadrature components of the oscillating vector field.
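
    The construction lends itself to a direct Monte Carlo estimate. The sketch below, with invented quadrature statistics, samples a stationary 4-D normal for the quadratures and a uniform phase over one period, then histograms the instantaneous real field; it illustrates the definition, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)
mean = np.array([1.0, 0.0, 0.0, 0.5])        # means of (ax, bx, ay, by), assumed
cov = np.diag([0.2, 0.2, 0.05, 0.05])        # any valid 4x4 covariance works

n = 200_000
q = rng.multivariate_normal(mean, cov, size=n)   # stationary quadratures
phase = rng.uniform(0, 2 * np.pi, size=n)        # uniform phase over one period
ex = q[:, 0] * np.cos(phase) + q[:, 1] * np.sin(phase)
ey = q[:, 2] * np.cos(phase) + q[:, 3] * np.sin(phase)

pa_pdf, xedges, yedges = np.histogram2d(ex, ey, bins=80, density=True)
# pa_pdf approximates the PA-PDF of the real field (Ex, Ey); two fields with
# identical Stokes vectors can yield visibly different pa_pdf arrays.
```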

  10. Redundancy and reduction: Speakers manage syntactic information density

    PubMed Central

    Jaeger, T. Florian

    2010-01-01

    A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141
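
    A toy version of the kind of analysis described: a plain (single-level) logistic regression of that-mentioning on onset surprisal, fitted to simulated data. The paper's model is a multilevel logit over corpus data; everything below is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(14)
p_onset = rng.uniform(0.01, 0.9, 500)     # contextual probability of the onset
surprisal = -np.log2(p_onset)             # information carried by the onset

# Simulate UID-like behavior: more information -> more "that"-mention
logit = -1.5 + 0.6 * surprisal
that_mentioned = rng.uniform(size=500) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(surprisal[:, None], that_mentioned)
print("estimated surprisal coefficient:", model.coef_[0][0])
```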

  11. The difference between two random mixed quantum states: exact and asymptotic spectral analysis

    NASA Astrophysics Data System (ADS)

    Mejía, José; Zapata, Camilo; Botero, Alonso

    2017-01-01

    We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
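
    The ensemble is easy to sample numerically, which gives a check on such analytic results. A sketch with arbitrary dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, trials = 8, 8, 2000          # system dimension n, environment dimension m

def random_density_matrix(n, m):
    # Partial trace over the environment of a Haar-random pure state:
    # rho = G G^dagger / Tr(G G^dagger), with G an n x m complex Ginibre matrix.
    g = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

eigs = []
for _ in range(trials):
    delta = random_density_matrix(n, m) - random_density_matrix(n, m)
    eigs.append(np.linalg.eigvalsh(delta))   # Hermitian difference matrix
eigs = np.concatenate(eigs)
hist, edges = np.histogram(eigs, bins=100, density=True)  # empirical AED
```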

  12. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tshawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
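
    The paper's implementation is in R (see its appendix); the following Python sketch conveys the same workflow on synthetic data: maximum-likelihood fits of several candidate densities followed by AIC-based selection.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
depths = rng.gamma(shape=4.0, scale=0.3, size=300)   # fake depth-use data (m)

candidates = {
    "gamma":   stats.gamma,
    "lognorm": stats.lognorm,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(depths, floc=0)        # MLE with location fixed at zero
    k = len(params) - 1                      # loc was fixed, not estimated
    loglik = dist.logpdf(depths, *params).sum()
    aic = 2 * k - 2 * loglik                 # smaller AIC = preferred candidate
    print(f"{name:8s}  AIC = {aic:8.2f}")
```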

  13. Neighbor-Dependent Ramachandran Probability Distributions of Amino Acids Developed from a Hierarchical Dirichlet Process Model

    PubMed Central

    Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.

    2010-01-01

    Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867

  14. Biochemical correlates in an animal model of depression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, J.O.

    1986-01-01

    A valid animal model of depression was used to explore specific adrenergic receptor differences between rats exhibiting aberrant behavior and control groups. Preliminary experiments revealed a distinct upregulation of hippocampal beta-receptors (as compared to other brain regions) in those animals acquiring a response deficit as a result of exposure to inescapable footshock. Concurrent studies using standard receptor binding techniques showed no large changes in alpha-adrenergic, serotonergic, or dopaminergic receptor densities. This led to the hypothesis that the hippocampal beta-receptor in response-deficient animals could be correlated with the behavioral changes seen after exposure to the aversive stimulus. Normalization of the behavior through the administration of antidepressants could be expected to reverse the biochemical changes if these are related to the mechanism of action of antidepressant drugs. This study makes three important points: (1) there is a relevant biochemical change in the hippocampus of response-deficient rats which occurs in parallel to a well-defined behavior, (2) the biochemical and behavioral changes are normalized by antidepressant treatments exhibiting both serotonergic and adrenergic mechanisms of action, and (3) the mode of action of antidepressants in this model is probably a combination of serotonergic and adrenergic influences modulating the hippocampal beta-receptor. These results are discussed in relation to anatomical and biochemical aspects of antidepressant action.

  15. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Angraini, Lily Maysari; Suparmi; Variani, Viska Inda

    2010-12-01

    SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented in terms of lowering and raising operators. Lowering and raising operators can be obtained using the relationship between the original Hamiltonian and the (super)potential. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the modified Poschl-Teller potential. Graphs of the wave function and probability density are simulated using the Delphi 7.0 programming language. Finally, expectation values of quantum-mechanical operators can be calculated analytically in integral form or from the probability density graphs produced by the program.
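
    As a numerical cross-check on such analytic results, one can diagonalize the Hamiltonian directly. The sketch below uses finite differences (units ħ = 2m = 1 and an assumed well depth), not the paper's SUSY construction or Delphi code.

```python
import numpy as np

x = np.linspace(-10, 10, 1000)
dx = x[1] - x[0]
lam, alpha = 4.0, 1.0
V = -lam * (lam + 1) * alpha**2 / np.cosh(alpha * x) ** 2   # assumed well depth

# Tridiagonal Hamiltonian: -d^2/dx^2 + V via central differences
H = (np.diag(2.0 / dx**2 + V)
     + np.diag(-np.ones(len(x) - 1) / dx**2, 1)
     + np.diag(-np.ones(len(x) - 1) / dx**2, -1))
E, psi = np.linalg.eigh(H)
ground = psi[:, 0] / np.sqrt(dx)      # normalize so that sum(|psi|^2) * dx = 1
prob_density = ground ** 2            # probability density |psi(x)|^2
print("lowest energies:", np.round(E[:3], 3))  # expect about -(lam - n)^2
```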

  16. Diffusion of finite-sized hard-core interacting particles in a one-dimensional box: Tagged particle dynamics.

    PubMed

    Lizana, L; Ambjörnsson, T

    2009-11-01

    We solve a nonequilibrium statistical-mechanics problem exactly, namely, the single-file dynamics of N hard-core interacting particles (the particles cannot pass each other) of size Δ diffusing in a one-dimensional system of finite length L with reflecting boundaries at the ends. We obtain an exact expression for the conditional probability density function ρ_T(y_T, t | y_T,0) that a tagged particle T (T = 1, ..., N) is at position y_T at time t, given that it was at position y_T,0 at time t = 0. Using a Bethe ansatz we obtain the N-particle probability density function and, by integrating out the coordinates (and averaging over initial positions) of all particles but particle T, we arrive at an exact expression for ρ_T(y_T, t | y_T,0) in terms of Jacobi polynomials or hypergeometric functions. Going beyond previous studies, we consider the asymptotic limit of large N, maintaining L finite, using a nonstandard asymptotic technique. We derive an exact expression for ρ_T(y_T, t | y_T,0) for a tagged particle located roughly in the middle of the system, from which we find that there are three time regimes of interest for finite-sized systems: (A) for times much smaller than the collision time, t ≪ τ_coll, the tagged particle diffuses freely; (B) for times much larger than the collision time but smaller than the equilibrium time, τ_coll ≪ t ≪ τ_e, the tagged particle shows subdiffusive single-file behavior; and (C) for times larger than the equilibrium time, t ≫ τ_e, ρ_T(y_T, t | y_T,0) approaches a polynomial-type equilibrium probability density function. Notably, only regimes (A) and (B) are found in the previously considered infinite systems.
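
    For point-like particles (Δ → 0) the dynamics can be simulated with the classic sorting trick: evolve independent reflected walkers and re-sort, so the tagged particle is an order statistic. A sketch, with parameters chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
N, L, D, dt, steps, runs = 51, 1.0, 1e-3, 1e-4, 1000, 200
tag = N // 2                                    # tagged particle mid-system

msd = np.zeros(steps)
for _ in range(runs):
    x = np.sort(rng.uniform(0, L, N))
    y0 = x[tag]
    for t in range(steps):
        x += rng.normal(0.0, np.sqrt(2 * D * dt), N)
        x = np.where(x < 0, -x, x)              # reflect at x = 0
        x = np.where(x > L, 2 * L - x, x)       # reflect at x = L
        x.sort()                                # restore hard-core ordering
        msd[t] += (x[tag] - y0) ** 2
msd /= runs
# With long enough runs the three regimes appear: free diffusion, the
# t^(1/2) single-file scaling, and saturation once equilibrium is reached.
```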

  17. Density estimates of monarch butterflies overwintering in central Mexico

    PubMed Central

    Diffendorfer, Jay E.; López-Hoffman, Laura; Oberhauser, Karen; Pleasants, John; Semmens, Brice X.; Semmens, Darius; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9 to 60.9 million ha⁻¹. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha⁻¹ (95% CI [2.4–80.7] million ha⁻¹); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha⁻¹). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems are needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations. PMID:28462031
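
    A much-simplified sketch of the fitting idea: treat a set of published point estimates as a log-normal sample and report the median, mean, and a 95% range. Only the 6.9 and 60.9 endpoints below come from the abstract; the interior values are placeholders, so the output will not reproduce the paper's numbers.

```python
import numpy as np

est = np.array([6.9, 10.3, 21.1, 28.1, 44.5, 60.9])   # million ha^-1; interior
# values are invented placeholders, only the endpoints appear in the abstract
mu, sigma = np.log(est).mean(), np.log(est).std(ddof=1)
median = np.exp(mu)                        # median of the fitted log-normal
mean = np.exp(mu + sigma**2 / 2)           # mean exceeds median (right skew)
lo, hi = np.exp(mu + np.array([-1.96, 1.96]) * sigma)  # ~95% range
print(f"median={median:.1f}M  mean={mean:.1f}M  95% range=({lo:.1f}, {hi:.1f})M")
```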

  18. Density estimates of monarch butterflies overwintering in central Mexico

    USGS Publications Warehouse

    Thogmartin, Wayne E.; Diffendorfer, James E.; Lopez-Hoffman, Laura; Oberhauser, Karen; Pleasants, John M.; Semmens, Brice X.; Semmens, Darius J.; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9 to 60.9 million ha⁻¹. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha⁻¹ (95% CI [2.4–80.7] million ha⁻¹); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha⁻¹). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems are needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.

  19. The Adaptation of the Moth Pheromone Receptor Neuron to its Natural Stimulus

    NASA Astrophysics Data System (ADS)

    Kostal, Lubomir; Lansky, Petr; Rospars, Jean-Pierre

    2008-07-01

    We analyze the first phase of information transduction in the model of the olfactory receptor neuron of the male moth Antheraea polyphemus. We predict the stimulus characteristics that enable the system to perform optimally, i.e., to transfer as much information as possible. Few a priori constraints on the nature of the stimulus and the stimulus-to-signal transduction are assumed. The results are given in terms of stimulus distributions and intermittency factors, which makes direct comparison with experimental data possible. The optimal stimulus is approximately described by an exponential or log-normal probability density function, which is in agreement with experiment, and the predicted intermittency factors fall within the lowest range of observed values. The results are discussed with respect to electroantennogram measurements and behavioral observations.

  20. Improving chemical species tomography of turbulent flows using covariance estimation.

    PubMed

    Grauer, Samuel J; Hadwin, Paul J; Daun, Kyle J

    2017-05-01

    Chemical species tomography (CST) experiments can be divided into limited-data and full-rank cases. Both require solving ill-posed inverse problems, and thus the measurement data must be supplemented with prior information to carry out reconstructions. The Bayesian framework formalizes the role of additive information, expressed as the mean and covariance of a joint-normal prior probability density function. We present techniques for estimating the spatial covariance of a flow under limited-data and full-rank conditions. Our results show that incorporating a covariance estimate into CST reconstruction via a Bayesian prior increases the accuracy of instantaneous estimates. Improvements are especially dramatic in real-time limited-data CST, which is directly applicable to many industrially relevant experiments.
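
    The Bayesian update described here has a closed form in the linear-Gaussian case. The sketch below reconstructs a discretized field from a few ray integrals using an assumed squared-exponential prior covariance; the matrices are stand-ins, not the paper's experimental geometry.

```python
import numpy as np

rng = np.random.default_rng(6)
n_pix, n_rays = 100, 20                   # limited-data case: n_rays << n_pix
A = rng.uniform(0, 1, (n_rays, n_pix))    # path-length matrix (placeholder)
x_true = rng.gamma(2.0, 1.0, n_pix)
noise_var = 0.01
b = A @ x_true + rng.normal(0, np.sqrt(noise_var), n_rays)

mu = np.full(n_pix, x_true.mean())        # prior mean
# Squared-exponential spatial covariance: nearby pixels are correlated
idx = np.arange(n_pix)
Gamma = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * 5.0**2))

# Posterior mean of the joint-normal model b = A x + e
S = A @ Gamma @ A.T + noise_var * np.eye(n_rays)
x_post = mu + Gamma @ A.T @ np.linalg.solve(S, b - A @ mu)
```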

  1. Monte Carlo simulation of wave sensing with a short pulse radar

    NASA Technical Reports Server (NTRS)

    Levine, D. M.; Davisson, L. D.; Kutz, R. L.

    1977-01-01

    A Monte Carlo simulation is used to study the ocean wave sensing potential of a radar which scatters short pulses at small off-nadir angles. In the simulation, realizations of a random surface are created commensurate with an assigned probability density and power spectrum. Then the signal scattered back to the radar is computed for each realization using a physical optics analysis which takes wavefront curvature and finite radar-to-surface distance into account. In the case of a Pierson-Moskowitz spectrum and a normally distributed surface, reasonable assumptions for a fully developed sea, it has been found that the cumulative distribution of time intervals between peaks in the scattered power provides a measure of surface roughness. This observation is supported by experiments.

  2. Deep brain stimulation abolishes slowing of reactions to unlikely stimuli.

    PubMed

    Antoniades, Chrystalina A; Bogacz, Rafal; Kennard, Christopher; FitzGerald, James J; Aziz, Tipu; Green, Alexander L

    2014-08-13

    The cortico-basal-ganglia circuit plays a critical role in decision making on the basis of probabilistic information. Computational models have suggested how this circuit could compute the probabilities of actions being appropriate according to Bayes' theorem. These models predict that the subthalamic nucleus (STN) provides feedback that normalizes the neural representation of probabilities, such that if the probability of one action increases, the probabilities of all other available actions decrease. Here we report the results of an experiment testing a prediction of this theory that disrupting information processing in the STN with deep brain stimulation should abolish the normalization of the neural representation of probabilities. In our experiment, we asked patients with Parkinson's disease to saccade to a target that could appear in one of two locations, and the probability of the target appearing in each location was periodically changed. When the stimulator was switched off, the target probability affected the reaction times (RT) of patients in a similar way to healthy participants. Specifically, the RTs were shorter for more probable targets and, importantly, they were longer for the unlikely targets. When the stimulator was switched on, the patients were still faster for more probable targets, but critically they did not increase RTs as the target was becoming less likely. This pattern of results is consistent with the prediction of the model that the patients on DBS no longer normalized their neural representation of prior probabilities. We discuss alternative explanations for the data in the context of other published results.

  3. Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data

    NASA Astrophysics Data System (ADS)

    Li, Lan; Chen, Erxue; Li, Zengyuan

    2013-01-01

    This paper presents an unsupervised clustering algorithm based upon the expectation-maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class-conditional probabilities. The mixture model makes it possible to represent heterogeneous thematic classes that are poorly fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to form the initial partition. We then use the Wishart probability density function of the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used as the prior probability estimates of each class and as weights for all class-parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.

  4. Inverse Gaussian gamma distribution model for turbulence-induced fading in free-space optical communication.

    PubMed

    Cheng, Mingjian; Guo, Ya; Li, Jiangting; Zheng, Xiaotong; Guo, Lixin

    2018-04-20

    We introduce an alternative distribution to the gamma-gamma (GG) distribution, called the inverse Gaussian gamma (IGG) distribution, which can efficiently describe moderate-to-strong irradiance fluctuations. The proposed stochastic model is based on a modulation process between small- and large-scale irradiance fluctuations, which are modeled by gamma and inverse Gaussian distributions, respectively. The model parameters of the IGG distribution are directly related to atmospheric parameters. The accuracy of the fit of the IGG, log-normal (LN), and GG distributions to the experimental probability density functions in moderate-to-strong turbulence is compared, and results indicate that the newly proposed IGG model provides an excellent fit to the experimental data. When the receiving diameter is comparable with the atmospheric coherence radius, the proposed IGG model can reproduce the shape of the experimental data, whereas the GG and LN models fail to match the experimental data. The fundamental channel statistics of a free-space optical communication system are also investigated in an IGG-distributed turbulent atmosphere, and a closed-form expression for the outage probability of the system is derived with Meijer's G-function.
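
    The modulation construction suggests a direct sampling recipe: multiply an inverse-Gaussian large-scale factor by a gamma small-scale factor. A sketch with arbitrary unit-mean parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
lam, alpha = 3.0, 4.0                  # shape parameters (assumed values)
x = rng.wald(1.0, lam, n)              # inverse Gaussian factor, mean 1
y = rng.gamma(alpha, 1.0 / alpha, n)   # gamma factor, mean 1
irr = x * y                            # IGG-distributed irradiance, mean ~1

si = irr.var() / irr.mean() ** 2       # scintillation index of the fade model
print(f"scintillation index = {si:.3f}")
```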

  5. Active faulting in the central Betic Cordillera (Spain): Palaeoseismological constraint of the surface-rupturing history of the Baza Fault (Central Betic Cordillera, Iberian Peninsula)

    NASA Astrophysics Data System (ADS)

    Castro, J.; Martin-Rojas, I.; Medina-Cascales, I.; García-Tortosa, F. J.; Alfaro, P.; Insua-Arévalo, J. M.

    2018-06-01

    This paper on the Baza Fault provides the first palaeoseismic data from trenches in the central sector of the Betic Cordillera (S Spain), one of the most tectonically active areas of the Iberian Peninsula. With the palaeoseismological data we constructed time-stratigraphic OxCal models that yield probability density functions (PDFs) of individual palaeoseismic event timing. We analysed PDF overlap to quantitatively correlate the walls and site events into a single earthquake chronology. We assembled a surface-rupturing history of the Baza Fault for the last ca. 45,000 years, postulating six alternative surface-rupturing histories comprising 8-9 fault-wide earthquakes. We calculated fault-wide earthquake recurrence intervals using Monte Carlo simulation; this analysis yielded a 4750-5150 yr recurrence interval. Finally, we compared our results with those obtained from empirical relationships. Our results will provide a basis for future analyses of other active normal faults in this region. Moreover, they will be essential for improving earthquake-probability assessments in Spain, where palaeoseismic data are scarce.
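
    A sketch of the Monte Carlo recurrence-interval step, with stand-in normal event-age PDFs instead of the OxCal posteriors (the eight ages and their uncertainty below are invented):

```python
import numpy as np

rng = np.random.default_rng(8)
event_means = np.array([44, 38, 32, 27, 21, 15, 9, 4]) * 1000.0  # yr BP, invented
event_sd = 1500.0                                                 # yr, invented

intervals = []
for _ in range(20_000):
    ages = rng.normal(event_means, event_sd)   # one age draw per event
    ages = np.sort(ages)[::-1]                 # keep oldest-to-youngest order
    intervals.append(-np.diff(ages))           # positive inter-event times
intervals = np.concatenate(intervals)
print(f"mean recurrence = {intervals.mean():.0f} yr, "
      f"2.5-97.5% = {np.percentile(intervals, [2.5, 97.5])}")
```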

  6. Occult Breast Cancer: Scintimammography with High-Resolution Breast-specific Gamma Camera in Women at High Risk for Breast Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rachel F. Brem; Jocelyn A. Rapelyea; Gilat Zisman

    2005-08-01

    To prospectively evaluate a high-resolution breast-specific gamma camera for depicting occult breast cancer in women at high risk for breast cancer but with normal mammographic and physical examination findings. MATERIALS AND METHODS: Institutional Review Board approval and informed consent were obtained. The study was HIPAA compliant. Ninety-four high-risk women (age range, 36-78 years; mean, 55 years) with normal mammographic (Breast Imaging Reporting and Data System [BI-RADS] 1 or 2) and physical examination findings were evaluated with scintimammography. After injection with 25-30 mCi (925-1110 MBq) of technetium 99m sestamibi, patients were imaged with a high-resolution small-field-of-view breast-specific gamma camera in craniocaudal and mediolateral oblique projections. Scintimammograms were prospectively classified according to focal radiotracer uptake as normal (score of 1), with no focal or diffuse uptake; benign (score of 2), with minimal patchy uptake; probably benign (score of 3), with scattered patchy uptake; probably abnormal (score of 4), with mild focal radiotracer uptake; and abnormal (score of 5), with marked focal radiotracer uptake. Mammographic breast density was categorized according to BI-RADS criteria. Patients with normal scintimammograms (scores of 1, 2, or 3) were followed up for 1 year with an annual mammogram, physical examination, and repeat scintimammography. Patients with abnormal scintimammograms (scores of 4 or 5) underwent ultrasonography (US), and those with focal hypoechoic lesions underwent biopsy. If no lesion was found during US, patients were followed up with scintimammography. Specific pathologic findings were compared with scintimammographic findings. RESULTS: Of 94 women, 78 (83%) had normal scintimammograms (score of 1, 2, or 3) at initial examination and 16 (17%) had abnormal scintimammograms (score of 4 or 5). Fourteen (88%) of the 16 patients had either benign findings at biopsy or no focal abnormality at US; in two (12%) patients, invasive carcinoma was diagnosed at US-guided biopsy (9 mm each at pathologic examination). CONCLUSION: High-resolution breast-specific scintimammography can depict small (<1-cm), mammographically occult, nonpalpable lesions in women at increased risk for breast cancer not otherwise identified at mammography or physical examination.

  7. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments is reviewed, and some of its problems and the conditions under which it fails are discussed. In later sections, the functional form of the maximum entropy method of moments probability distribution is incorporated into Bayesian probability theory. It is shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments: one gets posterior probabilities for the Lagrange multipliers and, finally, one can put error bars on the resulting estimated density function.
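
    A compact sketch of the maximum entropy method of moments on a bounded grid: parameterize p(x) ∝ exp(−Σⱼ λⱼ xʲ) and find the multipliers by minimizing the convex dual, so that the fitted moments match the sample moments. Data and grid are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0, 1, 400)                  # support (assumed bounded)
dx = x[1] - x[0]
rng = np.random.default_rng(9)
data = rng.beta(2.0, 5.0, 5000)             # stand-in "intensity" sample
n_mom = 4
mu = np.array([(data ** j).mean() for j in range(1, n_mom + 1)])
powers = np.vstack([x ** j for j in range(1, n_mom + 1)])

def dual(lam):
    # Convex dual of the maxent problem: log Z(lam) + lam . mu
    logz = np.log(np.exp(-lam @ powers).sum() * dx)
    return logz + lam @ mu

res = minimize(dual, np.zeros(n_mom), method="BFGS")
p = np.exp(-res.x @ powers)
p /= p.sum() * dx                           # maxent density on the grid
check = [(p * x**j).sum() * dx for j in range(1, n_mom + 1)]
print(np.round(mu, 4), np.round(check, 4))  # fitted moments should match
```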

  8. Car accidents induced by a bottleneck

    NASA Astrophysics Data System (ADS)

    Marzoug, Rachid; Echab, Hicham; Ez-Zahraouy, Hamid

    2017-12-01

    Based on the Nagel-Schreckenberg (NS) model, we study the probability of car accidents (Pac) occurring at the entrance of the merging part of two roads (i.e. a junction). The simulation results show that the existence of non-cooperative drivers plays a chief role, increasing the risk of collisions at intermediate and high densities. Moreover, the impact of the speed limit in the bottleneck (Vb) on the probability Pac is also studied. This impact depends strongly on the density: increasing Vb enhances Pac at low densities, while it improves road safety at high densities. The phase diagram of the system is also constructed.

  9. Modeling the Effect of Density-Dependent Chemical Interference Upon Seed Germination

    PubMed Central

    Sinkkonen, Aki

    2005-01-01

    A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:19330163

  10. Modeling the Effect of Density-Dependent Chemical Interference upon Seed Germination

    PubMed Central

    Sinkkonen, Aki

    2006-01-01

    A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:18648596

  11. Probability density of tunneled carrier states near heterojunctions calculated numerically by the scattering method.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wampler, William R.; Myers, Samuel M.; Modine, Normand A.

    2017-09-01

    The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, for which computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.
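
    A textbook piecewise-constant variant of the scattering method, sketched below for an arbitrary two-layer barrier (ħ = m = 1): match ψ and ψ′ at each interface starting from the transmitted side, then evaluate the probability density. The paper's profiles and implementation are more general.

```python
import numpy as np

E = 0.5                                       # carrier energy
edges = np.array([0.0, 1.0, 2.0])             # interface positions (arbitrary)
V = np.array([0.0, 1.0, 0.2, 0.0])            # potential per region
k = np.sqrt(2 * (E - V).astype(complex))      # imaginary under the barrier

# Outgoing wave only in the last region; propagate amplitudes leftwards.
amps = [None] * len(V)
amps[-1] = (1.0 + 0j, 0.0 + 0j)               # (A, B) in the last region
for j in range(len(V) - 2, -1, -1):
    xj, kl, kr = edges[j], k[j], k[j + 1]
    Ar, Br = amps[j + 1]
    f = Ar * np.exp(1j * kr * xj) + Br * np.exp(-1j * kr * xj)             # psi(xj)
    g = 1j * kr * (Ar * np.exp(1j * kr * xj) - Br * np.exp(-1j * kr * xj)) # psi'(xj)
    Al = 0.5 * (f + g / (1j * kl)) * np.exp(-1j * kl * xj)
    Bl = 0.5 * (f - g / (1j * kl)) * np.exp(1j * kl * xj)
    amps[j] = (Al, Bl)
amps = np.array(amps)

xs = np.linspace(-1.0, 3.0, 800)
r = np.searchsorted(edges, xs)                # region index for every x
psi = amps[r, 0] * np.exp(1j * k[r] * xs) + amps[r, 1] * np.exp(-1j * k[r] * xs)
density = np.abs(psi) ** 2 / np.abs(amps[0, 0]) ** 2  # unit incident amplitude
```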

  12. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
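
    A sketch of the "ecological distance" ingredient: least-cost distances from a trap over a resistance raster via Dijkstra, plugged into a half-normal encounter model. The raster, cost rule, and parameters are invented.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

rng = np.random.default_rng(10)
n = 20                                            # n x n landscape
resistance = np.exp(rng.normal(0, 0.5, (n, n)))   # cost to cross each cell

# Sparse 4-neighbour graph; edge cost = mean resistance of the two cells
rows, cols, costs = [], [], []
for i in range(n):
    for j in range(n):
        a = i * n + j
        for di, dj in ((0, 1), (1, 0)):
            ii, jj = i + di, j + dj
            if ii < n and jj < n:
                b = ii * n + jj
                w = 0.5 * (resistance[i, j] + resistance[ii, jj])
                rows += [a, b]; cols += [b, a]; costs += [w, w]
graph = coo_matrix((costs, (rows, cols)), shape=(n * n, n * n))

trap = 5 * n + 5                         # trap at cell (5, 5)
d_lcp = dijkstra(graph, indices=trap)    # least-cost distance to every cell
p0, sigma = 0.3, 3.0                     # assumed encounter parameters
encounter_p = p0 * np.exp(-d_lcp**2 / (2 * sigma**2))   # per activity center
```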

  13. Spatial capture-recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture-recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture-recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture-recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  14. Quantitative evaluation of Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Duchesne, S.; Frisoni, G. B.

    2009-02-01

    We propose a single quantitative metric called the disease evaluation factor (DEF) and assess its efficiency at estimating disease burden in normal control subjects (CTRL) and probable Alzheimer's disease (AD) patients. The study group consisted of 75 patients with a diagnosis of probable AD and 75 age-matched normal CTRL without neurological or neuropsychological deficits. We calculated a reference eigenspace of MRI appearance from reference data, onto which our CTRL and probable AD subjects were projected. We then calculated the multi-dimensional hyperplane separating the CTRL and probable AD groups. The DEF was estimated via a multidimensional weighted distance between a given subject's eigencoordinates and the CTRL group mean, along the salient principal components forming the separating hyperplane. We used quantile plots and Kolmogorov-Smirnov and χ2 tests to compare the DEF values and test whether their distribution was normal. We used a linear discriminant test to separate CTRL from probable AD based on the DEF factor, and reached an accuracy of 87%. A quantitative biomarker in AD would act as an important surrogate marker of disease status and progression.

  15. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations are also computed from the two-cell joint PDF; they also compare very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids, as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.

  16. Approximating Multivariate Normal Orthant Probabilities. ONR Technical Report. [Biometric Lab Report No. 90-1.

    ERIC Educational Resources Information Center

    Gibbons, Robert D.; And Others

    The probability integral of the multivariate normal distribution (ND) has received considerable attention since W. F. Sheppard's (1900) and K. Pearson's (1901) seminal work on the bivariate ND. This paper evaluates the formula that represents the "n × n" correlation matrix of the "χᵢ" and the standardized multivariate…
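
    For small dimensions these orthant probabilities can be evaluated numerically and checked by simulation. A quick sketch with an arbitrary equicorrelated matrix:

```python
import numpy as np
from scipy.stats import multivariate_normal

rho, dim = 0.3, 4
cov = np.full((dim, dim), rho) + (1 - rho) * np.eye(dim)

# Orthant probability P(X_1 <= 0, ..., X_d <= 0) for X ~ N(0, cov)
p_cdf = multivariate_normal(mean=np.zeros(dim), cov=cov).cdf(np.zeros(dim))

rng = np.random.default_rng(11)
z = rng.multivariate_normal(np.zeros(dim), cov, size=500_000)
p_mc = (z <= 0).all(axis=1).mean()           # Monte Carlo cross-check
print(f"orthant probability: cdf={p_cdf:.4f}  monte-carlo={p_mc:.4f}")
```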

  17. CAN'T MISS--conquer any number task by making important statistics simple. Part 2. Probability, populations, samples, and normal distributions.

    PubMed

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 2, describes probability, populations, and samples. The uses of descriptive and inferential statistics are outlined. The article also discusses the properties and probability of normal distributions, including the standard normal distribution.

  18. The physics of bacterial decision making.

    PubMed

    Ben-Jacob, Eshel; Lu, Mingyang; Schultz, Daniel; Onuchic, Jose' N

    2014-01-01

    The choice that bacteria make between sporulation and competence when subjected to stress provides a prototypical example of collective cell fate determination that is stochastic on the individual cell level, yet predictable (deterministic) on the population level. This collective decision is performed by an elaborated gene network. Considerable effort has been devoted to simplify its complexity by taking physics approaches to untangle the basic functional modules that are integrated to form the complete network: (1) A stochastic switch whose transition probability is controlled by two order parameters: population density and internal/external stress. (2) An adaptable timer whose clock rate is normalized by the same two previous order parameters. (3) Sensing units which measure population density and external stress. (4) A communication module that exchanges information about the cells' internal stress levels. (5) An oscillating gate of the stochastic switch which is regulated by the timer. The unique circuit architecture of the gate allows special dynamics and noise management features. The gate opens a window of opportunity in time for competence transitions, during which the circuit generates oscillations that are translated into a chain of short intervals with high transition probability. In addition, the unique architecture of the gate allows filtering of external noise and robustness against variations in circuit parameters and internal noise. We illustrate that a physics approach can be very valuable in investigating the decision process and in identifying its general principles. We also show that both cell-cell variability and noise have important functional roles in the collectively controlled individual decisions.

  19. The physics of bacterial decision making

    PubMed Central

    Ben-Jacob, Eshel; Lu, Mingyang; Schultz, Daniel; Onuchic, Jose' N.

    2014-01-01

    The choice that bacteria make between sporulation and competence when subjected to stress provides a prototypical example of collective cell fate determination that is stochastic on the individual cell level, yet predictable (deterministic) on the population level. This collective decision is performed by an elaborated gene network. Considerable effort has been devoted to simplify its complexity by taking physics approaches to untangle the basic functional modules that are integrated to form the complete network: (1) A stochastic switch whose transition probability is controlled by two order parameters—population density and internal/external stress. (2) An adaptable timer whose clock rate is normalized by the same two previous order parameters. (3) Sensing units which measure population density and external stress. (4) A communication module that exchanges information about the cells' internal stress levels. (5) An oscillating gate of the stochastic switch which is regulated by the timer. The unique circuit architecture of the gate allows special dynamics and noise management features. The gate opens a window of opportunity in time for competence transitions, during which the circuit generates oscillations that are translated into a chain of short intervals with high transition probability. In addition, the unique architecture of the gate allows filtering of external noise and robustness against variations in circuit parameters and internal noise. We illustrate that a physics approach can be very valuable in investigating the decision process and in identifying its general principles. We also show that both cell-cell variability and noise have important functional roles in the collectively controlled individual decisions. PMID:25401094

  20. Quantification of mammographic masking risk with volumetric breast density maps: how to select women for supplemental screening

    NASA Astrophysics Data System (ADS)

    Holland, Katharina; van Gils, Carla H.; Wanders, Johanna OP; Mann, Ritse M.; Karssemeijer, Nico

    2016-03-01

    The sensitivity of mammograms is low for women with dense breasts, since cancers may be masked by dense tissue. In this study, we investigated methods to identify women with density patterns associated with a high masking risk. Risk measures are derived from volumetric breast density maps. We used the last negative screening mammograms of 93 women who subsequently presented with an interval cancer (IC), and, as controls, 930 randomly selected normal screening exams from women without cancer. Volumetric breast density maps were computed from the mammograms, which provide the dense tissue thickness at each location. These were used to compute absolute and percentage glandular tissue volume. We modeled the masking risk for each pixel location using the absolute and percentage dense tissue thickness and we investigated the effect of taking the cancer location probability distribution (CLPD) into account. For each method, we selected cases with the highest masking measure (by thresholding) and computed the fraction of ICs as a function of the fraction of controls selected. The latter can be interpreted as the negative supplemental screening rate (NSSR). Between the models, when incorporating CLPD, no significant differences were found. In general, the methods performed better when CLPD was included. At higher NSSRs some of the investigated masking measures had a significantly higher performance than volumetric breast density. These measures may therefore serve as an alternative to identify women with a high risk for a masked cancer.

  1. Word Recognition and Nonword Repetition in Children with Language Disorders: The Effects of Neighborhood Density, Lexical Frequency, and Phonotactic Probability

    ERIC Educational Resources Information Center

    Rispens, Judith; Baker, Anne; Duinmeijer, Iris

    2015-01-01

    Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…

  2. Reduced vertebral bone density in hypercalciuric nephrolithiasis

    NASA Technical Reports Server (NTRS)

    Pietschmann, F.; Breslau, N. A.; Pak, C. Y.

    1992-01-01

    Dual-energy x-ray absorptiometry and single-photon absorptiometry were used to determine bone density at the lumbar spine and radial shaft in 62 patients with absorptive hypercalciuria, 27 patients with fasting hypercalciuria, and 31 nonhypercalciuric stone formers. Lumbar bone density was significantly lower in patients with absorptive (-10%) as well as in those with fasting hypercalciuria (-12%), with 74 and 92% of patients displaying values below the normal mean, whereas only 48% of the nonhypercalciuric stone formers had bone density values below the normal mean. In contrast, radial bone density was similar in all three groups of renal stone formers investigated. The comparison of urinary chemistry in patients with absorptive hypercalciuria and low normal bone density compared to those with high normal bone density showed a significantly increased 24 h urinary calcium excretion on random diet and a trend toward a higher 24 h urinary uric acid excretion and a higher body mass index in patients with low normal bone density. Moreover, among the patients with absorptive hypercalciuria we found a statistically significant correlation between the spinal bone density and the 24 h sodium and sulfate excretion and the urinary pH. These results gave evidence for an additional role of environmental factors (sodium and animal proteins) in the pathogenesis of bone loss in absorptive hypercalciuria. In conclusion, our data suggest an osteopenia of trabecular-rich bone tissues in patients with fasting and absorptive hypercalciurias.

  3. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  4. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  5. Gaussianization for fast and accurate inference from cosmological data

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2016-06-01

    We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e. the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ-cold dark matter. Comparing to values computed with the Savage-Dickey density ratio, and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
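
    A one-dimensional illustration of the Gaussianization step using scipy's maximum-likelihood Box-Cox fit. The paper's method handles the joint multivariate case with generalized transformations, so this shows only the marginal idea on synthetic data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
sample = rng.gamma(2.0, 1.5, 5000)       # skewed stand-in "posterior" sample

transformed, lam = stats.boxcox(sample)  # MLE of the Box-Cox parameter
print(f"lambda = {lam:.3f}")
print("skewness before/after:",
      round(stats.skew(sample), 3), round(stats.skew(transformed), 3))

# The posterior can now be reported compactly as N(mu, var) in the
# transformed coordinates, then mapped back analytically when needed.
mu, var = transformed.mean(), transformed.var(ddof=1)
```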

  6. The risk of pedestrian collisions with peripheral visual field loss.

    PubMed

    Peli, Eli; Apfelbaum, Henry; Berson, Eliot L; Goldstein, Robert B

    2016-12-01

    Patients with peripheral field loss complain of colliding with other pedestrians in open-space environments such as shopping malls. Field expansion devices (e.g., prisms) can create artificial peripheral islands of vision. We investigated the visual angle at which these islands can be most effective for avoiding pedestrian collisions, by modeling the collision risk density as a function of bearing angle of pedestrians relative to the patient. Pedestrians at all possible locations were assumed to be moving in all directions with equal probability within a reasonable range of walking speeds. The risk density was found to be highly anisotropic. It peaked at ≈45° eccentricity. Increasing pedestrian speed range shifted the risk to higher eccentricities. The risk density is independent of time to collision. The model results were compared to the binocular residual peripheral island locations of 42 patients with forms of retinitis pigmentosa. The natural residual island prevalence also peaked nasally at about 45° but temporally at about 75°. This asymmetry resulted in a complementary coverage of the binocular field of view. Natural residual binocular island eccentricities seem well matched to the collision-risk density function, optimizing detection of other walking pedestrians (nasally) and of faster hazards (temporally). Field expansion prism devices will be most effective if they can create artificial peripheral islands at about 45° eccentricities. The collision risk and residual island findings raise interesting questions about normal visual development.

  7. Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words

    PubMed Central

    Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.

    2012-01-01

    Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774

  8. Fractional Brownian motion with a reflecting wall

    NASA Astrophysics Data System (ADS)

    Wada, Alexander H. O.; Vojta, Thomas

    2018-02-01

    Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior ∼ t^α, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α > 1, the particles accumulate at the barrier, leading to a divergence of the probability density. For subdiffusion α < 1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular for applications that are dominated by rare events.
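
    A Monte Carlo sketch of the setup: exact fractional Gaussian noise from a Cholesky factor (fine for short paths), with the wall imposed by reflecting the position at the origin. This is one common implementation choice, not necessarily the authors'.

```python
import numpy as np

def fgn_cholesky_factor(n, hurst):
    """Cholesky factor of the exact fractional-Gaussian-noise covariance."""
    lag = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
    cov = 0.5 * ((lag + 1) ** (2 * hurst) - 2 * lag ** (2 * hurst)
                 + np.abs(lag - 1) ** (2 * hurst))
    return np.linalg.cholesky(cov)

rng = np.random.default_rng(13)
alpha = 1.4                              # MSD exponent; Hurst H = alpha / 2
n_steps, walkers = 256, 2000
L = fgn_cholesky_factor(n_steps, alpha / 2)

final = np.empty(walkers)
for w in range(walkers):
    increments = L @ rng.normal(size=n_steps)   # correlated fGn increments
    x = 0.0
    for dx in increments:
        x = abs(x + dx)                  # reflecting wall at x = 0
    final[w] = x
pdf, edges = np.histogram(final, bins=50, density=True)
# For alpha > 1 the density piles up at the wall; for alpha < 1 it is depleted.
```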

  9. Numerical study of the influence of surface reaction probabilities on reactive species in an rf atmospheric pressure plasma containing humidity

    NASA Astrophysics Data System (ADS)

    Schröter, Sandra; Gibson, Andrew R.; Kushner, Mark J.; Gans, Timo; O'Connell, Deborah

    2018-01-01

    The quantification and control of reactive species (RS) in atmospheric pressure plasmas (APPs) is of great interest for their technological applications, in particular in biomedicine. Of key importance in simulating the densities of these species are fundamental data on their production and destruction. In particular, data concerning particle-surface reaction probabilities in APPs are scarce, with most of these probabilities measured in low-pressure systems. In this work, the role of surface reaction probabilities, γ, of reactive neutral species (H, O and OH) on neutral particle densities in a He-H2O radio-frequency micro APP jet (COST-μ APPJ) are investigated using a global model. It is found that the choice of γ, particularly for low-mass species having large diffusivities, such as H, can change computed species densities significantly. The importance of γ even at elevated pressures offers potential for tailoring the RS composition of atmospheric pressure microplasmas by choosing different wall materials or plasma geometries.
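
    The study relies on a full plasma-chemical global model; the toy steady-state balance below only illustrates why γ matters for a light, fast-diffusing species such as atomic H, by combining an assumed production rate with a Chantry-type wall-loss time. Geometry, gas temperature, diffusivity, and production rate are invented placeholders.

```python
import numpy as np

# Toy one-species balance: production = n / tau_wall, with a Chantry-type
# wall-loss time  tau_wall = Lambda^2 / D + 2 V (2 - gamma) / (A vbar gamma).
kB, amu, T = 1.380649e-23, 1.66053907e-27, 345.0

R, Lgap = 0.5e-3, 30e-3         # channel half-gap and length, m (assumed)
V = (2*R) * (2*R) * Lgap        # square-cross-section jet volume (assumed)
A = 4 * (2*R) * Lgap            # wall area (assumed)
Lambda = 2*R / np.pi            # effective diffusion length (slab estimate)
D = 2.0e-4                      # H diffusivity in He at 1 atm, m^2/s (assumed)
vbar = np.sqrt(8 * kB * T / (np.pi * 1.008 * amu))   # mean thermal speed of H
S = 1e22                        # H production rate, m^-3 s^-1 (assumed)

for gamma in (1e-3, 1e-2, 1e-1, 1.0):
    tau = Lambda**2 / D + 2 * V * (2 - gamma) / (A * vbar * gamma)
    print(f"gamma = {gamma:5.0e}  ->  n_H ~ {S * tau:.2e} m^-3")
```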

  10. Effects of heterogeneous traffic with speed limit zone on the car accidents

    NASA Astrophysics Data System (ADS)

    Marzoug, R.; Lakouari, N.; Bentaleb, K.; Ez-Zahraouy, H.; Benyoussef, A.

    2016-06-01

    Using the extended Nagel-Schreckenberg (NS) model, we numerically study the impact of heterogeneous traffic with a speed limit zone (SLZ) on the probability of occurrence of car accidents (Pac). An SLZ in heterogeneous traffic has an important effect, particularly in the mixed-velocity case. In the deterministic case, the SLZ leads to the appearance of car accidents even at low densities; in this region Pac increases with the fraction of fast vehicles (Ff). In the nondeterministic case, the SLZ reduces the effect of the braking probability Pb at low densities. Furthermore, the impact of multiple SLZs on the probability Pac is also studied. In contrast with the homogeneous case [X. Li, H. Kuang, Y. Fan and G. Zhang, Int. J. Mod. Phys. C 25 (2014) 1450036], it is found that at low densities the probability Pac without an SLZ (n = 0) is lower than Pac with multiple SLZs (n > 0). However, the existence of multiple SLZs in the road decreases the risk of collision in the congested phase.
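
    For orientation, a bare-bones sketch of a Nagel-Schreckenberg ring with a single speed-limit zone follows. The tally of "dangerous situations" (a leader within v_max cells that was moving and stops abruptly) is a Boccara-style accident criterion standing in for the paper's Pac; all parameter values, the single vehicle class, and the SLZ extent are assumptions, not those of the paper.

```python
import numpy as np

# Nagel-Schreckenberg ring with one speed-limit zone (SLZ), counting
# "dangerous situations" as a proxy for the accident probability Pac.
rng = np.random.default_rng(2)

L_road, density, p_brake = 1000, 0.15, 0.2
v_max, v_slz = 5, 2
slz_lo, slz_hi = 400, 500               # cells forming the speed-limit zone

n_cars = int(density * L_road)
pos = np.sort(rng.choice(L_road, n_cars, replace=False))
vel = rng.integers(0, v_max + 1, n_cars)

dangerous, steps = 0, 2000
for step in range(steps):
    gaps = (np.roll(pos, -1) - pos - 1) % L_road      # cells to the leader
    vmax_local = np.where((pos >= slz_lo) & (pos < slz_hi), v_slz, v_max)
    v_old_leader = np.roll(vel, -1)                   # leader speed at time t
    vel = np.minimum(vel + 1, vmax_local)             # 1. accelerate
    vel = np.minimum(vel, gaps)                       # 2. avoid collision
    brake = (rng.random(n_cars) < p_brake) & (vel > 0)
    vel = np.where(brake, vel - 1, vel)               # 3. random braking
    pos = (pos + vel) % L_road                        # 4. move
    # Dangerous situation: leader was moving, has now stopped,
    # and was within v_max cells of the follower.
    v_new_leader = np.roll(vel, -1)
    dangerous += np.sum((gaps <= v_max) & (v_old_leader > 0)
                        & (v_new_leader == 0))

print("dangerous situations per car per step:", dangerous / (n_cars * steps))
```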

  11. Maximum likelihood density modification by pattern recognition of structural motifs

    DOEpatents

    Terwilliger, Thomas C.

    2004-04-13

    An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log likelihood of a set of structure factors {F_h} using a local log-likelihood function LL(x) = [p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x; and p(ρ(x)|H) is the probability distribution for the electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.

  12. Method for removing atomic-model bias in macromolecular crystallography

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM

    2006-08-01

    Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.

  13. An empirical probability model of detecting species at low densities.

    PubMed

    Delaney, David G; Leung, Brian

    2010-06-01

    False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
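
    A sketch of the detection-curve idea: a logistic regression relating sampling effort and target density to the probability of detection, fitted here to simulated stand-in data rather than the New England/New York field data. Variable names and parameter values are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Build a detection curve from simulated survey outcomes.
rng = np.random.default_rng(3)

n = 500
effort = rng.uniform(1, 60, n)           # search time, minutes (assumed)
density = 10**rng.uniform(-2, 1, n)      # targets per m^2 (assumed)

# Simulated truth: the detection hazard grows with effort and density.
p_true = 1 - np.exp(-0.02 * effort * density * 30)
detected = rng.random(n) < p_true

X = np.column_stack([effort, np.log10(density)])
model = LogisticRegression().fit(X, detected)

# Detection curve: probability of detection vs. density at fixed effort.
grid = np.column_stack([np.full(50, 30.0), np.linspace(-2, 1, 50)])
p_detect = model.predict_proba(grid)[:, 1]
print("P(detect) at 30 min effort, density 0.01 / 0.3 / 10 per m^2:",
      p_detect[[0, 24, -1]].round(2))
```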

  14. Statistics of strain rates and surface density function in a flame-resolved high-fidelity simulation of a turbulent premixed bluff body burner

    NASA Astrophysics Data System (ADS)

    Sandeep, Anurag; Proch, Fabian; Kempf, Andreas M.; Chakraborty, Nilanjan

    2018-06-01

    The statistical behavior of the surface density function (SDF, the magnitude of the reaction progress variable gradient) and the strain rates, which govern the evolution of the SDF, have been analyzed using a three-dimensional flame-resolved simulation database of a turbulent lean premixed methane-air flame in a bluff-body configuration. It has been found that the turbulence intensity increases with the distance from the burner, changing the flame curvature distribution and increasing the probability of the negative curvature in the downstream direction. The curvature dependences of the dilatation rate ∇·u and the displacement speed Sd give rise to variations of these quantities in the axial direction. These variations affect the nature of the alignment between the progress variable gradient and the local principal strain rates, which in turn affects the mean flame normal strain rate, which assumes positive values close to the burner but increasingly becomes negative as the effect of turbulence increases with the axial distance from the burner exit. The axial distance dependences of the curvature and displacement speed also induce a considerable variation in the mean value of the curvature stretch. The axial distance dependences of the dilatation rate and flame normal strain rate govern the behavior of the flame tangential strain rate, and its mean value increases in the downstream direction. The current analysis indicates that the statistical behaviors of different strain rates and displacement speed and their curvature dependences need to be included in the modeling of flame surface density and scalar dissipation rate in order to accurately capture their local behaviors.

  15. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial and correct-classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
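
    A quick numerical illustration of the logit-normal ingredient: unit-level classification probabilities p_i = expit(mu + sigma·z_i) have a mean that differs from expit(mu), one intuition for why assuming a single shared probability biases the estimates. The values of mu and sigma are arbitrary.

```python
import numpy as np
from scipy.special import expit, logit

# Heterogeneous correct-classification probabilities on the logit scale.
rng = np.random.default_rng(4)

mu, sigma = logit(0.8), 0.9
p = expit(mu + sigma * rng.standard_normal(100_000))
print("expit(mu)        :", expit(mu).round(3))
print("mean of p_i      :", p.mean().round(3))     # differs from expit(mu)
print("2.5/97.5 pctiles :", np.percentile(p, [2.5, 97.5]).round(3))
```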

  16. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, J.; Gardner, B.; Lucherini, M.

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km² for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km² in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

  17. Approved Methods and Algorithms for DoD Risk-Based Explosives Siting

    DTIC Science & Technology

    2007-02-02

    Definitions excerpt: Pgha, probability of a person being in the glass hazard area; Phit, probability of hit; Phit(f), probability of hit for fatality; Phit(maj), probability of hit for major injury; Phit(min), probability of hit for minor injury; Pi, debris probability densities at the ES; PMaj(pair), individual ... combined high-angle and combined low-angle tables. A unique probability of hit is calculated for the three consequences: fatality Phit(f), major injury ...

  18. Electrofishing capture probability of smallmouth bass in streams

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.

    2007-01-01

    Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © American Fisheries Society 2007.
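
    The adjustment described above amounts to dividing the catch by the modeled capture probability; a short sketch with invented numbers:

```python
import numpy as np

# Abundance estimate: catch divided by predicted cumulative capture
# probability. All numbers are illustrative assumptions.
catch = np.array([42, 18, 65])             # bass counted per site
p_capture = np.array([0.35, 0.21, 0.40])   # modeled capture probabilities

N_hat = catch / p_capture
print("estimated abundance per site:", N_hat.round(0))
```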

  19. A tool for the estimation of the distribution of landslide area in R

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.

    2012-04-01

    We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) gave very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
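
    A sketch of two of the estimation routes the tool implements, written in Python rather than R for consistency with the other examples here: kernel density estimation on log area, and a maximum likelihood fit of the Inverse Gamma model. The synthetic "inventory" stands in for a real landslide dataset.

```python
import numpy as np
from scipy import stats

# KDE and Inverse Gamma MLE for landslide area (synthetic data).
rng = np.random.default_rng(5)

area = stats.invgamma.rvs(a=1.4, scale=1e3, size=400, random_state=rng)  # m^2

# Non-parametric: Gaussian KDE on log10(area), since landslide area
# densities are usually examined on logarithmic axes.
log_area = np.log10(area)
kde = stats.gaussian_kde(log_area)
grid = np.linspace(log_area.min(), log_area.max(), 200)
pdf_log = kde(grid)

# Parametric: MLE of the Inverse Gamma model (location pinned at zero).
a_hat, loc_hat, scale_hat = stats.invgamma.fit(area, floc=0)
print(f"Inverse Gamma MLE: a = {a_hat:.2f}, scale = {scale_hat:.1f}")
print("KDE mode at log10(area) =", grid[np.argmax(pdf_log)].round(2))
```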

  1. Cosmological Constraints from Galaxy Cluster Velocity Statistics

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Suman; Kosowsky, Arthur

    2007-04-01

    Future microwave sky surveys will have the sensitivity to detect the kinematic Sunyaev-Zeldovich signal from moving galaxy clusters, thus providing a direct measurement of their line-of-sight peculiar velocity. We show that cluster peculiar velocity statistics applied to foreseeable surveys will put significant constraints on fundamental cosmological parameters. We consider three statistical quantities that can be constructed from a cluster peculiar velocity catalog: the probability density function, the mean pairwise streaming velocity, and the pairwise velocity dispersion. These quantities are applied to an envisioned data set that measures line-of-sight cluster velocities with normal errors of 100 km s-1 for all clusters with masses larger than 1014 Msolar over a sky area of up to 5000 deg2. A simple Fisher matrix analysis of this survey shows that the normalization of the matter power spectrum and the dark energy equation of state can be constrained to better than 10%, and that the Hubble constant and the primordial power spectrum index can be constrained to a few percent, independent of any other cosmological observations. We also find that the current constraint on the power spectrum normalization can be improved by more than a factor of 2 using data from a 400 deg2 survey and WMAP third-year priors. We also show how the constraints on cosmological parameters change if cluster velocities are measured with normal errors of 300 km s-1.

  2. A case cluster of variant Creutzfeldt-Jakob disease linked to the Kingdom of Saudi Arabia.

    PubMed

    Coulthart, Michael B; Geschwind, Michael D; Qureshi, Shireen; Phielipp, Nicolas; Demarsh, Alex; Abrams, Joseph Y; Belay, Ermias; Gambetti, Pierluigi; Jansen, Gerard H; Lang, Anthony E; Schonberger, Lawrence B

    2016-10-01

    As of mid-2016, 231 cases of variant Creutzfeldt-Jakob disease-the human form of a prion disease of cattle, bovine spongiform encephalopathy-have been reported from 12 countries. With few exceptions, the affected individuals had histories of extended residence in the UK or other Western European countries during the period (1980-96) of maximum global risk for human exposure to bovine spongiform encephalopathy. However, the possibility remains that other geographic foci of human infection exist, identification of which may help to foreshadow the future of the epidemic. We report results of a quantitative analysis of country-specific relative risks of infection for three individuals diagnosed with variant Creutzfeldt-Jakob disease in the USA and Canada. All were born and raised in Saudi Arabia, but had histories of residence and travel in other countries. To calculate country-specific relative probabilities of infection, we aligned each patient's life history with published estimates of probability distributions of incubation period and age at infection parameters from a UK cohort of 171 variant Creutzfeldt-Jakob disease cases. The distributions were then partitioned into probability density fractions according to time intervals of the patient's residence and travel history, and the density fractions were combined by country. This calculation was performed for incubation period alone, age at infection alone, and jointly for incubation and age at infection. Country-specific fractions were normalized either to the total density between the individual's dates of birth and symptom onset ('lifetime'), or to that between 1980 and 1996, for a total of six combinations of parameter and interval. The country-specific relative probability of infection for Saudi Arabia clearly ranked highest under each of the six combinations of parameter × interval for Patients 1 and 2, with values ranging from 0.572 to 0.998, respectively, for Patient 2 (age at infection × lifetime) and Patient 1 (joint incubation and age at infection × 1980-96). For Patient 3, relative probabilities for Saudi Arabia were not as distinct from those for other countries using the lifetime interval: 0.394, 0.360 and 0.378, respectively, for incubation period, age at infection and jointly for incubation and age at infection. However, for this patient Saudi Arabia clearly ranked highest within the 1980-96 period: 0.859, 0.871 and 0.865, respectively, for incubation period, age at infection and jointly for incubation and age at infection. These findings support the hypothesis that human infection with bovine spongiform encephalopathy occurred in Saudi Arabia. © Her Majesty the Queen in Right of Canada 2016. Reproduced with the permission of the Minister of Public Health.
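
    A sketch of the partitioning calculation: integrate an assumed age-at-infection density over each residence interval clipped to the exposure window, sum by country, and normalize. The gamma density and the residence history below are invented placeholders; the study used distributions estimated from the UK cohort of 171 cases.

```python
import numpy as np
from scipy import stats

# Partition an age-at-infection density by residence history (sketch).
age_pdf = stats.gamma(a=6.0, scale=2.8)       # assumed age-at-infection pdf

birth_year = 1970
# (country, start_year, end_year) residence/travel history (hypothetical)
history = [("Saudi Arabia", 1970, 1988),
           ("UK",           1988, 1990),
           ("Saudi Arabia", 1990, 1994),
           ("USA",          1994, 2005)]

window = (1980, 1996)                          # period of maximum BSE risk

fractions = {}
for country, start, end in history:
    lo = max(start, window[0]) - birth_year    # convert years to ages
    hi = min(end, window[1]) - birth_year
    if hi > lo:
        mass = age_pdf.cdf(hi) - age_pdf.cdf(lo)
        fractions[country] = fractions.get(country, 0.0) + mass

total = sum(fractions.values())
for country, mass in fractions.items():
    print(f"{country:12s} relative probability {mass / total:.3f}")
```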

  3. Integrating resource selection information with spatial capture--recapture

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.

    2013-01-01

    4. Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.

  4. Effect of Phonotactic Probability and Neighborhood Density on Word-Learning Configuration by Preschoolers with Typical Development and Specific Language Impairment

    ERIC Educational Resources Information Center

    Gray, Shelley; Pittman, Andrea; Weinhold, Juliet

    2014-01-01

    Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…

  5. The Effect of Phonotactic Probability and Neighbourhood Density on Pseudoword Learning in 6- and 7-Year-Old Children

    ERIC Educational Resources Information Center

    van der Kleij, Sanne W.; Rispens, Judith E.; Scheper, Annette R.

    2016-01-01

    The aim of this study was to examine the influence of phonotactic probability (PP) and neighbourhood density (ND) on pseudoword learning in 17 Dutch-speaking typically developing children (mean age 7;2). They were familiarized with 16 one-syllable pseudowords varying in PP (high vs low) and ND (high vs low) via a storytelling procedure. The…

  6. Autonomous learning derived from experimental modeling of physical laws.

    PubMed

    Grabec, Igor

    2013-05-01

    This article deals with experimental description of physical laws by probability density function of measured data. The Gaussian mixture model specified by representative data and related probabilities is utilized for this purpose. The information cost function of the model is described in terms of information entropy by the sum of the estimation error and redundancy. A new method is proposed for searching the minimum of the cost function. The number of the resulting prototype data depends on the accuracy of measurement. Their adaptation resembles a self-organized, highly non-linear cooperation between neurons in an artificial NN. A prototype datum corresponds to the memorized content, while the related probability corresponds to the excitability of the neuron. The method does not include any free parameters except objectively determined accuracy of the measurement system and is therefore convenient for autonomous execution. Since representative data are generally less numerous than the measured ones, the method is applicable for a rather general and objective compression of overwhelming experimental data in automatic data-acquisition systems. Such compression is demonstrated on analytically determined random noise and measured traffic flow data. The flow over a day is described by a vector of 24 components. The set of 365 vectors measured over one year is compressed by autonomous learning to just 4 representative vectors and related probabilities. These vectors represent the flow in normal working days and weekends or holidays, while the related probabilities correspond to relative frequencies of these days. This example reveals that autonomous learning yields a new basis for interpretation of representative data and the optimal model structure. Copyright © 2012 Elsevier Ltd. All rights reserved.
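
    A sketch of the compression idea using an off-the-shelf Gaussian mixture fit. The paper selects the number of prototypes from the measurement accuracy via an information cost; here the component count is simply fixed, and the daily flow profiles are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Compress a year of daily 24-hour flow vectors into a few prototype
# vectors with associated probabilities (sketch with synthetic data).
rng = np.random.default_rng(6)

hours = np.arange(24)
workday = (50 + 40 * np.exp(-0.5 * ((hours - 8) / 2)**2)
              + 35 * np.exp(-0.5 * ((hours - 17) / 2)**2))
weekend = 30 + 20 * np.exp(-0.5 * ((hours - 14) / 4)**2)

days = [workday if rng.random() < 5 / 7 else weekend for _ in range(365)]
X = np.array(days) + rng.normal(0, 4, (365, 24))    # noisy daily profiles

gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(X)
# gmm.means_ holds the representative 24-hour flow vectors; the weights
# play the role of the related probabilities (relative day frequencies).
print("prototype probabilities:", gmm.weights_.round(2))
```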

  7. Properties of the probability density function of the non-central chi-squared distribution

    NASA Astrophysics Data System (ADS)

    András, Szilárd; Baricz, Árpád

    2008-10-01

    In this paper we consider the probability density function (pdf) of a non-central χ² distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum, and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
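
    A numerical spot-check of the log-concavity claim with SciPy's non-central chi-squared implementation: for df ≥ 2 the second differences of log f should be non-positive up to rounding. The df, non-centrality, and grid values are arbitrary test choices.

```python
import numpy as np
from scipy import stats

# Log-concavity check: second differences of log pdf should be <= 0.
x = np.linspace(0.05, 40, 2000)
for df, nc in [(2, 3.0), (5, 3.0), (9, 10.0)]:
    logf = stats.ncx2.logpdf(x, df, nc)
    second_diff = np.diff(logf, 2)
    print(f"df={df}, nc={nc}: max second difference = {second_diff.max():.2e}")
```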

  8. Probability of regenerating a normal limb after bite injury in the Mexican axolotl (Ambystoma mexicanum)

    PubMed Central

    Thompson, Sierra; Muzinic, Laura; Muzinic, Christopher; Niemiller, Matthew L.

    2014-01-01

    Abstract Multiple factors are thought to cause limb abnormalities in amphibian populations by altering processes of limb development and regeneration. We examined adult and juvenile axolotls (Ambystoma mexicanum) in the Ambystoma Genetic Stock Center (AGSC) for limb and digit abnormalities to investigate the probability of normal regeneration after bite injury. We observed that 80% of larval salamanders show evidence of bite injury at the time of transition from group housing to solitary housing. Among 717 adult axolotls that were surveyed, which included solitary‐housed males and group‐housed females, approximately half presented abnormalities, including examples of extra or missing digits and limbs, fused digits, and digits growing from atypical anatomical positions. Bite injury probably explains these limb defects, and not abnormal development, because limbs with normal anatomy regenerated after performing rostral amputations. We infer that only 43% of AGSC larvae will present four anatomically normal looking adult limbs after incurring a bite injury. Our results show regeneration of normal limb anatomy to be less than perfect after bite injury. PMID:25745564

  9. Assessing hypotheses about nesting site occupancy dynamics

    USGS Publications Warehouse

    Bled, Florent; Royle, J. Andrew; Cam, Emmanuelle

    2011-01-01

    Hypotheses about habitat selection developed in the evolutionary ecology framework assume that individuals, under some conditions, select breeding habitat based on expected fitness in different habitats. The relationship between habitat quality and fitness may be reflected by the breeding success of individuals, which may in turn be used to assess habitat quality. Habitat quality may also be assessed via local density: if high-quality sites are preferentially used, high density may reflect high-quality habitat. Here we assessed whether site occupancy dynamics vary with site surrogates for habitat quality. We modeled nest site use probability in a seabird subcolony (the Black-legged Kittiwake, Rissa tridactyla) over a 20-year period. We estimated site persistence (an occupied site remains occupied from time t to t + 1) and colonization through two subprocesses: first colonization (site creation at the timescale of the study) and recolonization (a site is colonized again after being deserted). Our model explicitly incorporated site-specific and neighboring breeding success and conspecific density in the neighborhood. Our results provided evidence that reproductively "successful" sites have a higher persistence probability than "unsuccessful" ones. Analyses of site fidelity in marked birds and of survival probability showed that high site persistence predominantly reflects site fidelity, not immediate colonization by new owners after emigration or death of previous owners. There is a negative quadratic relationship between local density and persistence probability. First colonization probability decreases with density, whereas recolonization probability is constant. This highlights the importance of distinguishing initial colonization and recolonization to understand site occupancy. All dynamics varied positively with neighboring breeding success. We found evidence of a positive interaction between site-specific and neighboring breeding success. We addressed local population dynamics using a site occupancy approach integrating hypotheses developed in behavioral ecology to account for individual decisions. This allows development of models of population and metapopulation dynamics that explicitly incorporate ecological and evolutionary processes.

  10. Estimating the influence of population density and dispersal behavior on the ability to detect and monitor Agrilus planipennis (Coleoptera: Buprestidae) populations.

    PubMed

    Mercader, R J; Siegert, N W; McCullough, D G

    2012-02-01

    Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.

  11. On Schrödinger's bridge problem

    NASA Astrophysics Data System (ADS)

    Friedland, S.

    2017-11-01

    In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix using Brouwer's fixed point theorem. This result proves the Georgiou-Pavon conjecture for two positive definite density matrices, made in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.

  12. Density probability distribution functions of diffuse gas in the Milky Way

    NASA Astrophysics Data System (ADS)

    Berkhuijsen, E. M.; Fletcher, A.

    2008-10-01

    In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| >= 5° are considered separately. The PDF of the average density at high |b| is twice as wide as that at low |b|. The width of the PDF of the DIG is about 30 per cent smaller than that of the warm HI at the same latitudes. The results reported here provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.

  13. Assessing future vent opening locations at the Somma-Vesuvio volcanic complex: 2. Probability maps of the caldera for a future Plinian/sub-Plinian event with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.

    2017-06-01

    In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
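
    A sketch of steps (i) and (ii): a Gaussian-kernel spatial density per data set of past vent locations, combined linearly with elicited weights. Coordinates and weights are invented placeholders, not the Somma-Vesuvio data.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Weighted linear combination of Gaussian kernel density maps (sketch).
rng = np.random.default_rng(7)

vents_a = rng.normal([0.0, 0.0], 0.6, (30, 2)).T   # data set A (hypothetical)
vents_b = rng.normal([1.0, 0.5], 0.9, (15, 2)).T   # data set B (hypothetical)
weights = {"a": 0.7, "b": 0.3}                     # elicited weights (assumed)

kde_a, kde_b = gaussian_kde(vents_a), gaussian_kde(vents_b)

# Evaluate the weighted combination on a grid covering the area of interest.
xx, yy = np.meshgrid(np.linspace(-3, 4, 120), np.linspace(-3, 4, 120))
pts = np.vstack([xx.ravel(), yy.ravel()])
pmap = weights["a"] * kde_a(pts) + weights["b"] * kde_b(pts)
pmap /= pmap.sum()                                 # normalize to probabilities
print("peak cell probability:", pmap.max().round(5))
```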

  14. Connections between density, wall-normal velocity, and coherent structure in a heated turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Saxton-Fox, Theresa; Gordeyev, Stanislav; Smith, Adam; McKeon, Beverley

    2015-11-01

    Strong density gradients associated with turbulent structure were measured in a mildly heated turbulent boundary layer using an optical sensor (Malley probe). The Malley probe measured index of refraction gradients integrated along the wall-normal direction, which, due to the proportionality of index of refraction and density in air, was equivalently an integral measure of density gradients. The integral output was observed to be dominated by strong, localized density gradients. Conditional averaging and Pearson correlations identified connections between the streamwise gradient of density and the streamwise gradient of wall-normal velocity. The trends were suggestive of a process of pick-up and transport of heat away from the wall. Additionally, by considering the density field as a passive marker of structure, the role of the wall-normal velocity in shaping turbulent structure in a sheared flow was examined. Connections were developed between sharp gradients in the density and flow fields and strong vertical velocity fluctuations. This research is made possible by the Department of Defense through the National Defense & Engineering Graduate Fellowship (NDSEG) Program and by the Air Force Office of Scientific Research Grant # FA9550-12-1-0060.

  15. Fractional Brownian motion with a reflecting wall.

    PubMed

    Wada, Alexander H O; Vojta, Thomas

    2018-02-01

    Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior ⟨x²⟩ ∼ t^α, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α > 1, the particles accumulate at the barrier leading to a divergence of the probability density. For subdiffusion α < 1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.

  16. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
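
    A sketch of a likelihood-based discriminator in the spirit described: compare the fit of a Rician amplitude model against a gamma intensity model at one pixel. The simulated intensities and all parameters are assumptions, and the paper's modified-Rician and gamma-based forms differ in detail from SciPy's stock distributions.

```python
import numpy as np
from scipy import stats

# Likelihood comparison: Rician amplitude model vs. gamma intensity model.
rng = np.random.default_rng(8)

# Simulated intensity time series at one pixel (stand-in for AO data).
intensity = stats.gamma.rvs(a=4.0, scale=0.25, size=2000, random_state=rng)
amp = np.sqrt(intensity)

# Fit both candidate models by maximum likelihood.
b, loc_r, scale_r = stats.rice.fit(amp, floc=0)
a, loc_g, scale_g = stats.gamma.fit(intensity, floc=0)

# The Rice law is fit on amplitude; transform its density to intensity
# space via f_I(i) = f_A(sqrt(i)) / (2 sqrt(i)).
ll_rice = np.sum(stats.rice.logpdf(amp, b, 0, scale_r) - np.log(2 * amp))
ll_gamma = np.sum(stats.gamma.logpdf(intensity, a, 0, scale_g))
print(f"log L (Rician model): {ll_rice:.1f}")
print(f"log L (gamma model) : {ll_gamma:.1f}  -> prefer the larger")
```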

  17. Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-04-01

    Although the surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdfs. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). The posterior mean and covariance can also be efficiently derived. I show that the Maximum Posterior (MAP) estimate can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single-truncated case or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double-truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computer power is largely reduced. Second, unlike Bayesian MCMC-based approaches, the marginal pdfs, mean, variance and covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the Maximum Posterior (MAP) is extremely fast.
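
    A sketch of the single-truncation MAP computation: with a Gaussian prior, Gaussian data errors, and positivity, the MAP solves a non-negative least-squares problem on the whitened, stacked system. The Green's functions, data, and covariances below are a toy example, not a real fault geometry.

```python
import numpy as np
from scipy.optimize import nnls

# MAP with positivity: minimize the whitened misfit + prior penalty
# subject to m >= 0, via non-negative least squares on a stacked system.
rng = np.random.default_rng(9)

n_obs, n_slip = 40, 12
G = np.abs(rng.normal(size=(n_obs, n_slip)))       # toy Green's functions
m_true = np.maximum(rng.normal(1.0, 1.0, n_slip), 0.0)
d = G @ m_true + rng.normal(0, 0.05, n_obs)

Cd_isqrt = np.eye(n_obs) / 0.05                    # Cd^{-1/2}, iid errors
m0 = np.zeros(n_slip)                              # prior mean
Cm_isqrt = np.eye(n_slip) / 1.0                    # Cm^{-1/2}, iid prior

A = np.vstack([Cd_isqrt @ G, Cm_isqrt])
b = np.concatenate([Cd_isqrt @ d, Cm_isqrt @ m0])
m_map, _ = nnls(A, b)                              # MAP with m >= 0
print("MAP slip estimate:", m_map.round(2))
```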

  18. Scaling in the distribution of intertrade durations of Chinese stocks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Chen, Wei; Zhou, Wei-Xing

    2008-10-01

    The distribution of intertrade durations, defined as the waiting times between two consecutive transactions, is investigated based upon the limit order book data of 23 liquid Chinese stocks listed on the Shenzhen Stock Exchange in the whole year 2003. A scaling pattern is observed in the distributions of intertrade durations, where the empirical density functions of the normalized intertrade durations of all 23 stocks collapse onto a single curve. The scaling pattern is also observed in the intertrade duration distributions for filled and partially filled trades and in the conditional distributions. The ensemble distributions for all stocks are modeled by the Weibull and the Tsallis q-exponential distributions. Maximum likelihood estimation shows that the Weibull distribution outperforms the q-exponential for not-too-large intertrade durations which account for more than 98.5% of the data. Alternatively, nonlinear least-squares estimation selects the q-exponential as a better model, in which the optimization is conducted on the distance between empirical and theoretical values of the logarithmic probability densities. The distribution of intertrade durations is Weibull followed by a power-law tail with an asymptotic tail exponent close to 3.
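
    A sketch of the maximum likelihood comparison on normalized durations, with a plain exponential standing in as the alternative model instead of the q-exponential (which has no stock SciPy implementation). The sample is synthetic, not the Shenzhen limit order book data.

```python
import numpy as np
from scipy import stats

# Weibull MLE on normalized intertrade durations, compared against an
# exponential baseline by total log-likelihood.
rng = np.random.default_rng(10)

tau = stats.weibull_min.rvs(c=0.7, scale=1.0, size=5000, random_state=rng)
tau /= tau.mean()                                   # normalized durations

c_hat, loc_hat, scale_hat = stats.weibull_min.fit(tau, floc=0)
ll_weibull = stats.weibull_min.logpdf(tau, c_hat, 0, scale_hat).sum()
ll_expon = stats.expon.logpdf(tau, 0, tau.mean()).sum()  # MLE scale = mean
print(f"Weibull shape c = {c_hat:.2f}")
print(f"log L: Weibull {ll_weibull:.1f} vs exponential {ll_expon:.1f}")
```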

  19. Constraints on Average Radial Anisotropy in the Lower Mantle

    NASA Astrophysics Data System (ADS)

    Trampert, J.; De Wit, R. W. L.; Kaeufl, P.; Valentine, A. P.

    2014-12-01

    Quantifying uncertainties in seismological models is challenging, yet ideally quality assessment is an integral part of the inverse method. We invert centre frequencies of spheroidal and toroidal modes for three parameters of average radial anisotropy, density, and P- and S-wave velocities in the lower mantle. We adopt a Bayesian machine learning approach to extract the information on the earth model that is available in the normal mode data. The method is flexible and allows us to infer probability density functions (pdfs), which provide a quantitative description of our knowledge of the individual earth model parameters. The parameters describing shear- and P-wave anisotropy show little deviation from isotropy, but the intermediate parameter η carries robust information on a negative anisotropy of ~1% below 1900 km depth. The mass density in the deep mantle (below 1900 km) shows clear positive deviations from existing models. Other parameters (P- and shear-wave velocities) are close to PREM. Our results require that the average mantle is about 150 K colder than commonly assumed adiabats and consists of a mixture of about 60% perovskite and 40% ferropericlase containing 10-15% iron. The anisotropy favours a specific orientation of the two minerals. This observation has important consequences for the nature of mantle flow.

  20. Spiroplasma infection causes either early or late male killing in Drosophila, depending on maternal host age

    NASA Astrophysics Data System (ADS)

    Kageyama, Daisuke; Anbutsu, Hisashi; Shimada, Masakazu; Fukatsu, Takema

    2007-04-01

    Symbiont-induced male-killing phenotypes have been found in a variety of insects. Conventionally, these phenotypes have been divided into two categories according to the timing of action: early male killing at embryonic stages and late male killing at late larval stages. In Drosophila species, endosymbiotic bacteria of the genus Spiroplasma have been known to cause early male killing. Here, we report that a spiroplasma strain normally causing early male killing also induces late male killing depending on the maternal host age: male-specific mortality of larvae and pupae was more frequently observed in the offspring of young females. As the lowest spiroplasma density and occasional male production were also associated with newly emerged females, we proposed the density-dependent hypothesis for the expression of early and late male-killing phenotypes. Our finding suggested that (1) early and late male-killing phenotypes can be caused by the same symbiont and probably by the same mechanism; (2) late male killing may occur as an attenuated expression of early male killing; (3) expression of early and late male-killing phenotypes may be dependent on the symbiont density, and thus, could potentially be affected by the host immunity and regulation; and (4) early male killing and late male killing could be alternative strategies adopted by microbial reproductive manipulators.

  1. Encircling the dark: constraining dark energy via cosmic density in spheres

    NASA Astrophysics Data System (ADS)

    Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.

    2016-08-01

    The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on wp and wa for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.

  2. A two-scale scattering model with application to the JONSWAP '75 aircraft microwave scatterometer experiment

    NASA Technical Reports Server (NTRS)

    Wentz, F. J.

    1977-01-01

    The general problem of bistatic scattering from a two scale surface was evaluated. The treatment was entirely two-dimensional and in a vector formulation independent of any particular coordinate system. The two scale scattering model was then applied to backscattering from the sea surface. In particular, the model was used in conjunction with the JONSWAP 1975 aircraft scatterometer measurements to determine the sea surface's two scale roughness distributions, namely the probability density of the large scale surface slope and the capillary wavenumber spectrum. Best fits yield, on the average, a 0.7 dB rms difference between the model computations and the vertical polarization measurements of the normalized radar cross section. Correlations between the distribution parameters and the wind speed were established from linear, least squares regressions.

  3. Terrestrial Ozone Depletion Due to a Milky Way Gamma-Ray Burst

    NASA Technical Reports Server (NTRS)

    Thomas, Brian C.; Jackman, Charles H.; Melott, Adrian L.; Laird, Claude M.; Stolarski, Richard S.; Gehrels, Neil; Cannizzo, John K.; Hogan, Daniel P.

    2005-01-01

    Based on cosmological rates, it is probable that at least once in the last Gy the Earth has been irradiated by a gamma-ray burst in our Galaxy from within 2 kpc. Using a two-dimensional atmospheric model we have computed the effects upon the Earth's atmosphere of one such burst. A ten second burst delivering 100 kJ/sq m to the Earth results in globally averaged ozone depletion of 35%, with depletion reaching 55% at some latitudes. Significant global depletion persists for over 5 years after the burst. This depletion would have dramatic implications for life since a 50% decrease in ozone column density results in approximately three times the normal UVB flux. Widespread extinctions are likely, based on extrapolation from UVB sensitivity of modern organisms.

  4. Football goal distributions and extremal statistics

    NASA Astrophysics Data System (ADS)

    Greenhough, J.; Birch, P. C.; Chapman, S. C.; Rowlands, G.

    2002-12-01

    We analyse the distributions of the number of goals scored by home teams, away teams, and the total scored in the match, in domestic football games from 169 countries between 1999 and 2001. The probability density functions (PDFs) of goals scored are too heavy-tailed to be fitted over their entire ranges by Poisson or negative binomial distributions which would be expected for uncorrelated processes. Log-normal distributions cannot include zero scores and here we find that the PDFs are consistent with those arising from extremal statistics. In addition, we show that it is sufficient to model English top division and FA Cup matches in the seasons of 1970/71-2000/01 on Poisson or negative binomial distributions, as reported in analyses of earlier seasons, and that these are not consistent with extremal statistics.

  5. A hybrid CS-SA intelligent approach to solve uncertain dynamic facility layout problems considering dependency of demands

    NASA Astrophysics Data System (ADS)

    Moslemipour, Ghorbanali

    2018-07-01

    This paper aims at proposing a quadratic assignment-based mathematical model to deal with the stochastic dynamic facility layout problem. In this problem, product demands are assumed to be dependent normally distributed random variables with known probability density function and covariance that change from period to period at random. To solve the proposed model, a novel hybrid intelligent algorithm is proposed by combining the simulated annealing and clonal selection algorithms. The proposed model and the hybrid algorithm are verified and validated using design of experiment and benchmark methods. The results show that the hybrid algorithm has an outstanding performance from both solution quality and computational time points of view. Besides, the proposed model can be used in both of the stochastic and deterministic situations.

  6. Physical mechanisms of timing jitter in photon detection by current-carrying superconducting nanowires

    NASA Astrophysics Data System (ADS)

    Sidorova, Mariia; Semenov, Alexej; Hübers, Heinz-Wilhelm; Charaev, Ilya; Kuzmin, Artem; Doerner, Steffen; Siegel, Michael

    2017-11-01

    We studied timing jitter in the appearance of photon counts in meandering nanowires with different fractional amounts of bends. Intrinsic timing jitter, which is the probability density function of the random time delay between photon absorption in a current-carrying superconducting nanowire and the appearance of the normal domain, reveals two different underlying physical mechanisms. In the deterministic regime, which is realized at large photon energies and large currents, jitter is controlled by the position-dependent detection threshold in straight parts of the meanders. It decreases with increasing current. At small photon energies, jitter increases and its current dependence disappears. In this probabilistic regime jitter is controlled by a Poisson process in which magnetic vortices jump randomly across the wire in areas adjacent to the bends.

  7. Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Osler, John C

    2010-12-01

    This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.

  8. Effects of environmental covariates and density on the catchability of fish populations and interpretation of catch per unit effort trends

    USGS Publications Warehouse

    Korman, Josh; Yard, Mike

    2017-01-01

    Quantifying temporal and spatial trends in abundance or relative abundance is required to evaluate effects of harvest and changes in habitat for exploited and endangered fish populations. In many cases, the proportion of the population or stock that is captured (catchability or capture probability) is unknown but is often assumed to be constant over space and time. We used data from a large-scale mark-recapture study to evaluate the extent of spatial and temporal variation, and the effects of fish density, fish size, and environmental covariates, on the capture probability of rainbow trout (Oncorhynchus mykiss) in the Colorado River, AZ. Estimates of capture probability for boat electrofishing varied 5-fold across five reaches, 2.8-fold across the range of fish densities that were encountered, 2.1-fold over 19 trips, and 1.6-fold over five fish size classes. Shoreline angle and turbidity were the best covariates explaining variation in capture probability across reaches and trips. Patterns in capture probability were driven by changes in gear efficiency and spatial aggregation, but the latter was more important. Failure to account for effects of fish density on capture probability when translating a historical catch per unit effort time series into a time series of abundance led to 2.5-fold underestimation of the maximum extent of variation in abundance over the period of record, and resulted in unreliable estimates of relative change in critical years. Catch per unit effort surveys have utility for monitoring long-term trends in relative abundance, but are too imprecise and potentially biased to evaluate population response to habitat changes or to modest changes in fishing effort.

  9. Wavefronts, actions and caustics determined by the probability density of an Airy beam

    NASA Astrophysics Data System (ADS)

    Espíndola-Ramos, Ernesto; Silva-Ortigoza, Gilberto; Sosa-Sánchez, Citlalli Teresa; Julián-Macías, Israel; de Jesús Cabrera-Rosas, Omar; Ortega-Vidals, Paula; Alejandro Juárez-Reyes, Salvador; González-Juárez, Adriana; Silva-Ortigoza, Ramón

    2018-07-01

    The main contribution of the present work is to use the probability density of an Airy beam to identify its maxima with the family of caustics associated with the wavefronts determined by the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a given potential. To this end, we give a classical mechanics characterization of a solution of the one-dimensional Schrödinger equation in free space determined by a complete integral of the Hamilton–Jacobi and Laplace equations in free space. That is, with this type of solution, we associate a two-parameter family of wavefronts in the spacetime, which are the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a determined potential, and a one-parameter family of caustics. The general results are applied to an Airy beam to show that the maxima of its probability density provide a discrete set of: caustics, wavefronts and potentials. The results presented here are a natural generalization of those obtained by Berry and Balazs in 1979 for an Airy beam. Finally, we remark that, in a natural manner, each maxima of the probability density of an Airy beam determines a Hamiltonian system.

  10. Buried landmine detection using multivariate normal clustering

    NASA Astrophysics Data System (ADS)

    Duston, Brian M.

    2001-10-01

    A Bayesian classification algorithm is presented for discriminating buried land mines from buried and surface clutter in Ground Penetrating Radar (GPR) signals. This algorithm is based on multivariate normal (MVN) clustering, where feature vectors are used to identify populations (clusters) of mines and clutter objects. The features are extracted from two-dimensional images created from ground penetrating radar scans. MVN clustering is used to determine the number of clusters in the data and to create probability density models for target and clutter populations, producing the MVN clustering classifier (MVNCC). The Bayesian Information Criteria (BIC) is used to evaluate each model to determine the number of clusters in the data. An extension of the MVNCC allows the model to adapt to local clutter distributions by treating each of the MVN cluster components as a Poisson process and adaptively estimating the intensity parameters. The algorithm is developed using data collected by the Mine Hunter/Killer Close-In Detector (MH/K CID) at prepared mine lanes. The Mine Hunter/Killer is a prototype mine detecting and neutralizing vehicle developed for the U.S. Army to clear roads of anti-tank mines.
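
    As an illustration of the clustering-plus-BIC step described above, the following sketch fits multivariate normal mixtures of increasing order to synthetic two-dimensional feature vectors and selects the order by BIC, using scikit-learn's GaussianMixture as a stand-in for the MVNCC; the adaptive Poisson-intensity extension is not reproduced, and the data are fabricated rather than GPR features.

```python
# Illustrative sketch of multivariate-normal clustering with BIC model
# selection, in the spirit of the MVNCC described above; uses synthetic
# 2-D feature vectors rather than real GPR features.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# two synthetic populations, e.g. "mine-like" and "clutter-like" features
X = np.vstack([rng.normal([0, 0], 0.5, size=(200, 2)),
               rng.normal([3, 2], 0.8, size=(300, 2))])

# fit MVN mixtures with 1..6 components and keep the BIC-optimal model
models = [GaussianMixture(n_components=k, covariance_type='full',
                          random_state=0).fit(X) for k in range(1, 7)]
bics = [m.bic(X) for m in models]
best = models[int(np.argmin(bics))]

print("BIC per k:", dict(enumerate(bics, start=1)))
print("selected number of clusters:", best.n_components)
# posterior cluster membership probabilities for a new feature vector
print("P(cluster | x=[2.5, 1.5]):", best.predict_proba([[2.5, 1.5]])[0])
```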

  11. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    NASA Astrophysics Data System (ADS)

    Figueroa, Aldo; Meunier, Patrice; Cuevas, Sergio; Villermaux, Emmanuel; Ramos, Eduardo

    2014-01-01

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM) which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows to predict the PDFs of scalar in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar are asymptotically close to log-normal at late stages, except for the large concentration levels which correspond to low stretching factors.

  12. A Skill Score of Trajectory Model Evaluation Using Reinitialized Series of Normalized Cumulative Lagrangian Separation

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Weisberg, R. H.

    2017-12-01

    The Lagrangian separation distance between the endpoints of simulated and observed drifter trajectories is often used to assess the performance of numerical particle trajectory models. However, the separation distance fails to indicate relative model performance in weak and strong current regions, such as a continental shelf and its adjacent deep ocean. A skill score is proposed based on the cumulative Lagrangian separation distances normalized by the associated cumulative trajectory lengths. The new metrics correctly indicates the relative performance of the Global HYCOM in simulating the strong currents of the Gulf of Mexico Loop Current and the weaker currents of the West Florida Shelf in the eastern Gulf of Mexico. In contrast, the Lagrangian separation distance alone gives a misleading result. Also, the observed drifter position series can be used to reinitialize the trajectory model and evaluate its performance along the observed trajectory, not just at the drifter end position. The proposed dimensionless skill score is particularly useful when the number of drifter trajectories is limited and neither a conventional Eulerian-based velocity nor a Lagrangian-based probability density function may be estimated.
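
    The skill score can be sketched numerically. The version below assumes one common form, in which the cumulative separation distance is divided by the cumulative length of the observed trajectory and mapped to a score between 0 and 1 with a tolerance threshold n (taken as 1 here); the drifter track is fabricated for illustration.

```python
# Hedged sketch of a trajectory skill score based on cumulative Lagrangian
# separation normalized by cumulative observed-trajectory length
# (one common form; the tolerance threshold n is often taken as 1).
import numpy as np

def skill_score(obs, sim, n=1.0):
    """obs, sim: (T, 2) arrays of positions at matching times."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    d = np.linalg.norm(sim - obs, axis=1)            # separation at each time
    seg = np.linalg.norm(np.diff(obs, axis=0), axis=1)
    l = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative path length
    c = d[1:].sum() / l[1:].sum()                    # normalized cumulative separation
    return max(0.0, 1.0 - c / n)

# toy example: simulated drifter drifts slightly off the observed track
t = np.linspace(0.0, 1.0, 25)
obs = np.column_stack([t, np.zeros_like(t)])
sim = np.column_stack([t, 0.05 * t])                 # growing cross-track error
print("skill score:", round(skill_score(obs, sim), 3))
```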

  13. 49 CFR 173.50 - Class 1-Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... insensitive that there is very little probability of initiation or of transition from burning to detonation under normal conditions of transport. 1 The probability of transition from burning to detonation is... contain only extremely insensitive detonating substances and which demonstrate a negligible probability of...

  14. Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.

    PubMed

    Guo, Lian; Radisic, Aleksandar; Searson, Peter C

    2005-12-22

    Nucleation and growth during bulk electrodeposition is studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.
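
    A minimal sketch of the attachment-probability idea follows, with assumed values for the metal-on-substrate and metal-on-metal probabilities and with the Brownian-dynamics ion transport of the study omitted; it tracks how the island density evolves as deposition proceeds on a 1-D lattice.

```python
# Minimal 1-D-lattice Monte Carlo deposition sketch: ions arrive at random
# sites and stick with different probabilities on bare substrate versus
# on already-deposited metal (the authors' model also couples Brownian
# ion transport, which is omitted here).
import numpy as np

rng = np.random.default_rng(2)
L = 2000                      # number of substrate sites
p_sub, p_met = 0.01, 0.8      # attachment probabilities (assumed values)
height = np.zeros(L, int)     # deposit height per site

island_density = []
for step in range(200_000):
    site = rng.integers(L)
    p = p_met if height[site] > 0 else p_sub
    if rng.random() < p:
        height[site] += 1
    if step % 20_000 == 0:
        occupied = height > 0
        # count islands = runs of contiguous occupied sites
        islands = np.count_nonzero(np.diff(occupied.astype(int)) == 1) \
                  + int(occupied[0])
        island_density.append(islands / L)

print("island density vs. time:", island_density)
```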

  15. Re‐estimated effects of deep episodic slip on the occurrence and probability of great earthquakes in Cascadia

    USGS Publications Warehouse

    Beeler, Nicholas M.; Roeloffs, Evelyn A.; McCausland, Wendy

    2013-01-01

    Mazzotti and Adams (2004) estimated that rapid deep slip during typically two week long episodes beneath northern Washington and southern British Columbia increases the probability of a great Cascadia earthquake by 30–100 times relative to the probability during the ∼58 weeks between slip events. Because the corresponding absolute probability remains very low at ∼0.03% per week, their conclusion is that though it is more likely that a great earthquake will occur during a rapid slip event than during other times, a great earthquake is unlikely to occur during any particular rapid slip event. This previous estimate used a failure model in which great earthquakes initiate instantaneously at a stress threshold. We refine the estimate, assuming a delayed failure model that is based on laboratory‐observed earthquake initiation. Laboratory tests show that failure of intact rock in shear and the onset of rapid slip on pre‐existing faults do not occur at a threshold stress. Instead, slip onset is gradual and shows a damped response to stress and loading rate changes. The characteristic time of failure depends on loading rate and effective normal stress. Using this model, the probability enhancement during the period of rapid slip in Cascadia is negligible (<10%) for effective normal stresses of 10 MPa or more and only increases by 1.5 times for an effective normal stress of 1 MPa. We present arguments that the hypocentral effective normal stress exceeds 1 MPa. In addition, the probability enhancement due to rapid slip extends into the interevent period. With this delayed failure model for effective normal stresses greater than or equal to 50 kPa, it is more likely that a great earthquake will occur between the periods of rapid deep slip than during them. Our conclusion is that great earthquake occurrence is not significantly enhanced by episodic deep slip events.

  16. Expression of a partially deleted gene of human type II procollagen (COL2A1) in transgenic mice produces a chondrodysplasia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vandenberg, P.; Khillan, J.S.; Prockop, D.J.

    A minigene version of the human gene for type II procollagen (COL2A1) was prepared that lacked a large central region containing 12 of the 52 exons and therefore 291 of the 1523 codons of the gene. The construct was modeled after sporadic in-frame deletions of collagen genes that cause synthesis of shortened proα chains that associate with normal proα chains and thereby cause degradation of the shortened and normal proα chains through a process called procollagen suicide. The gene construct was used to prepare five lines of transgenic mice expressing the minigene. A large proportion of the mice expressing the minigene developed a phenotype of a chondrodysplasia with dwarfism, short and thick limbs, a short snout, a cranial bulge, a cleft palate, and delayed mineralization of bone. A number of mice died shortly after birth. Microscopic examination of cartilage revealed decreased density and organization of collagen fibrils. In cultured chondrocytes from the transgenic mice, the minigene was expressed as shortened proα1(II) chains that were disulfide-linked to normal mouse proα1(II) chains. Therefore, the phenotype is probably explained by depletion of the endogenous mouse type II procollagen through the phenomenon of procollagen suicide.

  17. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
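
    The Monte Carlo step can be sketched as follows, with every input distribution, the logistic detector curve, and the spherical-spreading transmission loss standing in as placeholder assumptions for the literature-derived inputs and propagation modeling of the study.

```python
# Hedged Monte Carlo sketch of single-sensor detection probability as a
# function of range, following the passive sonar equation; all input
# distributions and the logistic detector curve are assumed placeholders,
# and spherical spreading stands in for real propagation modeling.
import numpy as np

rng = np.random.default_rng(3)

def p_detect_at_range(r_m, n=20_000):
    SL = rng.normal(200.0, 5.0, n)            # source level, dB re 1 uPa
    off_axis_loss = rng.uniform(0.0, 30.0, n) # crude stand-in for beam pattern
    NL = rng.normal(60.0, 3.0, n)             # noise level, dB
    TL = 20.0 * np.log10(r_m)                 # spherical spreading
    snr = SL - off_axis_loss - TL - NL
    # detector characterization: P(detect | SNR), assumed logistic
    p_det = 1.0 / (1.0 + np.exp(-(snr - 10.0) / 2.0))
    return p_det.mean()

for r in (500, 1000, 2000, 4000, 8000):
    print(f"range {r:5d} m  P(detect) ~ {p_detect_at_range(r):.3f}")
```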

  18. Antidiabetic, antihyperlipidemic, and antioxidant activities of Musa balbisiana Colla. in Type 1 diabetic rats.

    PubMed

    Borah, Mukundam; Das, Swarnamoni

    2017-01-01

    To evaluate the antidiabetic, antihyperlipidemic, and antioxidant activities of the ethanolic extracts of the flowers and inflorescence stalk of Musa balbisiana Colla. in streptozotocin (STZ)-induced Type 1 diabetic rats. Diabetes was induced in male Wistar albino rats (150-200 g) by a single intraperitoneal injection of STZ (60 mg/kg b.w. i.p.). Albino rats (n = 25) were divided into five groups of five animals each. Group A (normal control) and Group B (diabetic control) received normal saline (10 ml/kg/day p.o.), whereas Group C and Group D received 250 mg/kg/day p.o. of flower and inflorescence stalk ethanolic extracts, respectively, for 2 weeks. Group E (diabetic standard) received 6 U/kg/day s.c. of Neutral Protamine Hagedorn insulin. Fasting blood sugar, serum insulin, catalase (CAT), malondialdehyde (MDA), and serum lipid profile were estimated at specific intervals of time. The effect of the extracts on intestinal glucose absorption was also evaluated to explore the probable mechanism of action. The diabetic control group exhibited significant increases in blood glucose, serum cholesterol, triglycerides, low-density lipoprotein, and serum MDA levels and decreased serum CAT and high-density lipoprotein levels, which were significantly reverted by the flower and inflorescence stalk ethanolic extracts after 2 weeks. Serum insulin levels increased (P < 0.05), and intestinal glucose absorption decreased significantly (P < 0.01), in the extract-treated groups. The flower and inflorescence stalk of M. balbisiana Colla. possess significant antidiabetic, antihyperlipidemic, and antioxidant activities in STZ-induced Type 1 diabetic rats.

  19. Selective reduction in cortical bone mineral density in turner syndrome independent of ovarian hormone deficiency.

    PubMed

    Bakalov, Vladimir K; Axelrod, Lauren; Baron, Jeffrey; Hanton, Lori; Nelson, Lawrence M; Reynolds, James C; Hill, Suvimol; Troendle, James; Bondy, Carolyn A

    2003-12-01

    Women with Turner syndrome (TS) are at risk for osteoporosis from ovarian failure and possibly from haploinsufficiency for bone-related X-chromosome genes. To establish whether cortical or trabecular bone is predominantly affected, and to control for the ovarian failure, we studied forearm bone mineral density (BMD) in 41 women with TS ages 18-45 yr and in 35 age-matched women with karyotypically normal premature ovarian failure (POF). We measured BMD at the 1/3 distal radius (D-Rad(1/3); predominantly cortical bone) and at the ultradistal radius (UD-Rad; predominantly trabecular bone) by dual x-ray absorptiometry. Women with TS had lower cortical BMD compared with POF (D-Rad(1/3) Z-score = -1.5 +/- 0.8 for TS and 0.08 +/- 0.7 for POF; P < 0.0001). In contrast, the primarily trabecular UD-Rad BMD was normal in TS and not significantly different from POF (Z-score = -0.62 +/- 1.1 for TS and -0.34 +/- 1.0 for POF; P = 0.26). The difference in cortical BMD remained after adjustment for height, age of puberty, lifetime estrogen exposure, and serum 25-hydroxyvitamin D (P = 0.0013). Cortical BMD was independent of serum IGF-I and -II, PTH, and testosterone in TS. We conclude that there is a selective deficiency in forearm cortical bone in TS that appears independent of ovarian hormone exposure and is probably related to X-chromosome gene(s) haploinsufficiency.

  20. Oak regeneration and overstory density in the Missouri Ozarks

    Treesearch

    David R. Larsen; Monte A. Metzger

    1997-01-01

    Reducing overstory density is a commonly recommended method of increasing the regeneration potential of oak (Quercus) forests. However, recommendations seldom specify the probable increase in density or the size of reproduction associated with a given residual overstory density. This paper presents logistic regression models that describe this...

  1. A computer simulated phantom study of tomotherapy dose optimization based on probability density functions (PDF) and potential errors caused by low reproducibility of PDF.

    PubMed

    Sheng, Ke; Cai, Jing; Brookeman, James; Molloy, Janelle; Christopher, John; Read, Paul

    2006-09-01

    Lung tumor motion trajectories measured by four-dimensional CT or dynamic MRI can be converted to a probability density function (PDF), which describes the probability of the tumor being at a certain position, for PDF-based treatment planning. Using this method in simulated sequential tomotherapy, we study the dose reduction to normal tissues and, more importantly, the effect of PDF reproducibility on the accuracy of dosimetry. For these purposes, realistic PDFs were obtained from two dynamic MRI scans of a healthy volunteer within a 2 week interval. The first PDF was accumulated from a 300 s scan and the second PDF was calculated from variable scan times from 5 s (one breathing cycle) to 300 s. Optimized beam fluences based on the second PDF were delivered to the hypothetical gross target volume (GTV) of a lung phantom that moved following the first PDF. The reproducibility between the two PDFs varied from low (78%) to high (94.8%) as the second scan time increased from 5 s to 300 s. When a highly reproducible PDF was used in optimization, the dose coverage of the GTV was maintained; the phantom lung volume receiving 10%-20% of the prescription dose was reduced by 40%-50% and the mean phantom lung dose was reduced by 9.6%. However, optimization based on a PDF with low reproducibility resulted in a 50% underdosed GTV. The dosimetric error increased nearly exponentially as the PDF error increased. Therefore, although the dose to tissue surrounding the tumor can in theory be reduced by PDF-based treatment planning, the reliability and applicability of this method depend strongly on whether a reproducible PDF exists and is measurable. By correlating the dosimetric error with the PDF error, a useful guideline for PDF data acquisition and patient qualification for PDF-based planning can be derived.
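
    As a rough illustration of converting motion traces to PDFs and scoring their reproducibility, the sketch below histograms two fabricated breathing traces (a 5 s and a 300 s acquisition) and reports their overlap; the paper's exact reproducibility metric is not specified here, so histogram overlap is an assumed proxy.

```python
# Illustrative sketch: turn two tumor-motion traces into positional PDFs
# and score their reproducibility as histogram overlap (the paper's exact
# reproducibility metric is not given here, so overlap is an assumption).
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(0.0, 300.0, 0.1)                    # 300 s sampled at 10 Hz
trace1 = 10.0 * np.abs(np.sin(2 * np.pi * t / 4.0)) + rng.normal(0, 0.3, t.size)
trace2 = 9.0 * np.abs(np.sin(2 * np.pi * t / 4.2)) + rng.normal(0, 0.3, t.size)

bins = np.linspace(-2.0, 14.0, 65)
pdf1, _ = np.histogram(trace1, bins=bins, density=True)       # 300 s scan
pdf2, _ = np.histogram(trace2[:50], bins=bins, density=True)  # short 5 s scan

width = np.diff(bins)
overlap = np.sum(np.minimum(pdf1, pdf2) * width)  # 1.0 = perfectly reproducible
print(f"PDF overlap (5 s vs 300 s): {overlap:.2f}")
```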

  2. A computer simulated phantom study of tomotherapy dose optimization based on probability density functions (PDF) and potential errors caused by low reproducibility of PDF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Ke; Cai, Jing; Brookeman, James

    2006-09-15

    Lung tumor motion trajectories measured by four-dimensional CT or dynamic MRI can be converted to a probability density function (PDF), which describes the probability of the tumor being at a certain position, for PDF-based treatment planning. Using this method in simulated sequential tomotherapy, we study the dose reduction to normal tissues and, more importantly, the effect of PDF reproducibility on the accuracy of dosimetry. For these purposes, realistic PDFs were obtained from two dynamic MRI scans of a healthy volunteer within a 2 week interval. The first PDF was accumulated from a 300 s scan and the second PDF was calculated from variable scan times from 5 s (one breathing cycle) to 300 s. Optimized beam fluences based on the second PDF were delivered to the hypothetical gross target volume (GTV) of a lung phantom that moved following the first PDF. The reproducibility between the two PDFs varied from low (78%) to high (94.8%) as the second scan time increased from 5 s to 300 s. When a highly reproducible PDF was used in optimization, the dose coverage of the GTV was maintained; the phantom lung volume receiving 10%-20% of the prescription dose was reduced by 40%-50% and the mean phantom lung dose was reduced by 9.6%. However, optimization based on a PDF with low reproducibility resulted in a 50% underdosed GTV. The dosimetric error increased nearly exponentially as the PDF error increased. Therefore, although the dose to tissue surrounding the tumor can in theory be reduced by PDF-based treatment planning, the reliability and applicability of this method depend strongly on whether a reproducible PDF exists and is measurable. By correlating the dosimetric error with the PDF error, a useful guideline for PDF data acquisition and patient qualification for PDF-based planning can be derived.

  3. Aging ballistic Lévy walks

    NASA Astrophysics Data System (ADS)

    Magdziarz, Marcin; Zorawik, Tomasz

    2017-02-01

    Aging can be observed for numerous physical systems. In such systems statistical properties [like the probability distribution, mean square displacement (MSD), and first-passage time] depend on the time span t_a between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite similarities these models react very differently to the delay t_a. Aging weakly affects the shape of the probability density function and the MSD of standard Lévy walks. For the jump models the shape of the probability density function changes drastically. Moreover, for the wait-first jump model we observe a different behavior of the MSD when t_a ≪ t and when t_a ≫ t.

  4. Low Bone Density

    MedlinePlus

    Low bone density is when your bone ... to people with normal bone density. Detecting Low Bone Density: A bone density test will determine whether ...

  5. On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Black, D. C.

    2001-01-01

    We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of respective distributions reveals that in all cases EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can be best characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2 turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.

  6. Benchmarks for detecting 'breakthroughs' in clinical trials: empirical assessment of the probability of large treatment effects using kernel density estimation.

    PubMed

    Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin

    2014-10-21

    To understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed. We applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups. 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting a treatment with large effects is 10% (5-25%), and that the probability of detecting a treatment with very large effects is 2% (0.3-10%). Researchers themselves judged that they discovered a new, breakthrough intervention in 16% of trials. We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
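
    The density-estimation step can be sketched with scipy's weighted gaussian_kde, which uses a fixed bandwidth rather than the weighted *adaptive* estimator of the study; the effect sizes, weights, and 'large effect' threshold below are all synthetic stand-ins.

```python
# Hedged sketch of the paper's idea: estimate the density of observed
# treatment effects and read off tail probabilities. scipy's weighted
# gaussian_kde (fixed bandwidth) stands in for the weighted adaptive
# kernel estimator used in the study; data and threshold are synthetic.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
# synthetic log-hazard-ratio-like treatment effects, weighted by trial size
effects = rng.normal(0.0, 0.3, 500)
weights = rng.integers(50, 2000, 500).astype(float)

kde = gaussian_kde(effects, weights=weights)

# probability of a "large" beneficial effect, here taken as effect < -0.5
p_large = kde.integrate_box_1d(-np.inf, -0.5)
print(f"P(large effect): {p_large:.3f}")
```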

  7. On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1995-01-01

    For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighting of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.

  8. Propensity, Probability, and Quantum Theory

    NASA Astrophysics Data System (ADS)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes' theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  9. Use of generalized population ratios to obtain Fe XV line intensities and linewidths at high electron densities

    NASA Technical Reports Server (NTRS)

    Kastner, S. O.; Bhatia, A. K.

    1980-01-01

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 Å, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.

  10. Use of generalized population ratios to obtain Fe XV line intensities and linewidths at high electron densities

    NASA Astrophysics Data System (ADS)

    Kastner, S. O.; Bhatia, A. K.

    1980-08-01

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 Å, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.

  11. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface]

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

  12. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  13. Effect of Non-speckle Echo Signals on Tissue Characteristics for Liver Fibrosis using Probability Density Function of Ultrasonic B-mode image

    NASA Astrophysics Data System (ADS)

    Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki

    To develop a quantitative diagnostic method for liver fibrosis using ultrasound B-mode images, a probability imaging method for tissue characteristics based on a multi-Rayleigh model, which expresses the probability density function of echo signals from fibrotic liver, has been proposed. In this paper, the effect of non-speckle echo signals on the tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were identified and removed using the modeling error of the multi-Rayleigh model. With non-speckle signals removed, the correct tissue characteristics of fibrotic tissue could be estimated.
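
    A minimal stand-in for the multi-Rayleigh model is a two-component Rayleigh mixture fit by maximum likelihood, as sketched below on synthetic amplitudes; the published model and its modeling-error test for non-speckle signals are more elaborate than this illustration.

```python
# Minimal two-component Rayleigh mixture ("multi-Rayleigh") sketch: the
# amplitude PDF is a weighted sum of Rayleigh densities, fit here by
# maximum likelihood on synthetic echo amplitudes (illustration only).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rayleigh

# synthetic echoes: 70% "normal" tissue, 30% "fibrotic" (larger scale)
amps = np.concatenate([rayleigh.rvs(scale=1.0, size=700, random_state=1),
                       rayleigh.rvs(scale=2.5, size=300, random_state=2)])

def neg_log_lik(theta):
    w = 1.0 / (1.0 + np.exp(-theta[0]))          # mixing weight in (0, 1)
    s1, s2 = np.exp(theta[1]), np.exp(theta[2])  # positive scale parameters
    pdf = w * rayleigh.pdf(amps, scale=s1) + (1 - w) * rayleigh.pdf(amps, scale=s2)
    return -np.sum(np.log(pdf + 1e-300))

res = minimize(neg_log_lik, x0=[0.0, 0.0, 1.0], method='Nelder-Mead')
w = 1.0 / (1.0 + np.exp(-res.x[0]))
print("weight:", round(w, 2), "scales:", np.exp(res.x[1:]).round(2))
```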

  14. Bivariate categorical data analysis using normal linear conditional multinomial probability model.

    PubMed

    Sun, Bingrui; Sutradhar, Brajendra

    2015-02-10

    Bivariate multinomial data such as the left and right eyes retinopathy status data are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities, which are complicated functions of marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this latter odds-ratio-based model does not provide any easy interpretation of the correlations between the two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.

  15. Laboratory-Tutorial Activities for Teaching Probability

    ERIC Educational Resources Information Center

    Wittmann, Michael C.; Morgan, Jeffrey T.; Feeley, Roger E.

    2006-01-01

    We report on the development of students' ideas of probability and probability density in a University of Maine laboratory-based general education physics course called "Intuitive Quantum Physics". Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We…

  16. Stream permanence influences crayfish occupancy and abundance in the Ozark Highlands, USA

    USGS Publications Warehouse

    Yarra, Allyson N.; Magoulick, Daniel D.

    2018-01-01

    Crayfish use of intermittent streams is especially important to understand in the face of global climate change. We examined the influence of stream permanence and local habitat on crayfish occupancy and species densities in the Ozark Highlands, USA. We sampled in June and July 2014 and 2015. We used a quantitative kick–seine method to sample crayfish presence and abundance at 20 stream sites with 32 surveys/site in the Upper White River drainage, and we measured associated local environmental variables each year. We modeled site occupancy and detection probabilities with the software PRESENCE, and we used multiple linear regressions to identify relationships between crayfish species densities and environmental variables. Occupancy of all crayfish species was related to stream permanence. Faxonius meeki was found exclusively in intermittent streams, whereas Faxonius neglectus and Faxonius luteus had higher occupancy and detection probability in permanent than in intermittent streams, and Faxonius williamsi was associated with intermittent streams. Estimates of detection probability ranged from 0.56 to 1, which is high relative to values found by other investigators. With the exception of F. williamsi, species densities were largely related to stream permanence rather than local habitat. Species densities did not differ by year, but total crayfish densities were significantly lower in 2015 than 2014. Increased precipitation and discharge in 2015 probably led to the lower crayfish densities observed during this year. Our study demonstrates that crayfish distribution and abundance are strongly influenced by stream permanence. Some species, including those of conservation concern (i.e., F. williamsi, F. meeki), appear dependent on intermittent streams, and conservation efforts should include consideration of intermittent streams as an important component of freshwater biodiversity.

  17. Receiver function and gravity constraints on crustal structure and vertical movements of the Upper Mississippi Embayment and Ozark Uplift

    NASA Astrophysics Data System (ADS)

    Liu, Lin; Gao, Stephen S.; Liu, Kelly H.; Mickus, Kevin

    2017-06-01

    The Upper Mississippi Embayment (UME), where the seismically active New Madrid Seismic Zone resides, experienced two phases of subsidence commencing in the Late Precambrian and Cretaceous, respectively. To provide new constraints on models proposed for the mechanisms responsible for the subsidence, we computed and stacked P-to-S receiver functions recorded by 49 USArray and other seismic stations located in the UME and the adjacent Ozark Uplift and modeled Bouguer gravity anomaly data. The inferred thickness, density, and Vp/Vs of the upper and lower crustal layers suggest that the UME is characterized by a mafic and high-density upper crustal layer of ˜30 km thickness, which is underlain by a higher-density lower crustal layer of up to ˜15 km. Those measurements, in the background of previously published geological observations on the subsidence and uplift history of the UME, are in agreement with the model that the Cretaceous subsidence, which was suggested to be preceded by an approximately 2 km uplift, was the consequence of the passage of a previously proposed thermal plume. The thermoelastic effects of the plume would have induced wide-spread intrusion of mafic mantle material into the weak UME crust fractured by Precambrian rifting and increased its density, resulting in renewed subsidence after the thermal source was removed. In contrast, the Ozark Uplift has crustal density, thickness, and Vp/Vs measurements that are comparable to those observed on cratonic areas, suggesting an overall normal crust without significant modification by the proposed plume, probably owing to the relatively strong and thick lithosphere.

  18. A very efficient approach to compute the first-passage probability density function in a time-changed Brownian model: Applications in finance

    NASA Astrophysics Data System (ADS)

    Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide

    2016-12-01

    We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
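
    As a cross-check of what such a first-passage density looks like, the sketch below estimates one by direct Monte Carlo for the plain (untransformed) Brownian case, where the density is known in closed form; this is not the authors' Volterra-equation method, and all parameter values are assumed.

```python
# Monte Carlo cross-check (not the paper's Volterra-equation method):
# first-passage times of a driftless Brownian motion from x0 > 0 to the
# barrier at 0, whose density is the closed-form Levy density
#   f(t) = x0 / sqrt(2*pi*t**3) * exp(-x0**2 / (2*t)).
import numpy as np

rng = np.random.default_rng(7)
x0, dt, T, n = 1.0, 1e-3, 5.0, 20_000
steps = int(T / dt)

x = np.full(n, x0)
hit = np.full(n, np.inf)                      # first-passage times
for k in range(1, steps + 1):
    x += np.sqrt(dt) * rng.normal(size=n)
    newly = np.isinf(hit) & (x <= 0.0)
    hit[newly] = k * dt                       # finite dt adds slight overshoot bias

counts, edges = np.histogram(hit[np.isfinite(hit)], bins=25, range=(0.0, T))
centers = 0.5 * (edges[:-1] + edges[1:])
f_mc = counts / (n * np.diff(edges))          # estimated density over all paths
f_exact = x0 / np.sqrt(2 * np.pi * centers**3) * np.exp(-x0**2 / (2 * centers))
print(np.c_[centers[:5], f_mc[:5].round(3), f_exact[:5].round(3)])
```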

  19. Committor of elementary reactions on multistate systems

    NASA Astrophysics Data System (ADS)

    Király, Péter; Kiss, Dóra Judit; Tóth, Gergely

    2018-04-01

    In our study, we extend the committor concept to multi-minima systems, where more than one reaction may proceed, but feasible data evaluation requires projection onto partial reactions. The elementary reaction committor and the corresponding probability density of the reactive trajectories are defined and calculated on a three-hole two-dimensional model system explored by single-particle Langevin dynamics. We propose a method to visualize several elementary reaction committor functions or probability densities of reactive trajectories on a single plot, which helps to identify the most important reaction channels and the nonreactive domains simultaneously. We suggest a weighting for the energy-committor plots that correctly shows the limits of both the minimal energy path and the average energy concepts. The methods also performed well in the analysis of molecular dynamics trajectories of 2-chlorobutane, where an elementary reaction committor, the probability densities, the potential energy/committor, and the free-energy/committor curves are presented.

  20. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
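
    In the special case of a Gaussian prior, the MRE update has a closed form that makes the idea concrete: tilting the prior to match the data expectation shifts the mean and leaves the covariance unchanged. The sketch below uses that case with fabricated G and d; the bounded priors treated by the MATLAB implementation require the numerical machinery described in the paper.

```python
# Sketch of the MRE idea in the analytically tractable Gaussian case:
# minimizing relative entropy to a Gaussian prior N(m0, C) subject only
# to E[G m] = d tilts the prior by exp(lam' G m), which shifts the mean
# and leaves the covariance unchanged. G, d, m0, C are fabricated.
import numpy as np

rng = np.random.default_rng(8)
G = rng.normal(size=(5, 20))          # forward operator (underdetermined)
m_true = np.maximum(rng.normal(1.0, 0.5, 20), 0.0)
d = G @ m_true                        # noise-free synthetic data

m0 = np.full(20, 1.0)                 # prior expected model
C = 0.25 * np.eye(20)                 # prior covariance

lam = np.linalg.solve(G @ C @ G.T, d - G @ m0)   # Lagrange multipliers
m_hat = m0 + C @ G.T @ lam            # MRE posterior mean

print("data misfit:", np.linalg.norm(G @ m_hat - d))   # ~ 0 (exact fit)
print("model estimate (first 5):", m_hat[:5].round(3))
```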

  1. Using areas of known occupancy to identify sources of variation in detection probability of raptors: taking time lowers replication effort for surveys.

    PubMed

    Murn, Campbell; Holloway, Graham J

    2016-10-01

    Species occurring at low density can be difficult to detect and, if not properly accounted for, imperfect detection will lead to inaccurate estimates of occupancy. Understanding sources of variation in detection probability and how they can be managed is a key part of monitoring. We used sightings data of a low-density and elusive raptor (white-headed vulture, Trigonoceps occipitalis) in areas of known occupancy (breeding territories) in a likelihood-based modelling approach to calculate detection probability and the factors affecting it. Because occupancy was known a priori to be 100%, we fixed the model occupancy parameter to 1.0 and focused on identifying sources of variation in detection probability. Using detection histories from 359 territory visits, we assessed nine covariates in 29 candidate models. The model with the highest support indicated that observer speed during a survey, combined with temporal covariates such as time of year and length of time within a territory, had the highest influence on the detection probability. Averaged detection probability was 0.207 (s.e. 0.033), and based on this the mean number of visits required to determine with 95% confidence that white-headed vultures are absent from a breeding area is 13 (95% CI: 9-20). Topographical and habitat covariates contributed little to the best models and had little effect on detection probability. We highlight that the low detection probabilities of some species mean that emphasizing habitat covariates could lead to spurious results in occupancy models that do not also incorporate temporal components. While variation in detection probability is complex and influenced by effects at both temporal and spatial scales, temporal covariates can and should be controlled as part of robust survey methods. Our results emphasize the importance of accounting for detection probability in occupancy studies, particularly during presence/absence studies for species such as raptors that are widespread and occur at low densities.
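
    The reported figure of 13 visits follows from requiring that the chance of missing the species on every visit fall below 5%, i.e., (1 - p)^n ≤ 0.05 at the averaged detection probability, as the short check below shows.

```python
# Worked check of the abstract's figure: with detection probability
# p = 0.207 per visit, find the smallest n with (1 - p)**n <= 0.05,
# i.e., 95% confidence of absence after n non-detections.
import math

p = 0.207
n = math.ceil(math.log(0.05) / math.log(1.0 - p))
print(n)  # -> 13, matching the mean estimate reported above
```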

  2. SU-G-JeP2-02: A Unifying Multi-Atlas Approach to Electron Density Mapping Using Multi-Parametric MRI for Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, S; Tianjin University, Tianjin; Hara, W

    Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck remains: the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1 and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1 and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU > 200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
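
    The per-voxel fusion can be illustrated with the simplest possible stand-in: if both conditional densities are modeled as Gaussians (an assumption made here purely for illustration, with fabricated numbers), the posterior mean is the precision-weighted average of the two estimates.

```python
# Toy per-voxel illustration of the Bayesian fusion described above:
# if both conditionals are modeled as Gaussians (an assumption here),
# their product posterior has the familiar precision-weighted mean.
import numpy as np

# p(HU | T1, T2 intensity): say the intensity model suggests soft tissue
mu_int, var_int = 40.0, 80.0**2
# p(HU | spatial location in reference anatomy): atlas says near bone
mu_loc, var_loc = 700.0, 150.0**2

w_int, w_loc = 1.0 / var_int, 1.0 / var_loc           # precisions
mu_post = (w_int * mu_int + w_loc * mu_loc) / (w_int + w_loc)
var_post = 1.0 / (w_int + w_loc)

print(f"posterior HU estimate: {mu_post:.0f} +/- {np.sqrt(var_post):.0f}")
```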

  3. Can we estimate molluscan abundance and biomass on the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.

    2017-11-01

    Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.

  4. Identification of cloud fields by the nonparametric algorithm of pattern recognition from normalized video data recorded with the AVHRR instrument

    NASA Astrophysics Data System (ADS)

    Protasov, Konstantin T.; Pushkareva, Tatyana Y.; Artamonov, Evgeny S.

    2002-02-01

    The problem of cloud field recognition from NOAA satellite data is urgent not only for meteorological problems but also for resource-ecological monitoring of the Earth's underlying surface, associated with the detection of thunderstorm clouds, estimation of the liquid water content of clouds and the moisture of the soil, the degree of fire hazard, etc. To solve these problems, we used AVHRR/NOAA video data that regularly cover the territory. The complexity and extremely nonstationary character of the problems to be solved call for the use of information from all spectral channels, the mathematical apparatus of testing statistical hypotheses, and methods of pattern recognition and identification of the informative parameters. For a class of detection and pattern recognition problems, the average risk functional is a natural criterion for the quality and the information content of the synthesized decision rules. In this case, to solve efficiently the problem of identifying cloud field types, the informative parameters must be determined by minimization of this functional. Since the conditional probability density functions, representing mathematical models of stochastic patterns, are unknown, the problem of nonparametric reconstruction of distributions from the learning samples arises. To this end, we used nonparametric estimates of distributions with the modified Epanechnikov kernel. The unknown parameters of these distributions were determined by minimization of the risk functional, which for the learning sample was substituted by the empirical risk. After the conditional probability density functions had been reconstructed for the examined hypotheses, a cloudiness type was identified using the Bayes decision rule.
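
    As a toy version of the kernel-density step, the sketch below builds class-conditional densities with a plain Epanechnikov kernel on a fabricated one-dimensional feature and applies the Bayes decision rule; the paper's modified kernel and risk-minimized parameters are simplified away, and the data are not AVHRR channels.

```python
# Minimal sketch of nonparametric density estimation with an Epanechnikov
# kernel, K(u) = 0.75*(1 - u**2) for |u| <= 1, plus a Bayes decision rule,
# in the spirit of the cloud-classification scheme above (synthetic 1-D
# feature; bandwidth h is an arbitrary assumed value).
import numpy as np

def epan_kde(x, data, h):
    u = (x[:, None] - data[None, :]) / h
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)
    return k.sum(axis=1) / (data.size * h)

rng = np.random.default_rng(9)
cloud = rng.normal(0.7, 0.1, 400)      # synthetic channel albedo, cloudy
clear = rng.normal(0.2, 0.08, 600)     # synthetic channel albedo, clear

x = np.linspace(0.0, 1.0, 201)
f_cloud, f_clear = epan_kde(x, cloud, 0.05), epan_kde(x, clear, 0.05)
prior_cloud = cloud.size / (cloud.size + clear.size)

# Bayes rule: classify "cloud" where the posterior odds exceed 1
decide_cloud = prior_cloud * f_cloud > (1 - prior_cloud) * f_clear
print("decision threshold near:", x[np.argmax(decide_cloud)])
```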

  5. Should patients with acute exacerbation of chronic bronchitis be treated with antibiotics? Advantages of the use of fluoroquinolones.

    PubMed

    Mensa, J; Trilla, A

    2006-05-01

    The pathological changes in chronic bronchitis (CB) produce airflow obstruction, reduce the effectiveness of the mucociliary drainage system and lead to bacterial colonisation of bronchial secretions. The presence of bacteria induces an inflammatory response mediated by leukocytes. There is a direct relationship between the degree of impairment of the mucociliary drainage system, the density of bacteria in mucus and the number of leukocytes in the sputum. Purulent sputum is a good marker of a high bacterial load. Eventually, if the number of leukocytes is high, their normal activity could decrease the effectiveness of the drainage system, increase the bronchial obstruction and probably damage the lung parenchyma. Whenever the density of bacteria in the bronchial lumen is ≥10^6 CFU/mL, there is a high probability that the degree of inflammatory response will lead to a vicious cycle which in turn tends to sustain the process. This situation can arise during the clinical course of any acute exacerbation of CB, independently of its aetiology, provided the episode is sufficiently severe and/or prolonged. Fluoroquinolones of the third and fourth generation are bactericidal against most microorganisms usually related to acute exacerbations of CB. Their diffusion into bronchial mucus is adequate. When used in short (5-day) treatments they reduce the bacterial load in a higher proportion than is achieved by beta-lactam or macrolide antibiotics given orally. Although the clinical cure rate is similar to that obtained with other antibiotics, the time between exacerbations could be increased.

  6. A new approach to the problem of bulk-mediated surface diffusion.

    PubMed

    Berezhkovskii, Alexander M; Dagdug, Leonardo; Bezrukov, Sergey M

    2015-08-28

    This paper is devoted to bulk-mediated surface diffusion of a particle which can diffuse both on a flat surface and in the bulk layer above the surface. It is assumed that the particle is on the surface initially (at t = 0) and at time t, while in between it may escape from the surface and come back any number of times. We propose a new approach to the problem, which reduces its solution to that of a two-state problem of the particle transitions between the surface and the bulk layer, focusing on the cumulative residence times spent by the particle in the two states. These times are random variables, the sum of which is equal to the total observation time t. The advantage of the proposed approach is that it allows for a simple exact analytical solution for the double Laplace transform of the conditional probability density of the cumulative residence time spent on the surface by the particle observed for time t. This solution is used to find the Laplace transform of the particle mean square displacement and to analyze the peculiarities of its time behavior over the entire range of time. We also establish a relation between the double Laplace transform of the conditional probability density and the Fourier-Laplace transform of the particle propagator over the surface. The proposed approach treats the cases of both finite and infinite bulk layer thicknesses (where bulk-mediated surface diffusion is normal and anomalous at asymptotically long times, respectively) on equal footing.

  7. Pharmacokinetics of differently designed immunoliposome formulations in rats with or without hepatic colon cancer metastases.

    PubMed

    Koning, G A; Morselt, H W; Gorter, A; Allen, T M; Zalipsky, S; Kamps, J A; Scherphof, G L

    2001-09-01

    To compare the pharmacokinetics of tumor-directed immunoliposomes in healthy and tumor-bearing rats (hepatic colon cancer metastases). A tumor cell-specific monoclonal antibody was attached to polyethyleneglycol-stabilized liposomes, either in a random orientation via a lipid anchor (MPB-PEG-liposomes) or uniformly oriented at the distal end of the PEG chains (Hz-PEG-liposomes). Pharmacokinetics and tissue distribution were determined using [3H]cholesteryloleylether or bilayer-anchored 5-fluoro[3H]deoxyuridine-dipalmitate ([3H]FUdR-dP) as a marker. In healthy animals clearance of PEG-(immuno)liposomes was almost log-linear and only slightly affected by antibody attachment; in tumor-bearing animals all liposomes displayed biphasic clearance. In both normal and tumor-bearing animals, blood elimination increased with increasing antibody density, particularly for the Hz-PEG-liposomes, and was accompanied by increased hepatic uptake, probably due to increased numbers of macrophages induced by tumor growth. The presence of antibodies on the liposomes enhanced tumor accumulation: uptake per gram of tumor tissue (2-4% of dose) was similar to that of liver. Remarkably, this applied to both tumor-specific and irrelevant antibodies. Increased immunoliposome uptake by trypsin-treated Kupffer cells implicated involvement of high-affinity Fc-receptors on activated macrophages. Tumor growth and immunoliposome characteristics (antibody density and orientation) determine immunoliposome pharmacokinetics. Although with a long-circulating immunoliposome formulation, efficiently retaining the prodrug FUdR-dP, we achieved enhanced uptake by hepatic metastases, this was probably not mediated by specific interaction with the tumor cells, but rather by tumor-associated macrophages.

  8. Automated side-chain model building and sequence assignment by template matching.

    PubMed

    Terwilliger, Thomas C

    2003-01-01

    An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
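
    The sequence-assignment step lends itself to a compact sketch. In the hypothetical Python fragment below, P[i, a] stands in for the template-matching probability that amino acid a occupies position i of a built segment; the fragment scores every alignment offset of the segment against the sequence under a flat prior and keeps only high-confidence matches, in the spirit of (but much simpler than) the RESOLVE procedure.

      import numpy as np

      rng = np.random.default_rng(1)

      n_pos, n_types = 12, 20
      P = rng.dirichlet(np.ones(n_types), size=n_pos)   # stand-in for template scores
      sequence = rng.integers(0, n_types, size=200)     # protein sequence, residues 0..19

      # Log-likelihood of each alignment offset of the segment in the sequence
      log_P = np.log(P)
      offsets = np.arange(len(sequence) - n_pos + 1)
      loglik = np.array([log_P[np.arange(n_pos), sequence[k:k + n_pos]].sum()
                         for k in offsets])

      # Posterior over offsets under a flat prior; keep only confident matches
      post = np.exp(loglik - loglik.max())
      post /= post.sum()
      confident = offsets[post > 0.95]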

  9. Simulations of Spray Reacting Flows in a Single Element LDI Injector With and Without Invoking an Eulerian Scalar PDF Method

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    This paper presents numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions (species mass fractions) are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function, referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and is briefly described. Detailed comparisons between the results and available experimental data are carried out. Positive effects of invoking the Eulerian scalar PDF method, both in improving the simulation quality and in reducing the computing cost, are observed.

  10. The Havriliak-Negami relaxation and its relatives: the response, relaxation and probability density functions

    NASA Astrophysics Data System (ADS)

    Górska, K.; Horzela, A.; Bratek, Ł.; Dattoli, G.; Penson, K. A.

    2018-04-01

    We study functions related to the experimentally observed Havriliak-Negami dielectric relaxation pattern, proportional in the frequency domain to [1 + (iωτ0)^α]^(-β) with τ0 > 0 being some characteristic time. For α = l/k < 1 (l and k being positive and relatively prime integers) and β > 0 we furnish exact and explicit expressions for response and relaxation functions in the time domain and suitable probability densities in their domain dual in the sense of the inverse Laplace transform. All these functions are expressed as finite sums of generalized hypergeometric functions, convenient to handle analytically and numerically. Introducing a reparameterization β = (2-q)/(q-1) and τ0 = (q-1)^(1/α) (1 < q < 2), we show that for 0 < α < 1 the response functions fα,β(t/τ0) go to the one-sided Lévy stable distributions when q tends to one. Moreover, applying the self-similarity property of the probability densities gα,β(u), we introduce two-variable densities and show that they satisfy the integral form of the evolution equation.
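
    The frequency-domain pattern itself takes only a few lines to evaluate numerically; a minimal sketch (parameter values are illustrative only):

      import numpy as np

      # Havriliak-Negami susceptibility chi(omega) = [1 + (i*omega*tau0)**alpha]**(-beta)
      alpha, beta, tau0 = 0.5, 1.2, 1.0
      omega = np.logspace(-3, 3, 400)
      chi = (1.0 + (1j * omega * tau0) ** alpha) ** (-beta)

      # Real and imaginary parts give the dispersion and loss curves
      dispersion, loss = chi.real, -chi.imag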

  11. Measurements of small-scale statistics and probability density functions in passively heated shear flow

    NASA Astrophysics Data System (ADS)

    Ferchichi, Mohsen

    This study is an experimental investigation consisting of two parts. In the first part, the fine structure of uniformly sheared turbulence was investigated within the framework of Kolmogorov's (1941) similarity hypotheses. The second part consisted of the study of scalar mixing in uniformly sheared turbulence with an imposed mean scalar gradient, with emphasis on measurements relevant to the probability density function formulation and on scalar derivative statistics. The velocity fine structure was inferred from statistics of the streamwise and transverse derivatives of the streamwise velocity, as well as velocity differences and structure functions, measured with hot wire anemometry for turbulence Reynolds numbers, Re_lambda, in the range between 140 and 660. The streamwise derivative skewness and flatness agreed with previously reported results in that they increased with increasing Re_lambda, with the flatness increasing at a higher rate. The skewness of the transverse derivative decreased with increasing Re_lambda, and the flatness of this derivative increased with Re_lambda but at a lower rate than the streamwise derivative flatness. The high order (up to sixth) transverse structure functions of the streamwise velocity showed the same trends as the corresponding streamwise structure functions. In the second part of this experimental study, an array of heated ribbons was introduced into the flow to produce a constant mean temperature gradient, such that the temperature acted as a passive scalar. The Re_lambda in this study varied from 184 to 253. Cold wire thermometry and hot wire anemometry were used for simultaneous measurements of temperature and velocity. The scalar pdf was found to be nearly Gaussian. Various tests of joint statistics of the scalar and its rate of destruction revealed that the scalar dissipation rate was essentially independent of the scalar value. The measured joint statistics of the scalar and the velocity suggested that they were nearly jointly normal and that the normalized conditioned expectations varied linearly with the scalar, with slopes corresponding to the scalar-velocity correlation coefficients. Finally, the measured streamwise and transverse scalar derivatives and differences revealed that the scalar fine structure was intermittent not only in the dissipative range, but in the inertial range as well.

  12. The effects of retinal abnormalities on the multifocal visual evoked potential.

    PubMed

    Chen, John Y; Hood, Donald C; Odel, Jeffrey G; Behrens, Myles M

    2006-10-01

    To examine the effects of retinal diseases associated with depressed multifocal electroretinograms (mfERGs) on the amplitude and latency of the multifocal visual evoked potential (mfVEP). Static automated perimetry (SAP), mfERGs, and mfVEPs were obtained from 15 individuals seen by neuro-ophthalmologists and diagnosed with retinal disease based on funduscopic examination, visual field, and mfERG. Optic neuropathy was ruled out in all cases. Diagnoses included autoimmune retinopathy (n = 3), branch retinal arterial occlusion (n = 3), branch retinal vein occlusion (n = 1), vitamin A deficiency (n = 1), digoxin/age-related macular degeneration (n = 1), multiple evanescent white dot syndrome (n = 1), and nonspecific retinal disease (n = 5). Patients were selected from a larger group based on abnormal mfERG amplitudes covering a diameter of 20 degrees or greater. Fourteen (93%) of 15 patients showed significant mfVEP delays, as determined by either mean latency or the probability of a cluster of delayed local responses. Thirteen of 15 patients had normal mfVEP amplitudes in regions corresponding to markedly reduced or nonrecordable mfERG responses. These findings can be mimicked in normal individuals by viewing the display through a neutral-density filter. Retinal diseases can result in mfVEPs of relatively normal amplitudes, often with delays, in regions showing decreased mfERG responses and visual field sensitivity loss. Consequently, a retinal problem can be missed, or dismissed as functional, if a diagnosis is based on an mfVEP of normal or near-normal amplitude. Further, in patients with marked mfVEP delays, a retinal problem could be confused with optic neuritis, especially in a patient with a normal-appearing fundus.

  13. Plant calendar pattern based on rainfall forecast and the probability of its success in Deli Serdang regency of Indonesia

    NASA Astrophysics Data System (ADS)

    Darnius, O.; Sitorus, S.

    2018-03-01

    The objective of this study was to determine the plant calendar pattern of three types of crops, namely palawija, rice, and banana, based on rainfall in Deli Serdang Regency. In the first stage, we forecasted rainfall by using time series analysis and obtained an appropriate model, ARIMA(1,0,0)(1,1,1)12. Based on the forecast result, we designed a plant calendar pattern for the three types of plant. Furthermore, the probability of success for crops following the plant calendar pattern was calculated by using a Markov process, discretizing the continuous rainfall data into three categories, namely Below Normal (BN), Normal (N), and Above Normal (AN), to form the probability transition matrix. Finally, the combination of the rainfall forecasting model and the Markov process was used to determine the plant calendar pattern and the probability of success for the three crops. This research used rainfall data for Deli Serdang Regency taken from the office of BMKG (Meteorology, Climatology, and Geophysics Agency), Sampali Medan, Indonesia.
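
    The Markov step can be sketched compactly: discretize a rainfall series into the three categories by terciles and count transitions to form the row-stochastic matrix. The series below is synthetic and the tercile cut is an assumption for illustration; the study's own discretization thresholds may differ.

      import numpy as np

      rng = np.random.default_rng(2)
      rainfall = rng.gamma(shape=2.0, scale=100.0, size=240)   # mm/month, hypothetical

      terciles = np.quantile(rainfall, [1/3, 2/3])
      states = np.digitize(rainfall, terciles)                 # 0=BN, 1=N, 2=AN

      T = np.zeros((3, 3))
      for s, s_next in zip(states[:-1], states[1:]):
          T[s, s_next] += 1
      T /= T.sum(axis=1, keepdims=True)                        # transition matrix

      # e.g. probability of two consecutive Normal months after a Normal month
      p_success = T[1, 1] ** 2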

  14. Dense cloud cores revealed by CO in the low metallicity dwarf galaxy WLM.

    PubMed

    Rubio, Monica; Elmegreen, Bruce G; Hunter, Deidre A; Brinks, Elias; Cortés, Juan R; Cigan, Phil

    2015-09-10

    Understanding stellar birth requires observations of the clouds in which they form. These clouds are dense and self-gravitating, and in all existing observations they are molecular, with H2 the dominant species and carbon monoxide (CO) the best available tracer. When the abundances of carbon and oxygen are low compared with that of hydrogen, and the opacity from dust is also low, as in primeval galaxies and local dwarf irregular galaxies, CO forms slowly and is easily destroyed, so it is difficult for it to accumulate inside dense clouds. Here we report interferometric observations of CO clouds in the local group dwarf irregular galaxy Wolf-Lundmark-Melotte (WLM), which has a metallicity that is 13 per cent of the solar value and 50 per cent lower than the previous CO detection threshold. The clouds are tiny compared to the surrounding atomic and H2 envelopes, but they have typical densities and column densities for CO clouds in the Milky Way. The normal CO density explains why star clusters forming in dwarf irregulars have similar densities to star clusters in giant spiral galaxies. The low cloud masses suggest that these clusters will also be low mass, unless some galaxy-scale compression occurs, such as an impact from a cosmic cloud or other galaxy. If the massive metal-poor globular clusters in the halo of the Milky Way formed in dwarf galaxies, as is commonly believed, then they were probably triggered by such an impact.

  15. An improved probabilistic approach for linking progenitor and descendant galaxy populations using comoving number density

    NASA Astrophysics Data System (ADS)

    Wellons, Sarah; Torrey, Paul

    2017-06-01

    Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift z_f probabilities of being progenitors/descendants of a galaxy population at another redshift z_0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width, by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better, and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
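
    The weighting idea reduces to a few lines. In the sketch below, the Gaussian kernel in number-density space and the catalog values are invented placeholders; the paper's measured distribution functions would take the kernel's place.

      import numpy as np

      rng = np.random.default_rng(3)

      # Hypothetical candidate progenitors at z_f: log10 number density and
      # log10 stellar mass (correlated by construction, for illustration)
      log_n = rng.normal(-4.0, 0.4, size=5000)
      log_mass = 10.5 - 1.2 * (log_n + 4.0) + rng.normal(0.0, 0.2, size=5000)

      # Probability of being a progenitor of the z_0 population, modeled as
      # a Gaussian in number-density space (center and width assumed)
      center, width = -4.1, 0.3
      weights = np.exp(-0.5 * ((log_n - center) / width) ** 2)
      weights /= weights.sum()

      # Probability-weighted mean and variance of progenitor stellar mass
      mean_mass = np.sum(weights * log_mass)
      var_mass = np.sum(weights * (log_mass - mean_mass) ** 2)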

  16. Radiative transition of hydrogen-like ions in quantum plasma

    NASA Astrophysics Data System (ADS)

    Hu, Hongwei; Chen, Zhanbin; Chen, Wencong

    2016-12-01

    At fusion plasma electron temperatures and number densities in the ranges 10^3-10^7 K and 10^28-10^31 m^-3, respectively, the excited states and radiative transitions of hydrogen-like ions in fusion plasmas are studied. The results show that the quantum plasma model is more suitable for describing the fusion plasma than the Debye screening model. The relativistic correction to the bound-state energies of low-Z hydrogen-like ions is so small that it can be ignored. The transition probability decreases with plasma density, but transition probabilities within the same number density regime have the same order of magnitude.

  17. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil

    Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
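
    The Monte Carlo side of such a comparison is easy to reproduce schematically. The sketch below integrates a toy swing-type generator equation driven by time-correlated Ornstein-Uhlenbeck power fluctuations and histograms the frequency deviation; the model and all parameter values are illustrative stand-ins for the SODEs treated in the paper.

      import numpy as np

      rng = np.random.default_rng(4)

      dt, n_steps, n_paths = 1e-2, 2000, 5000
      tau_c, sigma = 0.5, 0.2          # correlation time and noise strength
      H, D, P0 = 1.0, 1.0, 0.8         # inertia, damping, mean power input

      omega = np.zeros(n_paths)        # frequency deviation
      xi = np.zeros(n_paths)           # OU fluctuation of the power input
      for _ in range(n_steps):
          xi += (-xi / tau_c) * dt + sigma * np.sqrt(2 * dt / tau_c) * rng.standard_normal(n_paths)
          omega += ((P0 + xi - D * omega) / (2 * H)) * dt

      # Monte Carlo estimate of the stationary PDF of the frequency deviation,
      # the quantity the closed-form PDE of the PDF method delivers directly
      pdf, edges = np.histogram(omega, bins=60, density=True)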

  18. Statistics of Optical Coherence Tomography Data From Human Retina

    PubMed Central

    de Juan, Joaquín; Ferrone, Claudia; Giannini, Daniela; Huang, David; Koch, Giorgio; Russo, Valentina; Tan, Ou; Bruni, Carlo

    2010-01-01

    Optical coherence tomography (OCT) has recently become one of the primary methods for noninvasive probing of the human retina. The pseudoimage formed by OCT (the so-called B-scan) varies probabilistically across pixels due to complexities in the measurement technique. Hence, sensitive automatic procedures of diagnosis using OCT may exploit statistical analysis of the spatial distribution of reflectance. In this paper, we perform a statistical study of retinal OCT data. We find that the stretched exponential probability density function can model well the distribution of intensities in OCT pseudoimages. Moreover, we show a small but significant correlation between neighboring pixels when measuring OCT intensities with pixels of about 5 µm. We then develop a simple joint probability model for the OCT data consistent with known retinal features. This model fits well the stretched exponential distribution of intensities and their spatial correlation. In normal retinas, the fit parameters of this model are relatively constant along retinal layers but vary across layers. However, in retinas with diabetic retinopathy, large spikes of parameter modulation interrupt the constancy within layers, exactly where pathologies are visible. We argue that these results give hope for improvement in statistical pathology-detection methods even when the disease is in its early stages. PMID:20304733
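
    For readers who want to reproduce the distributional fit, the sketch below fits a stretched-exponential density p(x) = b/(a*Gamma(1/b)) * exp(-(x/a)^b), x > 0, by maximum likelihood; the synthetic data are a stand-in for OCT reflectance samples, and the normalization is the standard one for this family.

      import numpy as np
      from scipy.special import gamma
      from scipy.optimize import minimize

      rng = np.random.default_rng(5)
      data = rng.weibull(0.7, size=5000) * 50.0   # stand-in for OCT intensities

      def neg_loglik(params):
          a, b = params
          if a <= 0 or b <= 0:
              return np.inf
          return -np.sum(np.log(b / (a * gamma(1.0 / b))) - (data / a) ** b)

      fit = minimize(neg_loglik, x0=[np.mean(data), 1.0], method="Nelder-Mead")
      a_hat, b_hat = fit.x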

  19. Stochastic approach to the derivation of emission limits for wastewater treatment plants.

    PubMed

    Stransky, D; Kabelkova, I; Bares, V

    2009-01-01

    A stochastic approach to the derivation of WWTP emission limits meeting probabilistically defined environmental quality standards (EQS) is presented. The stochastic model is based on the mixing equation with input data defined by probability density distributions and is solved by Monte Carlo simulations. The approach was tested on a study catchment for total phosphorus (P(tot)). The model assumes independence of the input variables, which was verified for the dry-weather situation. Discharges and P(tot) concentrations both in the study creek and in the WWTP effluent follow a log-normal probability distribution. Variation coefficients of P(tot) concentrations differ considerably along the stream (c(v) = 0.415-0.884). The selected value of the variation coefficient (c(v) = 0.420) affects the derived mean value (C(mean) = 0.13 mg/l) of the P(tot) EQS (C(90) = 0.2 mg/l). Even after a supposed improvement of water quality upstream of the WWTP to the level of the P(tot) EQS, the calculated WWTP emission limits would be lower than the values achievable with the best available technology (BAT). Thus, minimum dilution ratios for the meaningful application of the combined approach to the derivation of P(tot) emission limits for Czech streams are discussed.
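
    A minimal Monte Carlo version of the mixing-equation approach, with log-normal inputs whose parameters are invented for illustration (they are not the study's values):

      import numpy as np

      rng = np.random.default_rng(6)
      n = 100000

      # Mass-balance mixing downstream of the outfall:
      # C_mix = (Q_r*C_r + Q_e*C_e) / (Q_r + Q_e)
      Q_r = rng.lognormal(mean=np.log(0.5), sigma=0.6, size=n)    # creek flow, m3/s
      C_r = rng.lognormal(mean=np.log(0.10), sigma=0.42, size=n)  # creek P_tot, mg/l
      Q_e = rng.lognormal(mean=np.log(0.05), sigma=0.3, size=n)   # effluent flow, m3/s
      C_e = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=n)    # effluent P_tot, mg/l

      C_mix = (Q_r * C_r + Q_e * C_e) / (Q_r + Q_e)

      # Probabilistically defined EQS check: 90th percentile below 0.2 mg/l
      C90 = np.quantile(C_mix, 0.90)
      meets_eqs = C90 <= 0.2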

  20. Large scale IRAM 30 m CO-observations in the giant molecular cloud complex W43

    NASA Astrophysics Data System (ADS)

    Carlhoff, P.; Nguyen Luong, Q.; Schilke, P.; Motte, F.; Schneider, N.; Beuther, H.; Bontemps, S.; Heitsch, F.; Hill, T.; Kramer, C.; Ossenkopf, V.; Schuller, F.; Simon, R.; Wyrowski, F.

    2013-12-01

    We aim to fully describe the distribution and location of dense molecular clouds in the giant molecular cloud complex W43. It was previously identified as one of the most massive star-forming regions in our Galaxy. To trace the moderately dense molecular clouds in the W43 region, we initiated W43-HERO, a large program using the IRAM 30 m telescope, which covers a wide dynamic range of scales from 0.3 to 140 pc. We obtained on-the-fly maps in 13CO (2-1) and C18O (2-1) with a high spectral resolution of 0.1 km s^-1 and a spatial resolution of 12''. These maps cover an area of ~1.5 square degrees and include the two main clouds of W43 and the lower density gas surrounding them. A comparison to Galactic models and previous distance calculations confirms the location of W43 near the tangential point of the Scutum arm at approximately 6 kpc from the Sun. The resulting intensity cubes of the observed region are separated into subcubes, which are centered on single clouds and then analyzed in detail. The optical depth, excitation temperature, and H2 column density maps are derived from the 13CO and C18O data. These results are then compared to those derived from Herschel dust maps. The mass of a typical cloud is several 10^4 M⊙, while the total mass in the dense molecular gas (>10^2 cm^-3) in W43 is found to be ~1.9 × 10^6 M⊙. Probability distribution functions obtained from column density maps derived from molecular line data and Herschel imaging show a log-normal distribution for low column densities and a power-law tail for high densities. A flatter slope for the molecular line data probability distribution function may imply that those selectively show the gravitationally collapsing gas.

  1. Epidemics in interconnected small-world networks.

    PubMed

    Liu, Meng; Li, Daqing; Qin, Pengju; Liu, Chaoran; Wang, Huijuan; Wang, Feilong

    2015-01-01

    Networks can be used to describe the interconnections among individuals, which play an important role in the spread of disease. Although the small-world effect has been found to have a significant impact on epidemics in single networks, the small-world effect on epidemics in interconnected networks has rarely been considered. Here, we study the susceptible-infected-susceptible (SIS) model of epidemic spreading in a system comprising two interconnected small-world networks. We find that the epidemic threshold in such networks decreases when the rewiring probability of the component small-world networks increases. When the infection rate is low, the rewiring probability affects the global steady-state infection density, whereas when the infection rate is high, the infection density is insensitive to the rewiring probability. Moreover, epidemics in interconnected small-world networks are found to spread at different velocities that depend on the rewiring probability.
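
    The setup can be prototyped in a few lines with networkx: two Watts-Strogatz graphs joined by random inter-network links, with a discrete-time SIS process run on the union. Sizes, rates, and the rewiring probability p below are arbitrary illustrative choices, not the paper's parameters.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(7)

      p = 0.1                                    # rewiring probability
      g1 = nx.watts_strogatz_graph(500, k=6, p=p, seed=1)
      g2 = nx.watts_strogatz_graph(500, k=6, p=p, seed=2)
      g = nx.disjoint_union(g1, g2)              # nodes 0..499 and 500..999
      for _ in range(100):                       # random interconnections
          g.add_edge(int(rng.integers(0, 500)), int(rng.integers(500, 1000)))

      beta, mu = 0.05, 0.2                       # infection and recovery rates
      infected = set(int(i) for i in rng.choice(1000, size=10, replace=False))
      for _ in range(200):
          new_inf, recovered = set(), set()
          for i in infected:
              for j in g.neighbors(i):
                  if j not in infected and rng.random() < beta:
                      new_inf.add(j)
              if rng.random() < mu:
                  recovered.add(i)
          infected = (infected | new_inf) - recovered

      steady_density = len(infected) / g.number_of_nodes()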

  2. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  3. Domestic wells have high probability of pumping septic tank leachate

    NASA Astrophysics Data System (ADS)

    Horn, J. E.; Harter, T.

    2011-06-01

    Onsite wastewater treatment systems such as septic systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic system and a private drinking water well. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. Particularly in areas with small lots, and thus a high density of septic systems, these typically shallow wells are prone to contamination by septic system leachate. Typically, mass balance approaches are used to determine a maximum septic system density that would prevent contamination of the aquifer. In this study, we estimate the probability of a well pumping partially septic system leachate. A detailed groundwater flow and transport model is used to calculate the capture zone of a typical drinking water well. A spatial probability analysis is performed to assess the probability that a capture zone overlaps with a septic system drainfield, depending on aquifer properties, lot and drainfield size. We show that a high septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We conclude that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances which experience limited attenuation, and for those which are harmful even in low concentrations.

  4. Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems

    NASA Technical Reports Server (NTRS)

    Holmes, J. K.; Woo, K. T.

    1978-01-01

    The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread spectrum system was considered which could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.

  5. RADC Multi-Dimensional Signal-Processing Research Program.

    DTIC Science & Technology

    1980-09-30

    [Only table-of-contents and figure-caption fragments of this report survive extraction; section titles include "Methods of Accelerating Convergence," "Application to Image Deblurring," "Extensions," and "Convergence of Iterative Signal..."] The recoverable technical content: the image is modeled as the output of a spatial linear filter driven by white noise; if the probability density function of the white noise is known, the joint probability density (likelihood) function for the image can be developed.

  6. Probable autosomal recessive Marfan syndrome.

    PubMed Central

    Fried, K; Krakowsky, D

    1977-01-01

    A probable autosomal recessive mode of inheritance is described in a family with two affected sisters. The sisters showed the typical picture of Marfan syndrome and were of normal intelligence. Both parents and all four grandparents were personally examined and found to be normal. Homocystinuria was ruled out on repeated examinations. This family suggests genetic heterogeneity in Marfan syndrome and that in some rare families the mode of inheritance may be autosomal recessive. PMID:592353

  7. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
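
    The flavor of the cost-function inversion can be conveyed with a deliberately stripped-down example: a two-state (closed/open) channel whose opening rate is tuned so that the model's stationary open probability matches noisy pseudo-experimental observations. Everything here is synthetic and far simpler than the PDE-constrained optimization of the paper.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(8)

      k_close = 2.0    # closing rate, assumed known (1/ms)
      k_true = 0.8     # "experimental" opening rate to be recovered (1/ms)

      def open_fraction(k_open):
          # stationary open probability of a two-state Markov channel
          return k_open / (k_open + k_close)

      # Pseudo-experimental data: noisy observations of the open fraction
      data = open_fraction(k_true) + 0.02 * rng.standard_normal(500)

      def cost(k_open):
          return np.mean((data - open_fraction(k_open)) ** 2)

      res = minimize_scalar(cost, bounds=(1e-3, 10.0), method="bounded")
      k_hat = res.x    # close to k_true when the rate is identifiable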

  8. Instrumented roll technology for the design space development of roller compaction process.

    PubMed

    Nesarikar, Vishwas V; Vatsaraj, Nipa; Patel, Chandrakant; Early, William; Pandey, Preetanshu; Sprockel, Omar; Gao, Zhihui; Jerzewski, Robert; Miller, Ronald; Levin, Michael

    2012-04-15

    Instrumented roll technology on an Alexanderwerk WP120 roller compactor was developed and utilized successfully for the measurement of normal stress on the ribbon during the process. The effects of process parameters such as roll speed (4-12 rpm), feed screw speed (19-53 rpm), and hydraulic roll pressure (40-70 bar) on normal stress and ribbon density were studied using placebo and active pre-blends. The placebo blend consisted of a 1:1 ratio of microcrystalline cellulose PH102 and anhydrous lactose with sodium croscarmellose, colloidal silicon dioxide, and magnesium stearate. The active pre-blends were prepared using various combinations of one active ingredient (3-17%, w/w) and lubricant (0.1-0.9%, w/w) levels, with the remaining excipients the same as in the placebo. Three force transducers (load cells) were installed linearly along the width of the roll, equidistant from each other, with one transducer located in the center. Normal stress values recorded by the side sensors were lower than those recorded by the middle sensor and showed greater variability. Normal stress was found to be directly proportional to hydraulic pressure and inversely proportional to the screw-to-roll speed ratio. For active pre-blends, normal stress was also a function of compressibility. For placebo pre-blends, ribbon density increased as normal stress increased. For active pre-blends, in addition to normal stress, ribbon density was also a function of the gap. Models developed using the placebo were found to predict ribbon densities of active blends with good accuracy, and the prediction error decreased as the drug concentration of the active blend decreased. The effective angle of internal friction and the compressibility of the active pre-blend may be used as key indicators for predicting active-blend ribbon densities with the placebo ribbon density model. Feasibility of on-line prediction of ribbon density during roller compaction was demonstrated using porosity-pressure data of the pre-blend and normal stress measurements. The effect of vacuum de-aeration of the pre-blend before it enters the nip zone was studied. Varying levels of vacuum for de-aeration of the placebo pre-blend did not affect the normal stress values. However, turning off the vacuum completely caused an increase in normal stress with a subsequent decrease in gap. Use of the instrumented roll demonstrated potential to reduce the number of DOE runs by enhancing fundamental understanding of the relationship between normal stress on the ribbon and process parameters. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Probability density function of non-reactive solute concentration in heterogeneous porous formations

    Treesearch

    Alberto Bellin; Daniele Tonina

    2007-01-01

    Available models of solute transport in heterogeneous formations lack in providing complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for...

  10. Predictions of malaria vector distribution in Belize based on multispectral satellite data.

    PubMed

    Roberts, D R; Paris, J F; Manguin, S; Harbach, R E; Woodruff, R; Rejmankova, E; Polanco, J; Wullschleger, B; Legters, L J

    1996-03-01

    Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.

  11. Predictions of malaria vector distribution in Belize based on multispectral satellite data

    NASA Technical Reports Server (NTRS)

    Roberts, D. R.; Paris, J. F.; Manguin, S.; Harbach, R. E.; Woodruff, R.; Rejmankova, E.; Polanco, J.; Wullschleger, B.; Legters, L. J.

    1996-01-01

    Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.

  12. Natal and breeding philopatry in a black brant, Branta bernicla nigricans, metapopulation

    USGS Publications Warehouse

    Lindberg, Mark S.; Sedinger, James S.; Derksen, Dirk V.; Rockwell, Robert F.

    1998-01-01

    We estimated natal and breeding philopatry and dispersal probabilities for a metapopulation of Black Brant (Branta bernicla nigricans) based on observations of marked birds at six breeding colonies in Alaska, 1986–1994. Both adult females and males exhibited high (>0.90) probability of philopatry to breeding colonies. Probability of natal philopatry was significantly higher for females than males. Natal dispersal of males was recorded between every pair of colonies, whereas natal dispersal of females was observed between only half of the colony pairs. We suggest that female-biased philopatry was the result of timing of pair formation and characteristics of the mating system of brant, rather than factors related to inbreeding avoidance or optimal discrepancy. Probability of natal philopatry of females increased with age but declined with year of banding. Age-related increase in natal philopatry was positively related to higher breeding probability of older females. Declines in natal philopatry with year of banding corresponded negatively to a period of increasing population density; therefore, local population density may influence the probability of nonbreeding and gene flow among colonies.

  13. Why the chameleon has spiral-shaped muscle fibres in its tongue

    PubMed Central

    Leeuwen, J. L. van

    1997-01-01

    The intralingual accelerator muscle is the primary actuator for the remarkable ballistic tongue projection of the chameleon. At rest, this muscle envelopes the elongated entoglossal process, a cylindrically shaped bone with a tapering distal end. During tongue projection, the accelerator muscle elongates and slides forward along the entoglossal process until the entire muscle extends beyond the distal end of the process. The accelerator muscle fibres are arranged in transverse planes (small deviations are possible), and form (hitherto unexplained) spiral-shaped arcs from the peripheral to the internal boundary. To initiate tongue projection, the muscle fibres probably generate a high intramuscular pressure. The resulting negative pressure gradient (from base to tip) causes the muscle to elongate and to accelerate forward. Effective forward sliding is made possible by a lubricant and a relatively low normal stress exerted on the proximal cylindrical part of the entoglossal process. A relatively high normal stress is, however, probably required for an effective acceleration of muscle tissue over the tapered end of the process. For optimal performance, the fast extension movement should occur without significant (energy-absorbing) torsional motion of the tongue. In addition, the tongue extension movement is aided by a close packing of the muscle fibres (required for a high power density) and a uniform strain and work output in every cross-section of the muscle. A quantitative model of the accelerator muscle was developed that predicts internal muscle fibre arrangements based on the functional requirements above and the physical principle of mechanical stability. The curved shapes and orientations of the muscle fibres typically found in the accelerator muscle were accurately predicted by the model. Furthermore, the model predicts that the reduction of the entoglossal radius towards the tip (and thus of the internal radius of the muscle) tends to increase the normal stress on the entoglossal bone.

  14. The onset of stress response in rainbow trout Oncorhynchus mykiss embryos subjected to density and handling.

    PubMed

    Ghaedi, Gholamreza; Falahatkar, Bahram; Yavari, Vahid; Sheibani, Mohammad T; Broujeni, Gholamreza Nikbakht

    2015-04-01

    The present study attempted to measure cortisol content, as an indicator of stress response, in rainbow trout embryos which were exposed to different densities and handling stress (air exposure) during incubation. The three densities of experimental embryos at early development stages were 2.55 embryos/cm(2) (low density), 5.10 embryos/cm(2) (normal density) and 7.65 embryos/cm(2) (high density). The cortisol content of eggs (5.09 ± 0.12 ng/g) decreased to 3.68 ± 0.14 ng/g in newly fertilized eggs. Resting levels of cortisol dropped at all three densities by day 18 post-fertilization. Cortisol then increased at the hatching stage to 1.16 ± 0.11, 1.20 ± 0.12 and 1.21 ± 0.14 ng/g at low, normal and high densities, respectively. There were no statistically significant differences between cortisol concentrations at the three densities. The acute handling stress test (5 min out of water), conducted on embryos (48 h post-fertilization, organogenesis and eyed stage) at three densities, revealed no differences in whole-body cortisol levels between stressed and unstressed experimental groups. At the hatching stage in the low-density group, the level of cortisol increased, but the difference from pre-stress levels was not statistically significant. Furthermore, significant differences in cortisol levels of stressed and unstressed embryos were detected at hatching in the normal and high density groups [from 1.20 ± 0.12 ng/g at time 0 to 1.49 ± 0.11 ng/g at 1 hps (hours post stress), and from 1.21 ± 0.14 ng/g at time 0 to 1.53 ± 0.10 ng/g at 3 hps, respectively]. The results showed no difference in the cortisol profile at different densities, but acute stress applied to embryos incubated at different densities revealed differences in the cortisol stress response at hatching between normal and high density, which led to a cortisol increase at hatching time. This indicates that the lag time in the cortisol response to stressors immediately after hatching does not occur when the siblings were stressed during the embryo stage. The results finally indicated that the hypothalamus-pituitary-interrenal axis was active and responded to an acute stressor under normal and high density, but is unresponsive to a stressor around hatching under low density.

  15. On-line prognosis of fatigue crack propagation based on Gaussian weight-mixture proposal particle filter.

    PubMed

    Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo

    2018-01-01

    Accurate on-line prognosis of fatigue crack propagation is of great importance for prognostics and health management (PHM) technologies to ensure structural integrity, and it is a challenging task because of uncertainties arising from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has been proved to be a powerful tool for dealing with prognostic problems that are affected by uncertainties. However, most studies adopted the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from a serious particle degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and active guided wave based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed as the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameters of the state equation. The proposed method is verified on fatigue tests of attachment lugs, which are a kind of important joint component in aircraft structures. Copyright © 2017 Elsevier B.V. All rights reserved.
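
    A schematic of the mixture-proposal idea for a scalar state (the growth model, noise levels, and mixing weight below are invented for illustration; the paper's crack-growth formulation is more elaborate):

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(9)

      n_p, q_std, r_std, mix = 500, 0.05, 0.10, 0.5
      particles = rng.normal(1.0, 0.1, n_p)      # initial crack-length particles
      weights = np.full(n_p, 1.0 / n_p)

      def step(particles, weights, y):
          pred = particles + 0.02                           # deterministic growth
          use_meas = rng.random(n_p) < mix
          prop = np.where(use_meas,
                          rng.normal(y, r_std, n_p),        # measurement component
                          rng.normal(pred, q_std, n_p))     # transition component
          # importance weights: likelihood * transition / mixture proposal
          lik = norm.pdf(y, prop, r_std)
          trans = norm.pdf(prop, pred, q_std)
          q = mix * norm.pdf(prop, y, r_std) + (1 - mix) * trans
          w = weights * lik * trans / q
          w /= w.sum()
          idx = rng.choice(n_p, n_p, p=w)                   # resample
          return prop[idx], np.full(n_p, 1.0 / n_p)

      particles, weights = step(particles, weights, y=1.03) # one measurement update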

  16. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability density estimator uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
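
    For the kernel estimator, one widely used automatic choice of the scaling factor is Silverman's rule of thumb, shown in the sketch below; it is offered only as an example of automatic selection and is not necessarily the algorithm proposed in the report.

      import numpy as np

      rng = np.random.default_rng(10)
      data = rng.normal(0.0, 1.0, 200)

      # Silverman's rule-of-thumb bandwidth for a Gaussian kernel
      h = 1.06 * data.std(ddof=1) * len(data) ** (-1 / 5)

      def kde(x, data, h):
          # Gaussian kernel density estimate evaluated at points x
          u = (x[:, None] - data[None, :]) / h
          return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

      grid = np.linspace(-4, 4, 200)
      density = kde(grid, data, h)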

  17. Stochastic transport models for mixing in variable-density turbulence

    NASA Astrophysics Data System (ADS)

    Bakosi, J.; Ristorcelli, J. R.

    2011-11-01

    In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation, which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.

  18. Uncertainty quantification of voice signal production mechanical model and experimental updating

    NASA Astrophysics Data System (ADS)

    Cataldo, E.; Soize, C.; Sampaio, R.

    2013-11-01

    The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and to update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for changing the fundamental frequency of a voice signal generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform, and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated, and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important mainly for two reasons. The first is the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly; the second concerns the construction of the likelihood function, which in general is predefined using a known pdf but is here constructed in a new and different manner, from the system itself.
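
    A grid-based caricature of the Bayes update: the forward map g() from tension to fundamental frequency and all numbers below are hypothetical stand-ins for the mechanical model and its Monte Carlo output.

      import numpy as np

      # Forward model: tension parameter -> fundamental frequency (Hz), assumed
      def g(tension):
          return 80.0 + 120.0 * tension

      tension_grid = np.linspace(0.0, 1.0, 400)
      prior = np.ones_like(tension_grid)              # uniform prior, as in the paper
      prior /= np.trapz(prior, tension_grid)

      f0_measured = np.array([148.0, 151.5, 150.2])   # synthetic data, Hz
      sigma_f0 = 2.0                                  # assumed observation spread

      # Gaussian likelihood accumulated over the measurements, then Bayes update
      loglik = sum(-0.5 * ((f - g(tension_grid)) / sigma_f0) ** 2 for f in f0_measured)
      posterior = prior * np.exp(loglik - loglik.max())
      posterior /= np.trapz(posterior, tension_grid)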

  19. MaxEnt alternatives to pearson family distributions

    NASA Astrophysics Data System (ADS)

    Stokes, Barrie J.

    2012-05-01

    In a previous MaxEnt conference [11] a method of obtaining MaxEnt univariate distributions under a variety of constraints was presented. The Mathematica function Interpolation[], normally used with numerical data, can also process "semi-symbolic" data, and Lagrange Multiplier equations were solved for a set of symbolic ordinates describing the required MaxEnt probability density function. We apply a more developed version of this approach to finding MaxEnt distributions having prescribed β1 and β2 values, and compare the entropy of the MaxEnt distribution to that of the Pearson family distribution having the same β1 and β2. These MaxEnt distributions do have, in general, greater entropy than the related Pearson distribution. In accordance with Jaynes' Maximum Entropy Principle, these MaxEnt distributions are thus to be preferred to the corresponding Pearson distributions as priors in Bayes' Theorem.
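
    A numerical (rather than semi-symbolic) way to obtain such a MaxEnt density is to solve for the exponential-family form p(x) proportional to exp(sum_k lam_k x^k) whose first four moments match prescribed values, since fixing the mean, variance, skewness and kurtosis fixes β1 and β2. The grid range and target moments below are illustrative assumptions.

      import numpy as np
      from scipy.optimize import fsolve

      x = np.linspace(-8, 8, 2001)
      # targets: normalization, mean, 2nd, 3rd, 4th raw moments
      targets = np.array([1.0, 0.0, 1.0, 0.5, 4.0])

      def moments(lam):
          p = np.exp(sum(l * x ** k for k, l in enumerate(lam)))
          return np.array([np.trapz(p * x ** k, x) for k in range(5)])

      def residual(lam):
          return moments(lam) - targets

      lam0 = np.array([-1.0, 0.0, -0.5, 0.0, -0.01])  # start near a Gaussian-like shape
      lam = fsolve(residual, lam0)
      p = np.exp(sum(l * x ** k for k, l in enumerate(lam)))   # MaxEnt density on the grid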

  20. Attenuated associations between increasing BMI and unfavorable lipid profiles in Chinese Buddhist vegetarians.

    PubMed

    Zhang, Hui-Jie; Han, Peng; Sun, Su-Yun; Wang, Li-Ying; Yan, Bing; Zhang, Jin-Hua; Zhang, Wei; Yang, Shu-Yu; Li, Xue-Jun

    2013-01-01

    Obesity is related to hyperlipidemia and risk of cardiovascular disease. The health benefits of vegetarian diets have been well documented in Western countries, where both obesity and hyperlipidemia are prevalent. We studied the association between BMI and various lipid/lipoprotein measures, as well as between BMI and predicted coronary heart disease probability, in lean, low-risk populations in Southern China. The study included 170 Buddhist monks (vegetarians) and 126 omnivorous men. Interaction between BMI and vegetarian status was tested in the multivariable regression analysis adjusting for age, education, smoking, alcohol drinking, and physical activity. Compared with omnivores, vegetarians had significantly lower mean BMI, blood pressures, total cholesterol, low density lipoprotein cholesterol, high density lipoprotein cholesterol, total cholesterol to high density lipoprotein ratio, triglycerides, apolipoprotein B and A-I, as well as a lower predicted probability of coronary heart disease. Higher BMI was associated with an unfavorable lipid/lipoprotein profile and predicted probability of coronary heart disease in both vegetarians and omnivores. However, the associations were significantly diminished in Buddhist vegetarians. Vegetarian diets not only lower BMI, but also attenuate the BMI-related increases of atherogenic lipid/lipoprotein measures and the probability of coronary heart disease.

  1. Bone mineral density level by dual energy X-ray absorptiometry in rheumatoid arthritis.

    PubMed

    Makhdoom, Asadullah; Rahopoto, Muhammad Qasim; Awan, Shazia; Tahir, Syed Muhammad; Memon, Shazia; Siddiqui, Khaleeque Ahmed

    2017-01-01

    To observe the level of bone mineral density by Dual Energy X-ray Absorptiometry in rheumatoid arthritis patients. The observational study was conducted at Liaquat University of Medical and Health Sciences, Jamshoro, Pakistan, from January 2011 to December 2014. Bone mineral density was measured at the femoral neck, Ward's triangle and lumbar spine in patients 25-55 years of age who were diagnosed with rheumatoid arthritis. All the cases were assessed for bone mineral density of the appendicular as well as the axial skeleton. Data were collected through a designed proforma and analysis was performed using SPSS 21. Of the 229 rheumatoid arthritis patients, 33 (14.4%) were males. Five (15.1%) males had normal bone density, 14 (42.4%) had osteopenia and 14 (42.4%) had osteoporosis. Of the 196 (85.5%) females, 45 (29.9%) had normal bone density, 72 (37.7%) had osteopenia and 79 (40.3%) had osteoporosis. Of the 123 (53.7%) patients aged 30-50 years, 38 (30.9%) had normal bone density, 59 (48.0%) had osteopenia, and 26 (21.1%) had osteoporosis. Of the 106 (46.3%) patients over 50 years, 12 (11.3%) had normal bone density, 27 (25.5%) had osteopenia and 67 (63.2%) had osteoporosis. Osteoporosis and osteopenia were most common among rheumatoid arthritis patients. Assessment of bone mineral density by Dual Energy X-ray Absorptiometry, followed by timely therapy, can lead to quick relief of clinical symptoms.

  2. Modulation Based on Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
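
    The basic object is easy to construct. The sketch below samples one half cycle of a sinusoid and builds the histogram PDF; the sample and bin counts are arbitrary choices for illustration.

      import numpy as np

      rng = np.random.default_rng(12)

      t = np.linspace(0.0, np.pi, 2000)            # one half cycle of the carrier
      samples = np.sin(t) + 0.01 * rng.standard_normal(t.size)

      hist, edges = np.histogram(samples, bins=32, density=True)
      # 'hist' approximates the arcsine-shaped PDF of a sine wave; modulating
      # the waveform reshapes this histogram, which a receiver can classify.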

  3. A partial differential equation for pseudocontact shift.

    PubMed

    Charnock, G T P; Kuprov, Ilya

    2014-10-07

    It is demonstrated that pseudocontact shift (PCS), viewed as a scalar or a tensor field in three dimensions, obeys an elliptic partial differential equation with a source term that depends on the Hessian of the unpaired electron probability density. The equation enables straightforward PCS prediction and analysis in systems with delocalized unpaired electrons, particularly for the nuclei located in their immediate vicinity. It is also shown that the probability density of the unpaired electron may be extracted, using a regularization procedure, from PCS data.

  4. Does osteoporosis reduce the primary tilting stability of cementless acetabular cups?

    PubMed

    von Schulze Pellengahr, Christoph; von Engelhardt, Lars V; Wegener, Bernd; Müller, Peter E; Fottner, Andreas; Weber, Patrick; Ackermann, Ole; Lahner, Matthias; Teske, Wolfram

    2015-04-21

    Cementless hip cups need sufficient primary tilting stability to achieve osseointegration. The aim of the study was to assess differences in primary implant stability between osteoporotic bone and bone with normal bone density. To assess the influence of different cup designs, two types of threaded and two types of press-fit cups were tested. The maximum tilting moment for two different cementless threaded cups and two different cementless press-fit cups was determined in macerated human hip acetabuli with reduced (n=20) and normal bone density (n=20), determined using Q-CT. The tilting moments for each cup were determined five times in the group with reduced bone density and five times in the group with normal bone density, and the respective average values were calculated. The mean maximum extrusion force of the threaded cup Zintra was 5670.5 N (max. tilting moment 141.8 Nm) in bone with normal density and 5748.3 N (max. tilting moment 143.7 Nm) in osteoporotic bone. For the Hofer Imhof (HI) threaded cup it was 7681.5 N (192.0 Nm) in bone with normal density and 6828.9 N (max. tilting moment 170.7 Nm) in the group with osteoporotic bone. The mean maximum extrusion force of the macro-textured press-fit cup Metallsockel CL was 3824.6 N (max. tilting moment 95.6 Nm) in bone with normal density and 2246.2 N (max. tilting moment 56.2 Nm) in osteoporotic bone. For the Monoblock it was 1303.8 N (max. tilting moment 32.6 Nm) in normal and 1317 N (max. tilting moment 32.9 Nm) in osteoporotic bone. None of these differences reached statistical significance. A reduction of the maximum tilting moment in osteoporotic bone was noticed for the ESKA press-fit cup Metallsockel CL. Results on macerated bone specimens showed no statistically significant reduction of the maximum tilting moment in specimens with osteoporotic bone density compared to normal bone, neither for the threaded nor for the press-fit cups. With the limitation that the results were obtained using macerated bone, we could not detect any restrictions for the clinical indication of the examined cementless cups in osteoporotic bone.

  5. IDENTIFYING PROBABLE DIABETES MELLITUS AMONG HISPANICS/LATINOS FROM FOUR U.S. CITIES: FINDINGS FROM THE HISPANIC COMMUNITY HEALTH STUDY/STUDY OF LATINOS.

    PubMed

    Avilés-Santa, M Larissa; Schneiderman, Neil; Savage, Peter J; Kaplan, Robert C; Teng, Yanping; Pérez, Cynthia M; Suárez, Erick L; Cai, Jianwen; Giachello, Aida L; Talavera, Gregory A; Cowie, Catherine C

    2016-10-01

    The aim of this study was to compare the ability of American Diabetes Association (ADA) diagnostic criteria to identify U.S. Hispanics/Latinos from diverse heritage groups with probable diabetes mellitus, and to assess cardiovascular risk factor correlates of those criteria. Cross-sectional analysis of data from 15,507 adults from 6 Hispanic/Latino heritage groups, enrolled in the Hispanic Community Health Study/Study of Latinos. The prevalence of probable diabetes mellitus was estimated using individual or combinations of ADA-defined cut points. The sensitivity and specificity of these criteria at identifying diabetes mellitus from ADA-defined prediabetes and normoglycemia were evaluated. Prevalence ratios of hypertension, abnormal lipids, and elevated urinary albumin-creatinine ratio for unrecognized diabetes mellitus (versus prediabetes and normoglycemia) were calculated. Among Hispanics/Latinos (mean age, 43 years) with diabetes mellitus, 39.4% met laboratory test criteria for probable diabetes, and the prevalence varied by heritage group. Using the oral glucose tolerance test as the gold standard, the sensitivity of fasting plasma glucose (FPG) and hemoglobin A1c, alone or in combination, was low (18, 23, and 33%, respectively) at identifying probable diabetes mellitus. Individuals who met any criterion for probable diabetes mellitus had significantly higher (P<.05) prevalence of most cardiovascular risk factors than those with normoglycemia or prediabetes, and this association was not modified by Hispanic/Latino heritage group. FPG and hemoglobin A1c are not sensitive (but are highly specific) at detecting probable diabetes mellitus among Hispanics/Latinos, independent of heritage group. Assessing cardiovascular risk factors at diagnosis might prompt multitarget interventions and reduce health complications in this young population. Abbreviations: 2hPG = 2-hour post-glucose load plasma glucose; ADA = American Diabetes Association; BMI = body mass index; CV = cardiovascular; FPG = fasting plasma glucose; HbA1c = hemoglobin A1c; HCHS/SOL = Hispanic Community Health Study/Study of Latinos; HDL-C = high-density-lipoprotein cholesterol; NGT = normal glucose tolerance; NHANES = National Health and Nutrition Examination Survey; OGTT = oral glucose tolerance test; TG = triglyceride; UACR = urine albumin-creatinine ratio.

  6. Applications of conformal field theory to problems in 2D percolation

    NASA Astrophysics Data System (ADS)

    Simmons, Jacob Joseph Harris

    This thesis explores critical two-dimensional percolation in bounded regions in the continuum limit. The main method which we employ is conformal field theory (CFT). Our specific results follow from the null-vector structure of the c = 0 CFT that applies to critical two-dimensional percolation. We also make use of the duality symmetry obeyed at the percolation point, and the fact that percolation may be understood as the q-state Potts model in the limit q → 1. Our first results describe the correlations between points in the bulk and boundary intervals or points, i.e. the probability that the various points or intervals are in the same percolation cluster. These quantities correspond to order-parameter profiles under the given conditions, or cluster connection probabilities. We consider two specific cases: an anchoring interval, and two anchoring points. We derive results for these and related geometries using the CFT null-vectors for the corresponding boundary condition changing (bcc) operators. In addition, we exhibit several exact relationships between these probabilities. These relations between the various bulk-boundary connection probabilities involve parameters of the CFT called operator product expansion (OPE) coefficients. We then compute several of these OPE coefficients, including those arising in our new probability relations. Beginning with the familiar CFT operator φ1,2, which corresponds to a free-fixed spin boundary change in the q-state Potts model, we then develop physical interpretations of the bcc operators. We argue that, when properly normalized, higher-order bcc operators correspond to successive fusions of multiple φ1,2 operators. Finally, by identifying the derivative of φ1,2 with the operator φ1,4, we derive several new quantities called first crossing densities. These new results are then combined and integrated to obtain the three previously known crossing quantities in a rectangle: the probability of a horizontal crossing cluster, the probability of a cluster crossing both horizontally and vertically, and the expected number of horizontal crossing clusters. These three results were known to be solutions to a certain fifth-order differential equation, but until now no physically meaningful explanation had appeared. This differential equation arises naturally in our derivation.

  7. Application of the FINDER system to the search for epithermal vein gold-silver deposits : Kushikino, Japan, a case study

    USGS Publications Warehouse

    Singer, Donald A.; Kouda, Ryoichi

    1991-01-01

    The FINDER system employs geometric probability, Bayesian statistics, and the normal probability density function to integrate spatial and frequency information to produce a map of probabilities of target centers. Target centers can be mineral deposits, alteration associated with mineral deposits, or any other target that can be represented by a regular shape on a two-dimensional map. The size, shape, mean, and standard deviation for each variable are characterized in a control area and the results applied by means of FINDER to the study area. The Kushikino deposit consists of groups of quartz-calcite-adularia veins that have produced 55 tonnes of gold and 456 tonnes of silver since 1660. Part of a 6 by 10 km area near Kushikino served as a control area. Within the control area, data plotting, contouring, and cluster analysis were used to identify the barren and mineralized populations. Sodium was found to be depleted in an elliptically shaped area 3.1 by 1.6 km, potassium was both depleted and enriched locally in an elliptically shaped area 3.0 by 1.3 km, and sulfur was enriched in an elliptically shaped area 5.8 by 1.6 km. The potassium, sodium, and sulfur content from 233 surface rock samples were each used in FINDER to produce probability maps for the 12 by 30 km study area which includes Kushikino. High probability areas for each of the individual variables lie over, and are offset up to 4 km eastward from, the main Kushikino veins. In general, high probability areas identified by FINDER are displaced from the main veins and cover not only the host andesite and the dacite-andesite that is about the same age as the Kushikino mineralization, but also younger sedimentary rocks, andesite, and tuff units east and northeast of Kushikino. The maps also display the same patterns observed near Kushikino, but with somewhat lower probabilities, about 1.5 km east of the old gold prospect, Hajima, and in a broad zone 2.5 km east-west and 1 km north-south, centered 2 km west of the old gold prospect, Yaeyama.
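
    A loose sketch of the pdf-scoring idea (the actual FINDER system also incorporates geometric probability and elliptical target shapes; all parameter values below are invented for illustration): each grid cell's observed geochemical values can be scored by normal probability densities fitted to the mineralized population in the control area:

        import numpy as np
        from scipy.stats import norm

        # Hypothetical mineralized-population (mean, sd) per variable:
        params = {"K": (4.2, 0.8), "Na": (1.1, 0.3), "S": (0.9, 0.4)}

        def cell_score(obs):
            """Product of per-variable normal densities (independence assumed)."""
            score = 1.0
            for var, (mu, sd) in params.items():
                score *= norm.pdf(obs[var], loc=mu, scale=sd)
            return score

        print(cell_score({"K": 4.0, "Na": 1.2, "S": 1.1}))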

  8. Dynamic analysis of pedestrian crossing behaviors on traffic flow at unsignalized mid-block crosswalks

    NASA Astrophysics Data System (ADS)

    Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping

    2015-05-01

    It is important to study the effects of pedestrian crossing behaviors on traffic flow for solving the urban traffic jam problem. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed that considers the uncertain conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defines parallel updating rules for the motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that, regardless of the values of the vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability and pedestrian generation probability, the system flow shows the "increasing-saturating-decreasing" trend with increasing vehicle density; when the vehicle braking probability is low, emergency braking occurs easily and results in large fluctuations of the saturated flow; the saturated flow decreases slightly with increasing pedestrian acceleration crossing probability; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, reflecting the hesitant behavior of pedestrians when deciding whether to back off; the maximum flow is sensitive to the pedestrian generation probability, decreasing rapidly as that probability increases and becoming approximately zero when it exceeds 0.5. The simulations prove that the influence of frequent crossing behavior upon vehicle flow is immense; the vehicle flow decreases and rapidly enters a seriously congested state with increasing pedestrian generation probability.
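
    For orientation, the vehicle part of such a model follows the basic NaSch update rules; a minimal sketch of those four rules on a ring road (without the paper's pedestrian-conflict extensions, and with invented parameter values) is:

        import numpy as np

        rng = np.random.default_rng(0)

        def nasch_step(pos, vel, L=100, vmax=5, p_brake=0.3):
            """One parallel NaSch update: accelerate, avoid collision,
            randomly brake, then move (ring road of L cells)."""
            order = np.argsort(pos)
            pos, vel = pos[order], vel[order]
            gaps = (np.roll(pos, -1) - pos - 1) % L   # empty cells ahead
            vel = np.minimum(vel + 1, vmax)           # 1. accelerate
            vel = np.minimum(vel, gaps)               # 2. no collisions
            brake = rng.random(len(vel)) < p_brake    # 3. random braking
            vel = np.where(brake, np.maximum(vel - 1, 0), vel)
            pos = (pos + vel) % L                     # 4. move
            return pos, vel

        pos, vel = np.arange(0, 50, 10), np.zeros(5, dtype=int)
        for _ in range(100):
            pos, vel = nasch_step(pos, vel)
        print(pos, vel)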

  9. Replication of Low Density Electroformed Normal Incidence Optics

    NASA Technical Reports Server (NTRS)

    Ritter, Joseph M.

    2000-01-01

    Replicated electroformed light-weight nickel alloy mirrors can have high strength, low areal density (<3 kg/m^2), smooth finish, and controllable alloy composition. Progress at NASA MSFC SOMTC in developing normal incidence replicated nickel mirrors will be reported.

  10. The role of demographic compensation theory in incidental take assessments for endangered species

    USGS Publications Warehouse

    McGowan, Conor P.; Ryan, Mark R.; Runge, Michael C.; Millspaugh, Joshua J.; Cochrane, Jean Fitts

    2011-01-01

    Many endangered species laws provide exceptions to legislated prohibitions through incidental take provisions as long as take is the result of unintended consequences of an otherwise legal activity. These allowances presumably invoke the theory of demographic compensation, commonly applied to harvested species, by allowing limited harm as long as the probability of the species' survival or recovery is not reduced appreciably. Demographic compensation requires some density-dependent limits on survival or reproduction in a species' annual cycle that can be alleviated through incidental take. Using a population model for piping plovers in the Great Plains, we found that when the population is in rapid decline or when there is no density dependence, the probability of quasi-extinction increased linearly with increasing take. However, when the population is near stability and subject to density-dependent survival, there was no relationship between quasi-extinction probability and take rates. We note, however, that a brief examination of piping plover demography and annual cycles suggests little room for compensatory capacity. We argue that a population's capacity for demographic compensation of incidental take should be evaluated when considering incidental allowances because compensation is the only mechanism whereby a population can absorb the negative effects of take without incurring a reduction in the probability of survival in the wild. With many endangered species there is probably little known about density dependence and compensatory capacity. Under these circumstances, using multiple system models (with and without compensation) to predict the population's response to incidental take and implementing follow-up monitoring to assess species response may be valuable in increasing knowledge and improving future decision making.

  11. Ensemble Kalman filtering in presence of inequality constraints

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P. J.

    2009-04-01

    Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it looks hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is just equally distributed over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is exactly zero is itself zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. For the full Kalman filter the formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
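
    A minimal sketch of the delta-at-the-bound idea (illustrative only; a real ensemble Kalman filter would apply this within the analysis step): clipping Gaussian samples to the physical range concentrates the truncated mass at the bound, so the probability of, say, exactly zero sea-ice concentration becomes genuinely nonzero:

        import numpy as np

        rng = np.random.default_rng(1)
        prior = rng.normal(loc=0.05, scale=0.1, size=100_000)  # unconstrained
        constrained = np.clip(prior, 0.0, 1.0)   # truncated Gaussian + deltas

        print("P(x = 0) =", np.mean(constrained == 0.0))  # point mass at zero
        print("mean     =", constrained.mean())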

  12. Oregon Cascades Play Fairway Analysis: Faults and Heat Flow maps

    DOE Data Explorer

    Adam Brandt

    2015-11-15

    This submission includes a fault map of the Oregon Cascades and backarc, a probability map of heat flow, and a fault density probability layer. More extensive metadata can be found within each zip file.

  13. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a probability density function of the form c x^k (1-x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  14. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
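
    The linearity result can be seen directly: a beta prior on the rate is conjugate to binomially distributed jump counts, so the posterior mean (the MMSE estimate) is affine in the observed count. A hedged sketch of this standard conjugate computation (notation is ours, not the paper's):

        def mmse_rate(k, n, a, b):
            """Posterior mean of rate ~ Beta(a, b) after observing
            k jumps in n time slots (binomial likelihood)."""
            return (a + k) / (a + b + n)

        # Affine in k for fixed n, a, b:
        print([round(mmse_rate(k, 10, 2.0, 3.0), 3) for k in range(5)])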

  15. Information Density and Syntactic Repetition.

    PubMed

    Temperley, David; Gildea, Daniel

    2015-11-01

    In noun phrase (NP) coordinate constructions (e.g., NP and NP), there is a strong tendency for the syntactic structure of the second conjunct to match that of the first; the second conjunct in such constructions is therefore low in syntactic information. The theory of uniform information density predicts that low-information syntactic constructions will be counterbalanced by high information in other aspects of that part of the sentence, and high-information constructions will be counterbalanced by other low-information components. Three predictions follow: (a) lexical probabilities (measured by N-gram probabilities and head-dependent probabilities) will be lower in second conjuncts than first conjuncts; (b) lexical probabilities will be lower in matching second conjuncts (those whose syntactic expansions match the first conjunct) than nonmatching ones; and (c) syntactic repetition should be especially common for low-frequency NP expansions. Corpus analysis provides support for all three of these predictions. Copyright © 2015 Cognitive Science Society, Inc.
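
    A hedged sketch of one of the probability measures mentioned above (an add-one-smoothed bigram model; the corpus and vocabulary here are toy stand-ins, not the study's data):

        import math
        from collections import Counter

        def bigram_logprob(tokens, corpus, vocab_size):
            """Add-one-smoothed bigram log-probability of a token sequence."""
            unigrams = Counter(corpus)
            bigrams = Counter(zip(corpus, corpus[1:]))
            lp = 0.0
            for w1, w2 in zip(tokens, tokens[1:]):
                lp += math.log((bigrams[(w1, w2)] + 1)
                               / (unigrams[w1] + vocab_size))
            return lp

        corpus = "the cat and the dog and the cat".split()
        print(bigram_logprob("the cat".split(), corpus, vocab_size=5))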

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kastner, S.O.; Bhatia, A.K.

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 Å, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t_ij, related to "taboo" probabilities of Markov chain theory. The t_ij are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.

  17. Derivation of the expressions for γ50 and D50 for different individual TCP and NTCP models

    NASA Astrophysics Data System (ADS)

    Stavreva, N.; Stavrev, P.; Warkentin, B.; Fallone, B. G.

    2002-10-01

    This paper presents a complete set of formulae for the position (D50) and the normalized slope (γ50) of the dose-response relationship based on the most commonly used radiobiological models for tumours as well as for normal tissues. The functional subunit response models (critical element and critical volume) are used in the derivation of the formulae for the normal tissue. Binomial statistics are used to describe the tumour control probability, the functional subunit response as well as the normal tissue complication probability. The formulae are derived for the single hit and linear quadratic models of cell kill in terms of the number of fractions and dose per fraction. It is shown that the functional subunit models predict very steep, almost step-like, normal tissue individual dose-response relationships. Furthermore, the formulae for the normalized gradient depend on the cellular parameters α and β when written in terms of number of fractions, but not when written in terms of dose per fraction.
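
    For reference, a hedged note on the quantity itself (standard radiobiology usage; the paper's exact conventions may differ in detail): the normalized slope of a dose-response curve P(D) is

        \gamma = D \, \frac{dP}{dD},
        \qquad
        \gamma_{50} = D_{50} \left. \frac{dP}{dD} \right|_{D = D_{50}},

    i.e., the dimensionless steepness of the response curve evaluated at the 50% response dose D_50.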

  18. Molecular dynamics simulations of n-hexane at 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl) imide interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lísal, Martin; Izák, Pavel

    Molecular dynamics simulations of n-hexane adsorbed onto the interface of 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl) imide ([bmim][Tf2N]) are performed at three n-hexane surface densities, ranging from 0.7 to 2.3 μmol/m^2 at 300 K. For the [bmim][Tf2N] room-temperature ionic liquid, we use a non-polarizable all-atom force field with the partial atomic charges based on ab initio calculations for the isolated ion pair. The net charges of the ions are ±0.89e, which mimics the anion to cation charge transfer and polarization effects. The OPLS-AA force field is employed for modeling of n-hexane. The surface tension is computed using the mechanical route and its value decreases with increase of the n-hexane surface density. The [bmim][Tf2N]/n-hexane interface is analyzed using the intrinsic method, and the structural and dynamic properties of the interfacial, sub-interfacial, and central layers are computed. We determine the surface roughness, global and intrinsic density profiles, and orientation ordering of the molecules to describe the structure of the interface. We further compute the survival probability, normal and lateral self-diffusion coefficients, and re-orientation correlation functions to elucidate the effects of n-hexane on dynamics of the cations and anions in the layers.

  19. Molecular dynamics simulations of n-hexane at 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl) imide interface.

    PubMed

    Lísal, Martin; Izák, Pavel

    2013-07-07

    Molecular dynamics simulations of n-hexane adsorbed onto the interface of 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl) imide ([bmim][Tf2N]) are performed at three n-hexane surface densities, ranging from 0.7 to 2.3 μmol/m(2) at 300 K. For the [bmim][Tf2N] room-temperature ionic liquid, we use a non-polarizable all-atom force field with the partial atomic charges based on ab initio calculations for the isolated ion pair. The net charges of the ions are ±0.89e, which mimics the anion to cation charge transfer and polarization effects. The OPLS-AA force field is employed for modeling of n-hexane. The surface tension is computed using the mechanical route and its value decreases with increase of the n-hexane surface density. The [bmim][Tf2N]/n-hexane interface is analyzed using the intrinsic method, and the structural and dynamic properties of the interfacial, sub-interfacial, and central layers are computed. We determine the surface roughness, global and intrinsic density profiles, and orientation ordering of the molecules to describe the structure of the interface. We further compute the survival probability, normal and lateral self-diffusion coefficients, and re-orientation correlation functions to elucidate the effects of n-hexane on dynamics of the cations and anions in the layers.

  20. Universal characteristics of fractal fluctuations in prime number distribution

    NASA Astrophysics Data System (ADS)

    Selvam, A. M.

    2014-11-01

    The frequency of occurrence of prime numbers at unit number spacing intervals exhibits self-similar fractal fluctuations concomitant with inverse power law form for power spectrum generic to dynamical systems in nature such as fluid flows, stock market fluctuations and population dynamics. The physics of long-range correlations exhibited by fractals is not yet identified. A recently developed general systems theory visualizes the eddy continuum underlying fractals to result from the growth of large eddies as the integrated mean of enclosed small scale eddies, thereby generating a hierarchy of eddy circulations or an inter-connected network with associated long-range correlations. The model predictions are as follows: (1) The probability distribution and power spectrum of fractals follow the same inverse power law which is a function of the golden mean. The predicted inverse power law distribution is very close to the statistical normal distribution for fluctuations within two standard deviations from the mean of the distribution. (2) Fractals signify quantum-like chaos since variance spectrum represents probability density distribution, a characteristic of quantum systems such as electron or photon. (3) Fractal fluctuations of frequency distribution of prime numbers signify spontaneous organization of underlying continuum number field into the ordered pattern of the quasiperiodic Penrose tiling pattern. The model predictions are in agreement with the probability distributions and power spectra for different sets of frequency of occurrence of prime numbers at unit number interval for successive 1000 numbers. Prime numbers in the first 10 million numbers were used for the study.

  1. Statistical description of non-Gaussian samples in the F2 layer of the ionosphere during heliogeophysical disturbances

    NASA Astrophysics Data System (ADS)

    Sergeenko, N. P.

    2017-11-01

    An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is addressed in this paper. Time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances. At sufficiently small probability levels, there are arbitrarily large deviations from the model of the normal process. Therefore, an attempt is made to describe the statistical samples {δfoF2} with a Poisson-based model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic, excessive-asymmetric probability-density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to Kolmogorov's criterion, the probabilities of the coincidence of a posteriori distributions with the theoretical ones are P = 0.7-0.9. The analysis supports the applicability of a model based on the Poisson random process for the statistical description and probabilistic estimation of the variations {δfoF2} during heliogeophysical disturbances.
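
    A sketch of the goodness-of-fit step (scipy's one-sample Kolmogorov-Smirnov test; the sample and reference distribution here are placeholders, not the paper's foF2 data or its Poisson-based model):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        sample = rng.normal(size=500)            # stand-in for {δfoF2}
        stat, p = stats.kstest(sample, "norm")   # compare to model CDF
        print(f"KS statistic = {stat:.3f}, P = {p:.2f}")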

  2. Statistical Algorithms for Designing Geophysical Surveys to Detect UXO Target Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Robert F.; Carlson, Deborah K.; Gilbert, Richard O.

    2005-07-29

    The U.S. Department of Defense is in the process of assessing and remediating closed, transferred, and transferring military training ranges across the United States. Many of these sites have areas that are known to contain unexploded ordnance (UXO). Other sites or portions of sites are not expected to contain UXO, but some verification of this expectation using geophysical surveys is needed. Many sites are so large that it is often impractical and/or cost prohibitive to perform surveys over 100% of the site. In that case, it is particularly important to be explicit about the performance required of the survey. This article presents the statistical algorithms developed to support the design of geophysical surveys along transects (swaths) to find target areas (TAs) of anomalous geophysical readings that may indicate the presence of UXO. The algorithms described here determine 1) the spacing between transects that should be used for the surveys to achieve a specified probability of traversing the TA, 2) the probability of both traversing and detecting a TA of anomalous geophysical readings when the spatial density of anomalies within the TA is either uniform (unchanging over space) or has a bivariate normal distribution, and 3) the probability that a TA exists when it was not found by surveying along transects. These algorithms have been implemented in the Visual Sample Plan (VSP) software to develop cost-effective transect survey designs that meet performance objectives.

  3. Statistical Algorithms for Designing Geophysical Surveys to Detect UXO Target Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Robert F.; Carlson, Deborah K.; Gilbert, Richard O.

    2005-07-28

    The U.S. Department of Defense is in the process of assessing and remediating closed, transferred, and transferring military training ranges across the United States. Many of these sites have areas that are known to contain unexploded ordnance (UXO). Other sites or portions of sites are not expected to contain UXO, but some verification of this expectation using geophysical surveys is needed. Many sites are so large that it is often impractical and/or cost prohibitive to perform surveys over 100% of the site. In such cases, it is particularly important to be explicit about the performance required of the surveys. This article presents the statistical algorithms developed to support the design of geophysical surveys along transects (swaths) to find target areas (TAs) of anomalous geophysical readings that may indicate the presence of UXO. The algorithms described here determine (1) the spacing between transects that should be used for the surveys to achieve a specified probability of traversing the TA, (2) the probability of both traversing and detecting a TA of anomalous geophysical readings when the spatial density of anomalies within the TA is either uniform (unchanging over space) or has a bivariate normal distribution, and (3) the probability that a TA exists when it was not found by surveying along transects. These algorithms have been implemented in the Visual Sample Plan (VSP) software to develop cost-effective transect survey designs that meet performance objectives.
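
    For the simplest case of item (1) — a circular target area and parallel transects, with the target center located uniformly at random relative to the transect grid — the traversal probability reduces to a one-line geometric-probability formula. A hedged sketch, much simplified relative to VSP's elliptical-target machinery:

        def p_traverse(target_diameter, spacing):
            """P(at least one transect crosses a circular TA of the given
            diameter), for parallel transects a fixed spacing apart."""
            return min(target_diameter / spacing, 1.0)

        # Spacing giving a 95% chance of traversing a 50 m target:
        print(50.0 / 0.95)          # about 52.6 m
        print(p_traverse(50.0, 52.6))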

  4. The multi temporal/multi-model approach to predictive uncertainty assessment in real-time flood forecasting

    NASA Astrophysics Data System (ADS)

    Barbetta, Silvia; Coccia, Gabriele; Moramarco, Tommaso; Brocca, Luca; Todini, Ezio

    2017-08-01

    This work extends the multi-temporal approach of the Model Conditional Processor (MCP-MT) to the multi-model case and to the four Truncated Normal Distributions (TNDs) approach, demonstrating the improvement over the single-temporal approach. The study is framed in the context of probabilistic Bayesian decision-making, which is appropriate for taking rational decisions on uncertain future outcomes. As opposed to the direct use of deterministic forecasts, the probabilistic forecast identifies a predictive probability density function that represents fundamental knowledge of future occurrences. The added value of MCP-MT is the identification of the probability that a critical situation will happen within the forecast lead-time and when, more likely, it will occur. MCP-MT is thoroughly tested for both single-model and multi-model configurations at a gauged site on the Tiber River, central Italy. The stages forecasted by two operational deterministic models, STAFOM-RCM and MISDc, are considered for the study. The dataset used for the analysis consists of hourly data from 34 flood events selected from a time series of six years. MCP-MT improves over the original models' forecasts: the peak overestimation characterizing MISDc and the delayed rising-limb forecast characterizing STAFOM-RCM are significantly mitigated, with the mean error on peak stage reduced from 45 to 5 cm and the coefficient of persistence increased from 0.53 up to 0.75. The results show that MCP-MT outperforms the single-temporal approach and is potentially useful for supporting decision-making because the exceedance probability of hydrometric thresholds within a forecast horizon and the most probable flooding time can be estimated.

  5. A matrix-based approach to solving the inverse Frobenius-Perron problem using sequences of density functions of stochastically perturbed dynamical systems

    NASA Astrophysics Data System (ADS)

    Nie, Xiaokai; Coca, Daniel

    2018-01-01

    The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.

  6. A matrix-based approach to solving the inverse Frobenius-Perron problem using sequences of density functions of stochastically perturbed dynamical systems.

    PubMed

    Nie, Xiaokai; Coca, Daniel

    2018-01-01

    The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
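
    A toy sketch of the matrix-based idea (our own illustration, not the paper's algorithm in detail): discretize densities on m bins, stack input densities and their images as columns of F0 and F1 with F1 = P F0, and recover the transfer matrix P by least squares:

        import numpy as np

        m, n = 8, 50                        # bins, number of density pairs
        rng = np.random.default_rng(3)
        P_true = rng.random((m, m))
        P_true /= P_true.sum(axis=0)        # column-stochastic transfer matrix
        F0 = rng.dirichlet(np.ones(m), size=n).T    # discretized densities
        F1 = P_true @ F0                    # their images under the dynamics

        X, *_ = np.linalg.lstsq(F0.T, F1.T, rcond=None)  # F0.T X ≈ F1.T
        print(np.allclose(X.T, P_true, atol=1e-8))       # True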

  7. Theoretical Calculation of the Electron Transport Parameters and Energy Distribution Function for CF3I with noble gases mixtures using Monte Carlo simulation program

    NASA Astrophysics Data System (ADS)

    Jawad, Enas A.

    2018-05-01

    In this paper, a Monte Carlo simulation program has been used to calculate the electron energy distribution function (EEDF) and electron transport parameters for gas mixtures of trifluoroiodomethane (CF3I), an environmentally friendly gas, with noble gases (argon, helium, krypton, neon and xenon). The electron transport parameters are assessed in the range of E/N (E is the electric field and N is the gas number density of background gas molecules) between 100 and 2000 Td (1 Townsend = 10^-17 V cm^2) at room temperature. These parameters are the electron mean energy (ε), the density-normalized longitudinal diffusion coefficient (NDL) and the density-normalized mobility (μN). The impact of CF3I in the noble gas mixtures is strongly apparent in the values of the electron mean energy, the density-normalized longitudinal diffusion coefficient and the density-normalized mobility. The results of the calculation agree well with experimental results.

  8. Avoidance of voiding cystourethrography in infants younger than 3 months with Escherichia coli urinary tract infection and normal renal ultrasound.

    PubMed

    Pauchard, Jean-Yves; Chehade, Hassib; Kies, Chafika Zohra; Girardin, Eric; Cachat, Francois; Gehri, Mario

    2017-09-01

    Urinary tract infection (UTI) represents the most common bacterial infection in infants, and its prevalence increases with the presence of high-grade vesicoureteral reflux (VUR). However, voiding cystourethrography (VCUG) is invasive, and its indication in infants <3 months is not yet defined. This study aims to investigate, in infants aged 0-3 months, whether the presence of Escherichia coli versus non-E. coli bacteria and/or a normal or abnormal renal ultrasound (US) could obviate the need for VCUG. One hundred and twenty-two infants with a first febrile UTI were enrolled. High-grade VUR was defined by the presence of VUR grade ≥III. The presence of high-grade VUR was recorded using VCUG, and correlated with the presence of E. coli/non-E. coli UTI and with the presence of normal/abnormal renal US. Bayes' theorem was used to calculate pretest and post-test probability. The probability of high-grade VUR was 3% in the presence of urinary E. coli infection. Adding a normal renal US finding decreased this probability to 1%. However, in the presence of non-E. coli bacteria, the probability of high-grade VUR was 26%, and adding an abnormal US finding further increased this probability to 55%. In infants aged 0-3 months with a first febrile UTI, the presence of E. coli and normal renal US findings allows VCUG to be safely avoided. Performing VCUG only in infants with UTI secondary to non-E. coli bacteria and/or abnormal US would save many unnecessary invasive procedures and limit radiation exposure, with a very low risk (<1%) of missing high-grade VUR. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
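
    The pretest-to-post-test update quoted above is a direct application of Bayes' theorem in odds form; a hedged sketch (the likelihood ratio below is chosen to reproduce the reported 3% → 1% shift, not taken from the paper):

        def post_test_probability(pretest, likelihood_ratio):
            """post-odds = pre-odds * LR, converted back to a probability."""
            odds = pretest / (1.0 - pretest)
            post_odds = odds * likelihood_ratio
            return post_odds / (1.0 + post_odds)

        # Negative likelihood ratio of ~0.33 for a normal renal US:
        print(post_test_probability(0.03, 0.33))   # about 0.01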

  9. Correlation of outer nuclear layer thickness with cone density values in patients with retinitis pigmentosa and healthy subjects.

    PubMed

    Menghini, Moreno; Lujan, Brandon J; Zayit-Soudry, Shiri; Syed, Reema; Porco, Travis C; Bayabo, Kristine; Carroll, Joseph; Roorda, Austin; Duncan, Jacque L

    2014-12-16

    We studied the correlation between outer nuclear layer (ONL) thickness and cone density in normal eyes and eyes with retinitis pigmentosa (RP). Spectral-domain optical coherence tomography (SD-OCT) scans were acquired using a displaced pupil entry position of the scanning beam to distinguish Henle's fiber layer from the ONL in 20 normal eyes (10 subjects) and 12 eyes with RP (7 patients). Cone photoreceptors were imaged using adaptive optics scanning laser ophthalmoscopy. The ONL thickness and cone density were measured at 0.5° intervals along the horizontal meridian through the fovea nasally and temporally. The ONL thickness and cone density were correlated using Spearman's rank correlation coefficient r. Cone densities averaged over the central 6° were lower in eyes with RP than normal, but showed high variability in both groups. The ONL thickness and cone density were significantly correlated when all retinal eccentricities were combined (r = 0.74); the correlation for regions within 0.5° to 1.5° eccentricity was stronger (r = 0.67) than between 1.5° and 3.0° eccentricity (r = 0.23). Although cone densities were lower between 0.5° and 1.5° in eyes with RP, ONL thickness measures at identical retinal locations were similar in the two groups (P = 0.31), and interindividual variation was high for ONL and cone density measures. Although ONL thickness and retinal eccentricity were important predictors of cone density, eccentricity was over 3 times more important. The ONL thickness and cone density were correlated in normal eyes and eyes with RP, but both were strongly correlated with retinal eccentricity, precluding estimation of cone density from ONL thickness. (ClinicalTrials.gov number, NCT00254605.). Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.

  10. Correlation of Outer Nuclear Layer Thickness With Cone Density Values in Patients With Retinitis Pigmentosa and Healthy Subjects

    PubMed Central

    Menghini, Moreno; Lujan, Brandon J.; Zayit-Soudry, Shiri; Syed, Reema; Porco, Travis C.; Bayabo, Kristine; Carroll, Joseph; Roorda, Austin; Duncan, Jacque L.

    2015-01-01

    Purpose. We studied the correlation between outer nuclear layer (ONL) thickness and cone density in normal eyes and eyes with retinitis pigmentosa (RP). Methods. Spectral-domain optical coherence tomography (SD-OCT) scans were acquired using a displaced pupil entry position of the scanning beam to distinguish Henle's fiber layer from the ONL in 20 normal eyes (10 subjects) and 12 eyes with RP (7 patients). Cone photoreceptors were imaged using adaptive optics scanning laser ophthalmoscopy. The ONL thickness and cone density were measured at 0.5° intervals along the horizontal meridian through the fovea nasally and temporally. The ONL thickness and cone density were correlated using Spearman's rank correlation coefficient r. Results. Cone densities averaged over the central 6° were lower in eyes with RP than normal, but showed high variability in both groups. The ONL thickness and cone density were significantly correlated when all retinal eccentricities were combined (r = 0.74); the correlation for regions within 0.5° to 1.5° eccentricity was stronger (r = 0.67) than between 1.5° and 3.0° eccentricity (r = 0.23). Although cone densities were lower between 0.5° and 1.5° in eyes with RP, ONL thickness measures at identical retinal locations were similar in the two groups (P = 0.31), and interindividual variation was high for ONL and cone density measures. Although ONL thickness and retinal eccentricity were important predictors of cone density, eccentricity was over 3 times more important. Conclusions. The ONL thickness and cone density were correlated in normal eyes and eyes with RP, but both were strongly correlated with retinal eccentricity, precluding estimation of cone density from ONL thickness. (ClinicalTrials.gov number, NCT00254605.) PMID:25515570

  11. The risks and returns of stock investment in a financial market

    NASA Astrophysics Data System (ADS)

    Li, Jiang-Cheng; Mei, Dong-Cheng

    2013-03-01

    The risks and returns of stock investment are discussed via numerical simulation of the mean escape time and the probability density function of stock price returns in the modified Heston model with time delay. Through analyzing the effects of delay time and initial position on the risks and returns of stock investment, the results indicate that: (i) there is an optimal delay time matching minimal risks of stock investment, maximal average stock price returns and strongest stability of stock price returns for strong elasticity of demand of stocks (EDS), but the opposite holds for weak EDS; (ii) increasing the initial position reduces the risks of stock investment, strengthens the average stock price returns and enhances the stability of stock price returns. Finally, the probability density function of stock price returns, the probability density function of volatility and the correlation function of stock price returns are compared with those in other literature, and good agreement is found.
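
    For orientation, a minimal Euler discretization of the standard Heston model (without the paper's time-delay modification; all parameter values are illustrative) that generates the log-return samples whose pdf is studied:

        import numpy as np

        rng = np.random.default_rng(4)
        mu, kappa, theta, xi, rho = 0.05, 2.0, 0.04, 0.3, -0.5
        v, dt = 0.04, 1.0 / 252.0
        returns = []
        for _ in range(10_000):
            z1, z2 = rng.normal(), rng.normal()
            z2 = rho * z1 + np.sqrt(1 - rho**2) * z2     # correlate shocks
            r = (mu - 0.5 * v) * dt + np.sqrt(v * dt) * z1
            v = max(v + kappa * (theta - v) * dt
                    + xi * np.sqrt(v * dt) * z2, 1e-12)  # CIR variance, floored
            returns.append(r)
        print(np.mean(returns), np.std(returns))         # sample pdf statistics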

  12. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.

  13. Estimation of proportions in mixed pixels through their region characterization

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    A region of mixed pixels can be characterized through the probability density function of proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estmates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
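
    The transformation mentioned for the two-class case is the classical simultaneous diagonalization: solving the generalized symmetric eigenproblem S1 v = λ S2 v yields a matrix V with V.T S2 V = I and V.T S1 V diagonal. A sketch with random covariance stand-ins (not the paper's data):

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(5)
        A = rng.random((4, 4)); S1 = A @ A.T + np.eye(4)   # class-1 covariance
        B = rng.random((4, 4)); S2 = B @ B.T + np.eye(4)   # class-2 covariance

        lam, V = eigh(S1, S2)       # generalized eigenproblem S1 v = lam S2 v
        print(np.allclose(V.T @ S2 @ V, np.eye(4)))        # True
        print(np.allclose(V.T @ S1 @ V, np.diag(lam)))     # True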

  14. Characterization of nonGaussian atmospheric turbulence for prediction of aircraft response statistics

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1977-01-01

    Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low frequency component plus a variance modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities also were shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationary assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scale of the fluctuations in the variance, the fluctuations in the turbulence itself, and on the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationary assumption were developed.

  15. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    In this paper, a H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction, which leads to a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose the fault in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  16. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew

    2009-03-01

    In this paper, a H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction, which leads to a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose the fault in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  17. A comparative study of nonparametric methods for pattern recognition

    NASA Technical Reports Server (NTRS)

    Hahn, S. F.; Nelson, G. D.

    1972-01-01

    The applied research discussed in this report determines and compares the correct classification percentage of the nonparametric sign test, Wilcoxon's signed rank test, and K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier which assumes the data to be Gaussian even though it may not be. The K-class classifier has the advantage over the Bayes in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment was always unimodal.

  18. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier- Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
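
    The mass-weighted average referred to here is the usual Favre average; in standard notation (a hedged gloss, with symbols as commonly used rather than taken verbatim from the paper), for a variable φ and density ρ,

        \widetilde{\phi} \;=\; \frac{\langle \rho \, \phi \rangle}{\langle \rho \rangle}

    where ⟨·⟩ denotes the ensemble average, so the APDF reproduces the mass-density weighted, ensemble averaged mean variables exactly.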

  19. Spatial and temporal Brook Trout density dynamics: Implications for conservation, management, and monitoring

    USGS Publications Warehouse

    Wagner, Tyler; Jefferson T. Deweber,; Jason Detar,; Kristine, David; John A. Sweka,

    2014-01-01

    Many potential stressors to aquatic environments operate over large spatial scales, prompting the need to assess and monitor both site-specific and regional dynamics of fish populations. We used hierarchical Bayesian models to evaluate the spatial and temporal variability in density and capture probability of age-1 and older Brook Trout Salvelinus fontinalis from three-pass removal data collected at 291 sites over a 37-year time period (1975–2011) in Pennsylvania streams. There was high between-year variability in density, with annual posterior means ranging from 2.1 to 10.2 fish/100 m2; however, there was no significant long-term linear trend. Brook Trout density was positively correlated with elevation and negatively correlated with percent developed land use in the network catchment. Probability of capture did not vary substantially across sites or years but was negatively correlated with mean stream width. Because of the low spatiotemporal variation in capture probability and a strong correlation between first-pass CPUE (catch/min) and three-pass removal density estimates, the use of an abundance index based on first-pass CPUE could represent a cost-effective alternative to conducting multiple-pass removal sampling for some Brook Trout monitoring and assessment objectives. Single-pass indices may be particularly relevant for monitoring objectives that do not require precise site-specific estimates, such as regional monitoring programs that are designed to detect long-term linear trends in density.
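
    As a simpler relative of the three-pass removal estimates modeled above (a hedged sketch: the classic two-pass removal estimator with constant capture probability, using invented counts, not the study's hierarchical Bayesian fit):

        def two_pass_removal(c1, c2):
            """Two-pass removal estimator: N_hat = c1^2 / (c1 - c2),
            p_hat = (c1 - c2) / c1. Requires c1 > c2."""
            n_hat = c1 * c1 / (c1 - c2)
            p_hat = (c1 - c2) / c1
            return n_hat, p_hat

        print(two_pass_removal(60, 20))   # (90.0, 0.667): N ~ 90 fish, p ~ 2/3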

  20. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
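
    A brief sketch of evaluating that density and the associated confidence limits with scipy (parameter values are placeholders; how ensemble-averaged periodogram bins map to degrees of freedom and noncentrality follows the paper, not this snippet):

        import numpy as np
        from scipy import stats

        df, nc = 8, 4.0                      # degrees of freedom, noncentrality
        x = np.linspace(1.0, 30.0, 5)
        print(stats.ncx2.pdf(x, df, nc))     # density of the spectral estimate
        print(stats.ncx2.ppf([0.025, 0.975], df, nc))   # 95% confidence limits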

  1. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    PubMed Central

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2014-01-01

    Summary Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students’ understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference. PMID:25419016

  2. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution.

    PubMed

    Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.
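
    A sketch of the conditional-probability computation that the web application illustrates (the bivariate normal conditional formula is standard; the height/weight parameters below are invented, not the study's dataset):

        from scipy.stats import norm

        muX, sdX = 65.0, 3.0       # height (inches)
        muY, sdY = 130.0, 20.0     # weight (pounds)
        rho = 0.5

        def p_weight_between(lo, hi, x):
            """P(lo < Y < hi | X = x) for bivariate normal (X, Y):
            Y|X=x is normal with mean muY + rho*(sdY/sdX)*(x - muX)
            and sd sdY*sqrt(1 - rho**2)."""
            m = muY + rho * (sdY / sdX) * (x - muX)
            s = sdY * (1.0 - rho**2) ** 0.5
            return norm.cdf(hi, m, s) - norm.cdf(lo, m, s)

        # P(120 < weight < 140 | height = average):
        print(p_weight_between(120.0, 140.0, muX))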

  3. THE DEPENDENCE OF PRESTELLAR CORE MASS DISTRIBUTIONS ON THE STRUCTURE OF THE PARENTAL CLOUD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parravano, Antonio; Sanchez, Nestor; Alfaro, Emilio J.

    2012-08-01

    The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass and applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle and Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distribution representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with the Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle and Chabrier and Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in the Trapezium and Ophiucus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected √N statistical fluctuations, increasing with H.

  4. Does Quantitative Tibial Ultrasound Predict Low Bone Mineral Density Defined by Dual Energy X-Ray Absorptiometry?

    PubMed Central

    Birtane, Murat; Ekuklu, Galip; Cermik, Fikret; Tuna, Filiz; Kokino, Siranus

    2008-01-01

    Purpose: Efforts for the early detection of bone loss and subsequent fracture risk by quantitative ultrasound (QUS), which is a non-invasive, radiation-free, and cheaper method, seem rational to reduce management costs. In this study, we aimed to assess the probable correlation of speed of sound (SOS) values obtained by QUS with bone mineral density (BMD) as measured by the gold-standard method, dual energy X-ray absorptiometry (DEXA), and to investigate the diagnostic value of QUS for defining low BMD. Materials and Methods: One hundred twenty-two postmenopausal women with prior standard DEXA measurements were included in the study. Spine and proximal femur (neck, trochanter, and Ward's triangle) BMD were assessed in a standard protocol by DEXA. The middle point of the right tibia was chosen for SOS measurement by tibial QUS. Results: The SOS values were significantly higher in the normal BMD group (t score > -1) at all measurement sites except the lumbar region, when compared with the low BMD group (t score < -1). SOS was negatively correlated with age (r = -0.66) and months since menopause (r = -0.57). The sensitivity, specificity, and positive and negative predictive values of the QUS t score for diagnosing low BMD did not seem satisfactory at either measurement site. Conclusion: Tibial SOS correlated weakly with BMD values of the femur and lumbar spine as measured by DEXA, and its diagnostic value did not seem high enough to discriminate between normal and low BMD at these sites. PMID:18581594

  5. Antidiabetic, antihyperlipidemic, and antioxidant activities of Musa balbisiana Colla. in Type 1 diabetic rats

    PubMed Central

    Borah, Mukundam; Das, Swarnamoni

    2017-01-01

    Objectives: To evaluate the antidiabetic, antihyperlipidemic, and antioxidant activities of the ethanolic extracts of the flowers and inflorescence stalk of Musa balbisiana Colla. in streptozotocin (STZ)-induced Type 1 diabetic rats. Materials and Methods: Diabetes was induced in male Wistar albino rats (150–200 g) by a single intraperitoneal injection of STZ (60 mg/kg b.w. i.p.). Albino rats (n = 25) were divided into five groups of five animals each. Group A (normal control) and Group B (diabetic control) received normal saline (10 ml/kg/day p.o.), whereas Group C and Group D received 250 mg/kg/day p.o. of flower and inflorescence stalk ethanolic extracts, respectively, for 2 weeks. Group E (diabetic standard) received 6 U/kg/day s.c. of Neutral Protamine Hagedorn insulin. Fasting blood sugar, serum insulin, catalase (CAT), malondialdehyde (MDA), and serum lipid profile were estimated at specific intervals of time. The effect of the extracts on intestinal glucose absorption was also evaluated to determine the probable mechanism of action. Results: Diabetic controls exhibited significantly increased blood glucose, serum cholesterol, triglycerides, low-density lipoprotein, and serum MDA levels and decreased serum CAT and high-density lipoprotein levels, which were significantly reverted by the flower and inflorescence stalk ethanolic extracts after 2 weeks. Serum insulin levels increased (P < 0.05), and intestinal glucose absorption decreased significantly (P < 0.01), in the extract-treated groups. Conclusion: Flower and inflorescence stalk of M. balbisiana Colla. possess significant antidiabetic, antihyperlipidemic, and antioxidant activities in STZ-induced Type 1 diabetic rats. PMID:28458426

  6. The Dependence of Prestellar Core Mass Distributions on the Structure of the Parental Cloud

    NASA Astrophysics Data System (ADS)

    Parravano, Antonio; Sánchez, Néstor; Alfaro, Emilio J.

    2012-08-01

    The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass and applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle & Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distributions representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with the Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle & Chabrier and Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in the Trapezium and Ophiuchus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected √N statistical fluctuations, increasing with H.

  7. Replication of Low Density Electroformed Normal Incidence Optics

    NASA Technical Reports Server (NTRS)

    Ritter, Joseph M.; Burdine, Robert (Technical Monitor)

    2001-01-01

    Replicated electroformed lightweight nickel alloy mirrors can have high strength, low areal density (less than 3 kg/m²), smooth finish, and controllable alloy composition. Progress at NASA MSFC SOMTC in developing normal incidence replicated nickel mirrors will be reported.

  8. Parameter estimation and forecasting for multiplicative log-normal cascades.

    PubMed

    Leövey, Andrés E; Lux, Thomas

    2012-04-01

    We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting of volatility for a sample of financial data from stock and foreign exchange markets.
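
    A minimal simulation sketch of the multiplicative log-normal cascade itself (not the GMM estimator developed in the paper): binary splitting with i.i.d. log-normal weights, where the number of cascade steps and the intermittency-like parameter are chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps = 10    # number of cascade steps (assumed)
lam2 = 0.2      # intermittency-like parameter (assumed)

cells = np.ones(1)
for _ in range(n_steps):
    # each cell splits in two; each child is multiplied by an i.i.d.
    # log-normal weight with unit mean (mean = -lam2/2 in log space)
    w = rng.lognormal(mean=-lam2 / 2, sigma=np.sqrt(lam2), size=2 * cells.size)
    cells = np.repeat(cells, 2) * w

# the resulting measure is intermittent: a few cells carry large bursts
print(f"mean = {cells.mean():.3f}, max/mean = {cells.max() / cells.mean():.1f}")
```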

  9. Nonisotropic turbulence: A turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Liu, Kunlun

    2005-11-01

    The probability density function (PDF) and the two-point correlations of a flat-plate turbulent boundary layer subjected to zero pressure gradient have been calculated by direct numerical simulation. It is known that the strong shear force near the wall deforms the vortices and develops stretched coherent structures, such as streaks and hairpins, which eventually cause the nonisotropy of wall shear flows. The PDF and the two-point correlations of isotropic flows have been studied for a long time. However, our knowledge about the influence of shear force on the PDF and two-point correlations is still very limited. This study investigates that influence by numerical simulation. Results are presented for a case with a Mach number of M = 0.1 and a Reynolds number of 2000, based on displacement thickness. The results indicate that the PDF of the streamwise velocity is log-normal, the PDF of the normal velocity is approximately Cauchy, and the PDF of the spanwise velocity is nearly Gaussian. The means and variances of these PDFs vary with distance from the wall. The two-point correlations are homogeneous in the spanwise direction, vary slightly in the streamwise direction, but change considerably in the normal direction. Rww and Rvv can be represented as elliptic balls, and a well-chosen normalization renders them self-similar.

  10. Domestic wells have high probability of pumping septic tank leachate

    NASA Astrophysics Data System (ADS)

    Bremer, J. E.; Harter, T.

    2012-08-01

    Onsite wastewater treatment systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic (onsite) wastewater treatment system, and many property owners also operate their own domestic well nearby. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. In areas with small lots (thus high spatial septic system densities), shallow domestic wells are prone to contamination by septic system leachate. Mass balance approaches have been used to determine a maximum septic system density that would prevent contamination of groundwater resources. In this study, a source area model based on detailed groundwater flow and transport modeling is applied for a stochastic analysis of domestic well contamination by septic leachate. Specifically, we determine the probability that a source area overlaps with a septic system drainfield as a function of aquifer properties, septic system density and drainfield size. We show that high spatial septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We find that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances that experience limited attenuation, and those that are harmful even at low concentrations (e.g., pathogens).
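
    The following toy Monte Carlo conveys the intersection-probability idea in the abstract under drastically simplified, assumed geometry (axis-aligned rectangles, a fixed upgradient source area); it is not the authors' groundwater flow-and-transport model, and all dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def intersection_probability(density_per_km2, source_len=200.0,
                             source_wid=20.0, field_size=15.0,
                             domain=1000.0, trials=2000):
    """Fraction of trials in which the well's source area overlaps a
    randomly placed septic drainfield (all lengths in metres, assumed)."""
    lam = density_per_km2 * (domain / 1000.0) ** 2
    n_fields = rng.poisson(lam, trials)
    hits = 0
    for n in n_fields:
        if n == 0:
            continue
        x = rng.uniform(0.0, domain, n)          # drainfield centres
        y = rng.uniform(0.0, domain, n)
        # source area: rectangle extending upgradient (+y) from a well at
        # the domain centre; overlap tested via a Minkowski-expanded box
        in_x = np.abs(x - domain / 2) < (source_wid + field_size) / 2
        in_y = (y > domain / 2 - field_size / 2) & \
               (y < domain / 2 + source_len + field_size / 2)
        hits += bool(np.any(in_x & in_y))
    return hits / trials

for d in (10, 40, 160):
    print(f"{d:4d} systems/km^2 -> P(overlap) ~ {intersection_probability(d):.2f}")
```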

  11. Primary Epstein-Barr virus infection and probable parvovirus B19 reactivation resulting in fulminant hepatitis and fulfilling five of eight criteria for hemophagocytic lymphohistiocytosis.

    PubMed

    Karrasch, Matthias; Felber, Jörg; Keller, Peter M; Kletta, Christine; Egerer, Renate; Bohnert, Jürgen; Hermann, Beate; Pfister, Wolfgang; Theis, Bernhard; Petersen, Iver; Stallmach, Andreas; Baier, Michael

    2014-11-01

    A case of primary Epstein-Barr virus (EBV) infection/parvovirus B19 reactivation fulfilling five of eight criteria for hemophagocytic lymphohistiocytosis (HLH) is presented. Despite two coinciding viral infections, massive splenomegaly, and fulminant hepatitis, the patient had a good clinical outcome, probably due to an early onset form of HLH with normal leukocyte count, normal natural killer (NK) cell function, and a lack of hemophagocytosis.

  12. An investigation of student understanding of classical ideas related to quantum mechanics: Potential energy diagrams and spatial probability density

    NASA Astrophysics Data System (ADS)

    Stephanik, Brian Michael

    This dissertation describes the results of two related investigations into introductory student understanding of ideas from classical physics that are key elements of quantum mechanics. One investigation probes the extent to which students are able to interpret and apply potential energy diagrams (i.e., graphs of potential energy versus position). The other probes the extent to which students are able to reason classically about probability and spatial probability density. The results of these investigations revealed significant conceptual and reasoning difficulties that students encounter with these topics. The findings guided the design of instructional materials to address the major problems. Results from post-instructional assessments are presented that illustrate the impact of the curricula on student learning.

  13. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.

  14. Deployment Design of Wireless Sensor Network for Simple Multi-Point Surveillance of a Moving Target

    PubMed Central

    Tsukamoto, Kazuya; Ueda, Hirofumi; Tamura, Hitomi; Kawahara, Kenji; Oie, Yuji

    2009-01-01

    In this paper, we focus on the problem of tracking a moving target in a wireless sensor network (WSN), in which the capability of each sensor is relatively limited, to construct large-scale WSNs at a reasonable cost. We first propose two simple multi-point surveillance schemes for a moving target in a WSN and demonstrate that one of the schemes can achieve high tracking probability with low power consumption. In addition, we examine the relationship between tracking probability and sensor density through simulations, and then derive an approximate expression representing the relationship. As a result, we present guidelines for sensor density, tracking probability, and the number of monitoring sensors that satisfy a variety of application demands. PMID:22412326

  15. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/√2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/√2)/[1+exp(β/√2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
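
    A worked sketch of the closed forms quoted above, for a hypothetical group effect β; only the formulas stated in the abstract are used, and the value of β is illustrative.

```python
import math
from scipy.stats import norm

beta = 0.8  # hypothetical group effect from a cumulative link model

probit_measure = norm.cdf(beta / math.sqrt(2))           # probit link
loglog_measure = math.exp(beta) / (1 + math.exp(beta))   # log-log link
b = beta / math.sqrt(2)
logit_measure = math.exp(b) / (1 + math.exp(b))          # logit link (approx.)

print(f"P(Y1 > Y2): probit {probit_measure:.3f}, "
      f"log-log {loglog_measure:.3f}, logit ~{logit_measure:.3f}")
```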

  16. Investigation of the Fe{sup 3+} centers in perovskite KMgF{sub 3} through a combination of ab initio (density functional theory) and semi-empirical (superposition model) calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emül, Y.; Erbahar, D.

    2015-08-14

    Analyses of the local crystal and electronic structure in the vicinity of Fe³⁺ centers in perovskite KMgF₃ crystal have been carried out in a comprehensive manner. A combination of density functional theory (DFT) and a semi-empirical superposition model (SPM) is used for a complete analysis of all Fe³⁺ centers in this study for the first time. Some quantitative information has been derived from the DFT calculations on both the electronic structure and the local geometry around Fe³⁺ centers. All of the trigonal (K-vacancy case, K-Li substitution case, and normal trigonal Fe³⁺ center case), FeF₅O cluster, and tetragonal (Mg-vacancy and Mg-Li substitution cases) centers have been taken into account based on previously suggested experimental and theoretical inferences. The agreement between the experimental data and the results of both DFT and SPM calculations enables us to identify the most probable structural model for Fe³⁺ centers in KMgF₃.

  17. A robust power spectrum split cancellation-based spectrum sensing method for cognitive radio systems

    NASA Astrophysics Data System (ADS)

    Qi, Pei-Han; Li, Zan; Si, Jiang-Bo; Gao, Rui

    2014-12-01

    Spectrum sensing is an essential component of cognitive radio, and the requirement for real-time spectrum sensing in the presence of missing prior information, channel fading, and noise uncertainty poses a major challenge to classical spectrum sensing algorithms. Based on the stochastic properties of a scalar transformation of the power spectral density (PSD), a novel spectrum sensing algorithm, referred to as the power spectral density split cancellation method (PSC), is proposed in this paper. The PSC uses a scalar value as a test statistic: the ratio of each subband power to the full band power. Besides, by exploiting the asymptotic normality and independence of the Fourier transform, the distribution of the ratio and the mathematical expressions for the probabilities of false alarm and detection in different channel models are derived. Further, the exact closed-form expression of the decision threshold is calculated in accordance with the Neyman-Pearson criterion. Analytical and simulation results show that the PSC is invulnerable to noise uncertainty and can achieve excellent detection performance without prior knowledge in additive white Gaussian noise and flat slow-fading channels. In addition, the PSC benefits from a low computational cost, and can be completed in microseconds.

  18. Vulnerability of reactive skin to electric current perception--a pilot study implicating mast cells and the lymphatic microvasculature.

    PubMed

    Quatresooz, Pascale; Piérard-Franchimont, Claudine; Piérard, Gérald E

    2009-09-01

    Sensitive/reactive skin is regarded as a manifestation of sensory irritation. This susceptibility to various exogenous factors suggests the intervention of some neuropeptides and other neurobiological mediators. Mast cells are among the putatively implicated cells. The present immunohistochemical and morphometric study was performed on two groups of 36 gender- and age-matched subjects complaining or not of reactive skin, as determined by electric current perception. In the mid upper part of the dermis, the numerical density of mast cells and the size of the microvasculature were assessed, distinguishing the blood and lymphatic vessels. Globally, the distributions of the data were broad in reactive skin. This condition was characterized by a prominent increase in both the numerical density of mast cells and the overall size of the lymphatics. By contrast, no difference was found in the size of the cutaneous blood vessels. More precisely, it appeared that a subgroup of people with reactive skin exhibited these changes, contrasting with other individuals whose data remained close to the normal range. Mast cells and lymphatics are probably involved in the process of sensory irritation affecting a subgroup of the population.

  19. Global Patterns of Lightning Properties Derived by OTD and LIS

    NASA Technical Reports Server (NTRS)

    Beirle, Steffen; Koshak, W.; Blakeslee, R.; Wagner, T.

    2014-01-01

    The satellite instruments Optical Transient Detector (OTD) and Lightning Imaging Sensor (LIS) provide unique empirical data about the frequency of lightning flashes around the globe (OTD) and the tropics (LIS), which has been used before to compile a well-received global climatology of flash rate densities. Here we present a statistical analysis of various additional lightning properties derived from OTD/LIS, i.e. the number of so-called "events" and "groups" per flash, as well as the mean flash duration, footprint, and radiance. These normalized quantities, which can be associated with the flash "strength", show consistent spatial patterns; most strikingly, oceanic flashes show higher values than continental flashes for all properties. Over land, regions with high (Eastern US) and low (India) flash strength can be clearly identified. We discuss possible causes and implications of the observed regional differences. Although a direct quantitative interpretation of the investigated flash properties is difficult, the observed spatial patterns provide valuable information for the interpretation and application of climatological flash rates. Due to the systematic regional variations of physical flash characteristics, viewing conditions, and/or measurement sensitivities, parametrisations of lightning NOx based on total flash rate densities alone are probably affected by regional biases.

  20. Density Functional Theory Study of Cyanoetheneselenol: A Molecule of Astrobiological Interest

    NASA Astrophysics Data System (ADS)

    Surajbali, P.; Ramanah, D. Kodi; Rhyman, L.; Alswaidan, I. A.; Fun, H.-K.; Somanah, R.; Ramasami, P.

    2015-12-01

    The interstellar medium has a rich chemistry involving a wide variety of molecules. Of particular interest are molecules linked to prebiotic chemistry, which may hold the key to understanding our origins. On the basis of suggestions that selenium may have been involved in the origin and evolution of life, we have studied the selenium analogue of cyanoethenethiol, namely the novel cyanoetheneselenol. Cyanoetheneselenol exhibits conformational and geometrical isomerism. This theoretical work studies four forms of cyanoetheneselenol in terms of their structural, spectroscopic, and thermodynamic parameters. All computations were performed using the density functional theory method with the B3LYP functional and the Pople basis set, 6-311+G(d,p), for all atoms. The relative stability of the four isomers of cyanoetheneselenol was obtained and interpreted. The infrared spectra were generated and assignment of the normal modes of vibration was performed. Probable regions of detection, proposed on the basis of parameters obtained from this study for the four isomers, include comets, the molecular cloud Sagittarius B2(N), and planetary atmospheres. The molecular and spectroscopic parameters should be useful for future identification of the astrobiological molecule cyanoetheneselenol and the development of the Square Kilometre Array.

  1. Superfluid density and carrier concentration across a superconducting dome: The case of strontium titanate

    NASA Astrophysics Data System (ADS)

    Collignon, Clément; Fauqué, Benoît; Cavanna, Antonella; Gennser, Ulf; Mailly, Dominique; Behnia, Kamran

    2017-12-01

    We present a study of the lower critical field, Hc1, of SrTi1-xNbxO3 as a function of carrier concentration, with the aim of quantifying the superfluid density. At low carrier concentration (i.e., on the underdoped side), the superfluid density and the carrier concentration in the normal state are equal within the experimental margin. A significant deviation between the two numbers starts at optimal doping and gradually increases with doping. The inverse of the penetration depth and the critical temperature follow parallel evolutions, as in the case of cuprate superconductors. In the overdoped regime, the zero-temperature superfluid density becomes much lower than the normal-state carrier density before vanishing altogether. We show that the density mismatch and the clean-to-dirty crossover are concomitant. Our results imply that a discrepancy between normal and superconducting densities is expected whenever the superconducting gap becomes small enough to put the system in the dirty limit. A quantitative test of the dirty BCS theory is not straightforward, due to the multiplicity of the bands in superconducting strontium titanate.

  2. Compositional cokriging for mapping the probability risk of groundwater contamination by nitrates.

    PubMed

    Pardo-Igúzquiza, Eulogio; Chica-Olmo, Mario; Luque-Espinar, Juan A; Rodríguez-Galiano, Víctor

    2015-11-01

    Contamination by nitrates is an important cause of groundwater pollution and represents a potential risk to human health. Management decisions must be made using probability maps that assess the potential of the nitrate concentration exceeding regulatory thresholds. However, these maps are obtained from only a small number of sparse monitoring locations where the nitrate concentrations have been measured. It is therefore of great interest to have an efficient methodology for obtaining those probability maps. In this paper, we make use of the fact that the discrete probability density function is a compositional variable. The spatial discrete probability density function is estimated by compositional cokriging. There are several advantages in using this approach: (i) problems of classical indicator cokriging, like estimates outside the interval (0,1) and order-relation violations, are avoided; (ii) secondary variables (e.g. aquifer parameters) can be included in the estimation of the probability maps; (iii) uncertainty maps of the probability maps can be obtained; (iv) finally, there are modelling advantages, because the variograms and cross-variograms are those of real variables and do not have the restrictions of indicator variograms and indicator cross-variograms. The methodology was applied to the Vega de Granada aquifer in Southern Spain, and the advantages of the compositional cokriging approach were demonstrated. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset

    PubMed Central

    Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former obtains the probabilities of arriving at target locations along simple paths that include only current locations, target locations, and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm have been verified. PMID:27508502
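
    A minimal sketch of the matrix mechanics described above: normalizing single-step rule counts into a row-stochastic transition matrix, then raising it to the power n. The counts and the number of zones are invented for illustration, not taken from the paper's datasets.

```python
import numpy as np

# hypothetical counts of single-step transitions between 3 anonymity zones
counts = np.array([[20.,  5.,  5.],
                   [ 2., 10.,  8.],
                   [ 6.,  4., 10.]])

P = counts / counts.sum(axis=1, keepdims=True)  # row-stochastic matrix
P3 = np.linalg.matrix_power(P, 3)               # 3-step transition matrix

# probability of reaching zone 2 from zone 0 in exactly 3 steps
print("P(zone 0 -> zone 2 in 3 steps) =", round(P3[0, 2], 4))
```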

  4. Usefulness of the novel risk estimation software, Heart Risk View, for the prediction of cardiac events in patients with normal myocardial perfusion SPECT.

    PubMed

    Sakatani, Tomohiko; Shimoo, Satoshi; Takamatsu, Kazuaki; Kyodo, Atsushi; Tsuji, Yumika; Mera, Kayoko; Koide, Masahiro; Isodono, Koji; Tsubakimoto, Yoshinori; Matsuo, Akiko; Inoue, Keiji; Fujita, Hiroshi

    2016-12-01

    Myocardial perfusion single-photon emission computed tomography (SPECT) can predict cardiac events in patients with coronary artery disease with high accuracy; however, pseudo-negative cases sometimes occur. Heart Risk View, which is based on a prospective cohort study (J-ACCESS), is a software package for evaluating cardiac event probability. We examined whether Heart Risk View is useful for evaluating cardiac risk in patients with normal myocardial perfusion SPECT (MPS). We studied 3461 consecutive patients who underwent MPS to detect myocardial ischemia; those who had normal MPS were enrolled in this study (n = 698). We calculated the cardiac event probability with Heart Risk View and followed up for 3.8 ± 2.4 years. Cardiac events were defined as cardiac death, non-fatal myocardial infarction, and heart failure requiring hospitalization. During the follow-up period, 21 patients (3.0 %) had cardiac events. The event probability calculated by Heart Risk View was higher in the event group (5.5 ± 2.6 vs. 2.9 ± 2.6 %, p < 0.001). According to the receiver-operating characteristic curve, the cut-off point of the event probability for predicting cardiac events was 3.4 % (sensitivity 0.76, specificity 0.72, and AUC 0.85). Kaplan-Meier curves revealed a higher event rate in the high-event-probability group by the log-rank test (p < 0.001). Although myocardial perfusion SPECT is useful for the prediction of cardiac events, risk estimation by Heart Risk View adds more prognostic information, especially in patients with normal MPS.

  5. Microdosimetric Analysis Confirms Similar Biological Effectiveness of External Exposure to Gamma-Rays and Internal Exposure to 137Cs, 134Cs, and 131I

    PubMed Central

    Sato, Tatsuhiko; Manabe, Kentaro; Hamada, Nobuyuki

    2014-01-01

    The risk of internal exposure to 137Cs, 134Cs, and 131I is of great public concern after the accident at the Fukushima-Daiichi nuclear power plant. The relative biological effectiveness (RBE, defined herein as the effectiveness of internal exposure relative to external exposure to γ-rays) is occasionally believed to be much greater than unity due to insufficient discussion of the differences in their microdosimetric profiles. We therefore performed a Monte Carlo particle transport simulation in ideally aligned cell systems to calculate the probability densities of absorbed doses on subcellular and intranuclear scales for internal exposures to electrons emitted from 137Cs, 134Cs, and 131I, as well as for external exposure to 662 keV photons. The RBE due to the inhomogeneous radioactive isotope (RI) distribution in subcellular structures and the high ionization density around the particle trajectories was then derived from the calculated microdosimetric probability densities. The RBE for the bystander effect was also estimated from the probability density, considering its non-linear dose response. The RBE due to the high ionization density and that for the bystander effect were very close to 1, because the microdosimetric probability densities were nearly identical between the internal exposures and the external exposure to 662 keV photons. On the other hand, the RBE due to the RI inhomogeneity largely depended on the intranuclear RI concentration and cell size, but the maximum possible RBE was only 1.04 even under conservative assumptions. Thus, it can be concluded from the microdosimetric viewpoint that the risk from internal exposures to 137Cs, 134Cs, and 131I should be nearly equivalent to that of external exposure to γ-rays at the same absorbed dose level, as suggested in the current recommendations of the International Commission on Radiological Protection. PMID:24919099

  6. Are CO Observations of Interstellar Clouds Tracing the H2?

    NASA Astrophysics Data System (ADS)

    Federrath, Christoph; Glover, S. C. O.; Klessen, R. S.; Mac Low, M.

    2010-01-01

    Interstellar clouds are commonly observed through the emission of rotational transitions from carbon monoxide (CO). However, the abundance ratio of CO to molecular hydrogen (H2), which is the most abundant molecule in molecular clouds, is only about 10⁻⁴. This raises the important question of whether the observed CO emission actually traces the bulk of the gas in these clouds, and whether it can be used to derive quantities like the total mass of the cloud, the gas density distribution function, the fractal dimension, and the velocity dispersion-size relation. To evaluate the usability and accuracy of CO as a tracer for H2 gas, we generate synthetic observations of hydrodynamical models that include a detailed chemical network to follow the formation and photo-dissociation of H2 and CO. These three-dimensional models of turbulent interstellar cloud formation self-consistently follow the coupled thermal, dynamical, and chemical evolution of 32 species, with a particular focus on H2 and CO (Glover et al. 2009). We find that CO primarily traces the dense gas in the clouds, however with significant scatter due to turbulent mixing and self-shielding of H2 and CO. The H2 probability distribution function (PDF) is well described by a log-normal distribution. In contrast, the CO column density PDF has a strongly non-Gaussian low-density wing, not at all consistent with a log-normal distribution. Centroid velocity statistics show that CO is more intermittent than H2, leading to an overestimate of the velocity scaling exponent in the velocity dispersion-size relation. With our systematic comparison of H2 and CO data from the numerical models, we hope to provide a statistical formula to correct for the bias of CO observations. CF acknowledges financial support from a Kade Fellowship of the American Museum of Natural History.
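
    As a rough sketch of the log-normality check mentioned above, one can fit a log-normal to a column-density sample and run a goodness-of-fit test. The sample below is synthetic (normalized, with an assumed width), and a KS test with parameters fitted on the same sample is only approximate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# synthetic sample standing in for normalized H2 column densities (assumed)
col_density = rng.lognormal(mean=0.0, sigma=0.5, size=5000)

shape, loc, scale = stats.lognorm.fit(col_density, floc=0)
ks = stats.kstest(col_density, 'lognorm', args=(shape, loc, scale))
# note: a KS test with parameters fitted on the same sample is approximate
print(f"fitted sigma ~ {shape:.3f}, KS p-value ~ {ks.pvalue:.3f}")
```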

  7. Estimating The Probability Of Achieving Shortleaf Pine Regeneration At Variable Specified Levels

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2002-01-01

    A model was developed that can be used to estimate the probability of achieving regeneration at a variety of specified stem density levels. The model was fitted to shortleaf pine (Pinus echinata Mill.) regeneration data, and can be used to estimate the probability of achieving desired levels of regeneration between 300 and 700 stems per acre 9-10...

  8. Properties of Traffic Risk Coefficient

    NASA Astrophysics Data System (ADS)

    Tang, Tie-Qiao; Huang, Hai-Jun; Shang, Hua-Yan; Xue, Yu

    2009-10-01

    We use the model with the consideration of the traffic interruption probability (Physica A 387(2008)6845) to study the relationship between the traffic risk coefficient and the traffic interruption probability. The analytical and numerical results show that the traffic interruption probability will reduce the traffic risk coefficient and that the reduction is related to the density, which shows that this model can improve traffic security.

  9. Probability mass first flush evaluation for combined sewer discharges.

    PubMed

    Park, Inhyeok; Kim, Hongmyeong; Chae, Soo-Kwon; Ha, Sungryong

    2010-01-01

    The Korean government has put a lot of effort into constructing sanitation facilities for controlling non-point source pollution. The first flush phenomenon is a prime example of such pollution. However, to date, several serious problems have arisen in the operation and treatment effectiveness of these facilities due to unsuitable design flow volumes and pollution loads. It is difficult to assess the optimal flow volume and pollution mass when considering both monetary and temporal limitations. The objective of this article was to characterize the discharge of storm runoff pollution from urban catchments in Korea and to estimate the probability of mass first flush (MFFn) using the storm water management model and probability density functions. A review of the gauged storms of the last two years, using a probability density function of rainfall volumes to test representativeness, found all the gauged storms to be valid representative precipitation. Both the observed MFFn and the probability MFFn in BE-1 indicated similarly large magnitudes of first flush, with roughly 40% of the total pollution mass contained in the first 20% of the runoff. In the case of BE-2, however, there was a significant difference between the observed MFFn and the probability MFFn.
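
    The mass-first-flush ratio MFFn itself is straightforward to compute from paired flow and concentration records, as in this sketch: the fraction of total pollutant mass delivered within the first n% of runoff volume, divided by n/100. The hydrograph and pollutograph values below are made up for illustration.

```python
import numpy as np

flow = np.array([1.0, 3.0, 5.0, 4.0, 2.0, 1.0])    # m^3/s, assumed
conc = np.array([200., 150., 80., 40., 20., 10.])  # mg/L, assumed

vol_frac = np.cumsum(flow) / flow.sum()            # cumulative volume fraction
mass_frac = np.cumsum(flow * conc) / (flow * conc).sum()  # cumulative mass

n = 0.20  # first 20% of runoff volume
mass_at_n = np.interp(n, vol_frac, mass_frac)
print(f"MFF20 = {mass_at_n / n:.2f}  (mass fraction {mass_at_n:.2f})")
```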

  10. Comparison of sticking probabilities of metal atoms in magnetron sputtering deposition of CuZnSnS films

    NASA Astrophysics Data System (ADS)

    Sasaki, K.; Kikuchi, S.

    2014-10-01

    In this work, we compared the sticking probabilities of Cu, Zn, and Sn atoms in magnetron sputtering deposition of CZTS films. The evaluations of the sticking probabilities were based on the temporal decays of the Cu, Zn, and Sn densities in the afterglow, which were measured by laser-induced fluorescence spectroscopy. Linear relationships were found between the discharge pressure and the lifetimes of the atom densities. According to Chantry, the sticking probability is evaluated from the extrapolated lifetime at zero pressure, which is given by 2l0(2 - α)/(vα), with α, l0, and v being the sticking probability, the ratio between the volume and the surface area of the chamber, and the mean velocity, respectively. The ratio of the extrapolated lifetimes observed experimentally was τCu : τSn : τZn = 1 : 1.3 : 1. This ratio coincides well with the ratio of the reciprocals of the mean velocities (1/vCu : 1/vSn : 1/vZn = 1.00 : 1.37 : 1.01). Therefore, the present experimental result suggests that the sticking probabilities of Cu, Sn, and Zn are roughly the same.
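
    Chantry's relation quoted above can be inverted for the sticking probability α. The sketch below does so with assumed chamber geometry, gas temperature, and extrapolated lifetime; none of these numbers come from the experiment.

```python
import math

def sticking_probability(tau0, l0, v):
    """Solve tau0 = 2*l0*(2 - alpha)/(v*alpha) for alpha."""
    return 4.0 * l0 / (tau0 * v + 2.0 * l0)

k_B = 1.380649e-23                  # Boltzmann constant (J/K)
m_Cu = 63.55 * 1.66054e-27          # Cu atomic mass (kg)
T = 400.0                           # gas temperature (K), assumed
v = math.sqrt(8 * k_B * T / (math.pi * m_Cu))  # mean speed of Cu atoms

l0 = 0.02    # volume-to-surface ratio of the chamber (m), assumed
tau0 = 2e-4  # extrapolated zero-pressure lifetime (s), assumed

alpha = sticking_probability(tau0, l0, v)
print(f"mean speed ~ {v:.0f} m/s, alpha ~ {alpha:.2f}")
```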

  11. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel

    2014-01-15

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, predicts the PDFs of the scalar in agreement with numerical and experimental results. This model also indicates that the PDFs of the scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.

  12. High-sensitivity terahertz imaging of traumatic brain injury in a rat model

    NASA Astrophysics Data System (ADS)

    Zhao, Hengli; Wang, Yuye; Chen, Linyu; Shi, Jia; Ma, Kang; Tang, Longhuang; Xu, Degang; Yao, Jianquan; Feng, Hua; Chen, Tunan

    2018-03-01

    We demonstrated that different degrees of experimental traumatic brain injury (TBI) can be differentiated clearly in fresh slices of rat brain tissue using a transmission-type terahertz (THz) imaging system. The high-absorption region in the THz images corresponded well with the injured area in visible images and magnetic resonance imaging results. The THz images and absorption characteristics of dehydrated paraffin-embedded brain slices, together with the hematoxylin and eosin (H&E)-stained microscopic images, were investigated to account for the intrinsic differences, beyond water content, between the THz images of brain tissues with different degrees of TBI and those of normal tissue. The THz absorption coefficients of rat brain tissues increased with the aggravation of brain damage, particularly in the high-frequency range, whereas the cell density decreased in the order of mild, moderate, and severe TBI tissues compared with normal tissue. Our results indicate that the different degrees of TBI were distinguishable owing to the different water contents and the probable distribution of hematoma components, rather than intrinsic cell density. These promising results suggest that THz imaging has great potential as an alternative method for the fast diagnosis of TBI.

  13. The effects of nutrition, puberty and dancing on bone density in adolescent ballet dancers.

    PubMed

    Burckhardt, Peter; Wynn, Emma; Krieg, Marc-Antoine; Bagutti, Carlo; Faouzi, Mohamed

    2011-06-01

    Ballet dancers have on average a low bone mineral content (BMC), with elevated fracture risk, low body mass index for age (BMI, kg/m²), low energy intake, and delayed puberty. This study aims at a better understanding of the interactions of these factors, especially with regard to nutrition. During a competition for pre-professional dancers, we examined 127 female participants (60 Asians, 67 Caucasians). They averaged 16.7 years of age, started dancing at 5.8 years, and danced 22 hours/week. Assessments were made of BMI, BMC (DXA), bone mineral apparent density (BMAD) at the lumbar spine and femoral neck, pubertal stage (Tanner score), and nutritional status (EAT-40 questionnaire and a qualitative three-day dietary record). BMI for age was normal in only 42.5% of the dancers, while 15.7% had a more or less severe degree of thinness (12.6% Grade 2 and 3.1% Grade 3 thinness). Menarche was late (13.9 years, range 11 to 16.8 years). Food intake, evaluated by the number of consumed food portions, was below the recommendations for a normally active population in all food groups except animal proteins, where the intake was more than twice the recommended amount. In this population, with low BMI and intense exercise, BMC was low and associated with nutritional factors; dairy products had a positive and non-dairy proteins a negative influence. A positive correlation between BMAD and years since menarche confirmed the importance of exposure to estrogens and the negative impact of delayed puberty. Because of this, and the probable negative influence of a high intake of non-dairy proteins such as meat, fish, and eggs, together with the positive association with a high dairy intake, ballet schools should promote balanced diets and normal weight, and should recognize and help dancers avoid eating disorders and delayed puberty caused by extensive dancing and inadequate nutrition.

  14. Statistical analysis of dislocations and dislocation boundaries from EBSD data.

    PubMed

    Moussa, C; Bernacki, M; Besnard, R; Bozzolo, N

    2017-08-01

    Electron BackScatter Diffraction (EBSD) is often used for semi-quantitative analysis of dislocations in metals. In general, disorientation is used to assess Geometrically Necessary Dislocation (GND) densities. In the present paper, we demonstrate that the use of disorientation can lead to inaccurate results. For example, using disorientation leads to different GND densities in recrystallized grains, which cannot be physically justified. The use of disorientation gradients allows measurement noise to be accounted for and leads to more accurate results. The disorientation gradient is then used to analyze dislocation boundaries, following the same principle previously applied to TEM data. In previous papers, dislocation boundaries were classified as Geometrically Necessary Boundaries (GNBs) and Incidental Dislocation Boundaries (IDBs). It has been demonstrated in the past, through transmission electron microscopy data, that the probability density distribution of the disorientation of IDBs and GNBs can be described with a linear combination of two Rayleigh functions. Such a function can also describe the probability density of the disorientation gradient obtained from EBSD data, as reported in this paper. This opens the route to determining the IDB and GNB probability density distribution functions separately from EBSD data, with increased statistical relevance compared to TEM data. The method is applied to deformed tantalum, where grains exhibit dislocation boundaries, as observed using electron channeling contrast imaging. Copyright © 2017 Elsevier B.V. All rights reserved.
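
    A small sketch of the two-component Rayleigh mixture form used above to describe the disorientation distribution; the mixture weight and the two scale parameters are illustrative placeholders, not fitted values from the paper.

```python
import numpy as np
from scipy.stats import rayleigh

def rayleigh_mixture_pdf(theta, w, scale_idb, scale_gnb):
    """w * Rayleigh(scale_idb) + (1 - w) * Rayleigh(scale_gnb)."""
    return (w * rayleigh.pdf(theta, scale=scale_idb)
            + (1 - w) * rayleigh.pdf(theta, scale=scale_gnb))

theta = np.linspace(0.0, 5.0, 6)   # disorientation (degrees)
pdf = rayleigh_mixture_pdf(theta, w=0.7, scale_idb=0.4, scale_gnb=1.5)
print(np.round(pdf, 4))
```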

  15. Modelling Spatial Dependence Structures Between Climate Variables by Combining Mixture Models with Copula Models

    NASA Astrophysics Data System (ADS)

    Khan, F.; Pilz, J.; Spöck, G.

    2017-12-01

    Spatio-temporal dependence structures play a pivotal role in understanding the meteorological characteristics of a basin or sub-basin. They further affect the hydrological conditions and consequently will yield misleading results if not taken into account properly. In this study we modeled the spatial dependence structure between climate variables, including maximum temperature, minimum temperature, and precipitation, in the Monsoon-dominated region of Pakistan. Six meteorological stations were considered for temperature and four for precipitation. For modelling the dependence structure between temperature and precipitation at multiple sites, we utilized C-Vine, D-Vine, and Student t-copula models. Under the copula models, multivariate mixture normal distributions were used as marginals for temperature and gamma distributions for precipitation. A comparison was made between the C-Vine, D-Vine, and Student t-copula by comparing observed and simulated spatial dependence structures, in order to choose an appropriate model for the climate data. The results show that all copula models performed well; however, there are subtle differences in their performance. The copula models captured the patterns of the spatial dependence structures between the climate variables at multiple meteorological sites; however, the t-copula showed poor performance in reproducing the magnitude of the dependence structure. Important statistics of the observed data were closely approximated, except for the maximum values of maximum temperature and the minimum values of minimum temperature. Probability density functions of the simulated data closely follow those of the observed data for all variables. C-Vines and D-Vines are better tools for modelling the dependence between variables; however, Student t-copulas compete closely for precipitation. Keywords: Copula model, C-Vine, D-Vine, Spatial dependence structure, Monsoon dominated region of Pakistan, Mixture models, EM algorithm.
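
    For the copula construction itself, a simpler Gaussian copula (rather than the vine or t-copulas used in the study) already illustrates the idea: correlated latent normals are pushed to uniform margins and then through gamma marginals for precipitation at two sites. All parameters below are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rho = 0.6                                  # assumed spatial dependence
cov = [[1.0, rho], [rho, 1.0]]

z = rng.multivariate_normal([0.0, 0.0], cov, size=10000)  # latent normals
u = stats.norm.cdf(z)                      # dependent uniform margins

# gamma marginals for precipitation at two sites (assumed shapes/scales)
p1 = stats.gamma.ppf(u[:, 0], a=2.0, scale=5.0)
p2 = stats.gamma.ppf(u[:, 1], a=1.5, scale=8.0)

rho_s, _ = stats.spearmanr(p1, p2)
print(f"rank correlation of simulated precipitation: {rho_s:.3f}")
```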

  16. Stochastic chaos induced by diffusion processes with identical spectral density but different probability density functions.

    PubMed

    Lei, Youming; Zheng, Fan

    2016-12-01

    Stochastic chaos induced by diffusion processes, with identical spectral density but different probability density functions (PDFs), is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely, a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.

  17. Nonparametric Density Estimation Based on Self-Organizing Incremental Neural Network for Large Noisy Data.

    PubMed

    Nakamura, Yoshihiro; Hasegawa, Osamu

    2017-01-01

    With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. Predicting the distribution underlying such data beforehand is difficult, and the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns the given data as networks of data prototypes; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
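
    For reference, the classical KDE baseline that KDESOINN extends can be sketched with scipy's gaussian_kde; this is not the KDESOINN implementation, and the noisy bimodal sample below is synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
# noisy bimodal sample standing in for streaming sensor data (assumed)
data = np.concatenate([rng.normal(-2.0, 0.5, 800),
                       rng.normal(1.5, 1.0, 1200),
                       rng.uniform(-6.0, 6.0, 100)])  # background noise

kde = gaussian_kde(data)            # bandwidth chosen by Scott's rule
grid = np.linspace(-6.0, 6.0, 5)
print(np.round(kde(grid), 4))       # estimated density on a coarse grid
```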

  18. Self-Supervised Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2003-01-01

    Some progress has been made in a continuing effort to develop mathematical models of the behaviors of multi-agent systems known in biology, economics, and sociology (e.g., systems ranging from a single biomolecule or a few biomolecules to many interacting higher organisms). Living systems can be characterized by nonlinear evolution of probability distributions over different possible choices of the next steps in their motions. One of the main challenges in mathematical modeling of living systems is to distinguish between random walks of purely physical origin (for instance, Brownian motions) and those of biological origin. Following a line of reasoning from prior research, it has been assumed, in the present development, that a biological random walk can be represented by a nonlinear mathematical model that represents coupled mental and motor dynamics incorporating the psychological concept of reflection or self-image. The nonlinear dynamics impart the lifelike ability to behave in ways and to exhibit patterns that depart from thermodynamic equilibrium. Reflection or self-image has traditionally been recognized as a basic element of intelligence. The nonlinear mathematical models of the present development are denoted self-supervised dynamical systems. They include (1) equations of classical dynamics, including random components caused by uncertainties in initial conditions and by Langevin forces, coupled with (2) the corresponding Liouville or Fokker-Planck equations that describe the evolutions of probability densities that represent the uncertainties. The coupling is effected by fictitious information-based forces, denoted supervising forces, composed of probability densities and functionals thereof. The equations of classical mechanics represent motor dynamics, that is, dynamics in the traditional sense, signifying Newton's equations of motion. The evolution of the probability densities represents mental dynamics or self-image. Then the interaction between the physical and mental aspects of a monad is implemented by feedback from mental to motor dynamics, as represented by the aforementioned fictitious forces. This feedback is what makes the evolution of probability densities nonlinear. The deviation from linear evolution can be characterized, in a sense, as an expression of free will. It has been demonstrated that probability densities can approach prescribed attractors while exhibiting such patterns as shock waves, solitons, and chaos in probability space. The concept of self-supervised dynamical systems has been considered for application to diverse phenomena, including information-based neural networks, cooperation, competition, deception, games, and control of chaos. In addition, a formal similarity between the mathematical structures of self-supervised dynamical systems and of quantum-mechanical systems has been investigated.
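
    A heavily simplified toy sketch in the spirit of the description above: an ensemble of Langevin particles whose drift includes a fictitious force built from the ensemble's own estimated density, making the density evolution nonlinear. The dynamics, the density estimator, and every parameter here are illustrative inventions, not the author's actual model.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
x = rng.normal(0.0, 1.0, 500)   # ensemble of particle positions
dt, steps = 0.01, 100
target = 2.0                    # assumed attractor for the ensemble

for _ in range(steps):
    kde = gaussian_kde(x)       # "self-image": the ensemble's own density
    # finite-difference gradient of the estimated density at each particle
    grad = (kde(x + 1e-3) - kde(x - 1e-3)) / 2e-3
    # motor dynamics plus an information-based force built from the density
    drift = -(x - target) - 0.5 * grad / np.maximum(kde(x), 1e-6)
    x = x + drift * dt + np.sqrt(2 * 0.1 * dt) * rng.standard_normal(x.size)

print("ensemble mean ->", round(float(x.mean()), 2))
```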

  19. A brief introduction to probability.

    PubMed

    Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio

    2018-02-01

    The theory of probability has been debated for centuries: back in 1600, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", which is a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution applied in statistical analysis.
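
    A small worked example of the normal-distribution probabilities the paper emphasizes; the mean and standard deviation are arbitrary illustrative values.

```python
from scipy.stats import norm

mu, sigma = 100.0, 15.0  # assumed mean and standard deviation

# probability of an observation falling within one SD of the mean
p_1sd = norm.cdf(mu + sigma, mu, sigma) - norm.cdf(mu - sigma, mu, sigma)
# probability of exceeding a threshold of 130 (survival function)
p_tail = norm.sf(130, mu, sigma)

print(f"P(|X - mu| < sigma) = {p_1sd:.3f}, P(X > 130) = {p_tail:.4f}")
```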

  20. Precision Composite Space Structures

    DTIC Science & Technology

    2007-10-15

    large structures. SUBJECT TERMS: Composite materials, dimensional stability, microcracking, thermal expansion, space structures, degradation... Figure 32: Variation of normalized coefficients of thermal expansion α11(n), α22(n), and α33(n) with normalized crack density of an AS4/3501-6 composite lamina with a fiber volume...

  1. Direct numerical simulation of flow over dissimilar, randomly distributed roughness elements: A systematic study on the effect of surface morphology on turbulence

    NASA Astrophysics Data System (ADS)

    Forooghi, Pourya; Stroh, Alexander; Schlatter, Philipp; Frohnapfel, Bettina

    2018-04-01

    Direct numerical simulations are used to investigate turbulent flow in rough channels, in which topographical parameters of the rough wall are systematically varied at a fixed friction Reynolds number of 500, based on a mean channel half-height h and friction velocity. The utilized roughness generation approach allows independent variation of moments of the surface height probability distribution function [thus root-mean-square (rms) surface height, skewness, and kurtosis], surface mean slope, and standard deviation of the roughness peak sizes. Particular attention is paid to the effect of the parameter Δ defined as the normalized height difference between the highest and lowest roughness peaks. This parameter is used to understand the trends of the investigated flow variables with departure from the idealized case where all roughness elements have the same height (Δ =0 ). All calculations are done in the fully rough regime and for surfaces with high slope (effective slope equal to 0.6-0.9). The rms roughness height is fixed for all cases at 0.045 h and the skewness and kurtosis of the surface height probability density function vary in the ranges -0.33 to 0.67 and 1.9 to 2.6, respectively. The goal of the paper is twofold: first, to investigate the possible effect of topographical parameters on the mean turbulent flow, Reynolds, and dispersive stresses particularly in the vicinity of the roughness crest, and second, to investigate the possibility of using the wall-normal turbulence intensity as a physical parameter for parametrization of the flow. Such a possibility, already suggested for regular roughness in the literature, is here extended to irregular roughness.
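
    The surface-height statistics that the roughness generator controls (rms height, skewness, kurtosis) are easy to compute for any height field. The field below is simple filtered noise standing in for, but not reproducing, the authors' generation approach; the grid size and filter width are arbitrary.

```python
import numpy as np
from scipy import stats, ndimage

rng = np.random.default_rng(5)
# smooth random height field (filtered white noise), an assumed stand-in
h_field = ndimage.gaussian_filter(rng.standard_normal((256, 256)), sigma=4)
h_field *= 0.045 / h_field.std()   # rescale rms height to 0.045 h

heights = h_field.ravel()
print(f"rms = {heights.std():.4f}, skewness = {stats.skew(heights):.3f}, "
      f"kurtosis = {stats.kurtosis(heights, fisher=False):.3f}")
```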

  2. Clinicians' perceptions of the value of ventilation-perfusion scans.

    PubMed

    Siegel, Alan; Holtzman, Stephen R; Bettmann, Michael A; Black, William C

    2004-07-01

    The goal of this investigation was to understand clinicians' perceptions of the probability of pulmonary embolism (PE) as a function of V/Q scan results of normal, low, intermediate, and high probability. A questionnaire was developed and distributed to 429 clinicians at a single academic medical center. The response rate was 44% (188 of 429). The questions covered level of training, specialty, the probability of PE given each of the 4 V/Q scan results, and estimates of the charges for V/Q scanning and pulmonary angiography and of the risks of pulmonary angiography. The medians and ranges for the probability of pulmonary embolism given a normal, low, intermediate, and high probability V/Q scan result were 2.5% (0-30), 12.5% (0.5-52.5), 41.25% (5-75), and 85% (5-100), respectively. Eleven percent (21 of 188) of the respondents listed the probability of PE in patients with a low probability V/Q scan as 5% or less, and 33% (62 of 188) listed the probability of PE given an intermediate probability scan as 50% or greater. The majority correctly identified the rate of serious complications of pulmonary angiography, but many respondents underestimated the charges for V/Q scans and pulmonary angiography. A substantial minority of clinicians do not understand the probability of pulmonary embolism in patients with low and intermediate probability ventilation-perfusion scans. More quantitative reporting of results is recommended. This could be particularly important because V/Q scans are used less frequently but are still needed in certain clinical situations.

  3. A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves

    NASA Astrophysics Data System (ADS)

    Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang

    2018-03-01

    The PDFs (probability density functions) and probability of a ship rolling under random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gaussian stationary random processes, and a CARMA(2,1) model was used to fit the spectral density function of the parametric and forced excitations. The stochastic energy envelope averaging method was used to solve for the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and heading angle. In order to provide proper advice for the ship's manoeuvring, parametric excitations should be considered appropriately when the ship navigates in oblique seas.
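
    The Monte Carlo verification mentioned above can be illustrated with a minimal single-degree-of-freedom roll model (all coefficients here are invented placeholders, not the C11 ship's fitted values): integrate the roll equation under white-noise forcing for an ensemble and estimate the stationary PDF of the roll angle from a histogram:

      import numpy as np

      rng = np.random.default_rng(2)
      n, steps, dt = 2000, 10000, 0.01
      omega0, beta, c3, sigma = 0.6, 0.05, 0.1, 0.05   # invented coefficients

      phi = np.zeros(n)                        # roll angle of each realization
      dphi = np.zeros(n)                       # roll rate
      for _ in range(steps):
          restoring = omega0**2 * (phi - c3 * phi**3)  # fitted righting-arm curve
          acc = -2.0 * beta * dphi - restoring
          dphi += acc * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
          phi += dphi * dt

      pdf, edges = np.histogram(phi, bins=60, density=True)  # estimated roll-angle PDF
      print("roll angle std = %.3f rad" % phi.std())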

  4. Tuning sensitivity of CAR to EGFR density limits recognition of normal tissue while maintaining potent anti-tumor activity

    PubMed Central

    Caruso, Hillary G.; Hurton, Lenka V.; Najjar, Amer; Rushworth, David; Ang, Sonny; Olivares, Simon; Mi, Tiejuan; Switzer, Kirsten; Singh, Harjeet; Huls, Helen; Lee, Dean A.; Heimberger, Amy B.; Champlin, Richard E.; Cooper, Laurence J. N.

    2015-01-01

    Many tumors overexpress tumor-associated antigens, such as epidermal growth factor receptor (EGFR), relative to normal tissue. This limits targeting by human T cells modified to express chimeric antigen receptors (CARs), owing to the potential for deleterious recognition of normal cells. We sought to generate CAR+ T cells capable of distinguishing malignant from normal cells based on the disparate density of EGFR expression by generating two CARs from monoclonal antibodies that differ in affinity. T cells with the low-affinity Nimo-CAR selectively targeted cells overexpressing EGFR, but exhibited diminished effector function as the density of EGFR decreased. In contrast, the activation of T cells bearing the high-affinity Cetux-CAR was not impacted by the density of EGFR. In summary, we describe the generation of CARs able to tune T-cell activity to the level of EGFR expression, in which a CAR with reduced affinity enabled T cells to distinguish malignant from non-malignant cells. PMID:26330164

  5. Probabilities of Dilating Vesicoureteral Reflux in Children with First Time Simple Febrile Urinary Tract Infection, and Normal Renal and Bladder Ultrasound.

    PubMed

    Rianthavorn, Pornpimol; Tangngamsakul, Onjira

    2016-11-01

    We evaluated risk factors and assessed predicted probabilities for grade III or higher vesicoureteral reflux (dilating reflux) in children with a first simple febrile urinary tract infection and normal renal and bladder ultrasound. Data for 167 children 2 to 72 months old with a first febrile urinary tract infection and normal ultrasound were compared between those who had dilating vesicoureteral reflux (12 patients, 7.2%) and those who did not. Exclusion criteria consisted of history of prenatal hydronephrosis or familial reflux and complicated urinary tract infection. A logistic regression model was used to identify independent variables associated with dilating reflux, and predicted probabilities for dilating reflux were assessed. Patient age and the prevalence of non-Escherichia coli bacteria were greater in children who had dilating reflux than in those who did not (p = 0.02 and p = 0.004, respectively). Gender distribution was similar between the 2 groups (p = 0.08). In multivariate analysis, older age and non-E. coli bacteria independently predicted dilating reflux, with odds ratios of 1.04 (95% CI 1.01-1.07, p = 0.02) and 3.76 (95% CI 1.05-13.39, p = 0.04), respectively. The impact of non-E. coli bacteria on the predicted probability of dilating reflux increased with patient age. We support the concept of selective voiding cystourethrogram in children with a first simple febrile urinary tract infection and normal ultrasound. Voiding cystourethrogram should be considered in children with late onset urinary tract infection due to non-E. coli bacteria, since they are at risk for dilating reflux even if the ultrasound is normal.
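
    To make the reported odds ratios concrete, the sketch below reconstructs predicted probabilities from a logistic model; only the two odds ratios come from the paper, while the intercept is an assumed value chosen for illustration:

      import numpy as np

      b_age = np.log(1.04)       # odds ratio per month of age (from the paper)
      b_non = np.log(3.76)       # odds ratio for non-E. coli bacteria (from the paper)
      b0 = -4.2                  # assumed intercept, illustration only

      def p_dilating_reflux(age_months, non_ecoli):
          logit = b0 + b_age * age_months + b_non * non_ecoli
          return 1.0 / (1.0 + np.exp(-logit))

      # Late-onset non-E. coli infection versus an early E. coli infection:
      print(f"{p_dilating_reflux(60, 1):.1%} vs {p_dilating_reflux(6, 0):.1%}")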

  6. Quantum mechanical probability current as electromagnetic 4-current from topological EM fields

    NASA Astrophysics Data System (ADS)

    van der Mark, Martin B.

    2015-09-01

    Starting from a complex 4-potential A = αdβ, we show that the 4-current density in electromagnetism and the probability current density in relativistic quantum mechanics are of identical form. With the Dirac-Clifford algebra Cl1,3 as the mathematical basis, the given 4-potential allows topological solutions of the fields, quite similar to Bateman's construction, but with a double-field solution that was previously overlooked. A more general null-vector condition is found, and wave functions of charged and neutral particles appear as topological configurations of the electromagnetic fields.

  7. First-passage problems: A probabilistic dynamic analysis for degraded structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1990-01-01

    Structures with uncertain system parameters, degraded by their surrounding environments (a random time history) and subjected to random excitations, are studied. Methods are developed to determine the statistics of dynamic responses, such as the time-varying mean, the standard deviation, the autocorrelation functions, and the joint probability density function of any response and its derivative. Moreover, first-passage problems with deterministic and stationary/evolutionary random barriers are evaluated. The time-varying (joint) mean crossing rate and the probability density function of the first-passage time for various random barriers are derived.

  8. Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field

    PubMed Central

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097

  9. Estimating abundance of mountain lions from unstructured spatial sampling

    USGS Publications Warehouse

    Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.

    2012-01-01

    Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods for estimating the abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate the density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection as well as hair samples, resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 individual of unknown sex). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance-only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance × sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.
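
    The heart of such spatially explicit capture–recapture models is a detection function that decays with the distance between an animal's activity center and a sampling location. A minimal sketch with a commonly used half-normal form and invented parameter values (not the fitted Montana estimates):

      import numpy as np

      g0, sigma_km = 0.05, 3.0   # invented baseline detection and spatial scale

      def p_detect(dist_km):
          """Half-normal detection probability at a given distance."""
          return g0 * np.exp(-dist_km**2 / (2.0 * sigma_km**2))

      for d in (0.0, 2.0, 5.0, 10.0):
          print(f"distance {d:4.1f} km -> detection probability {p_detect(d):.4f}")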

  10. Probability density function learning by unsupervised neurons.

    PubMed

    Fiori, S

    2001-10-01

    In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for this structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.
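
    For context, one of the classical density-estimation baselines such a neuron can be compared against is kernel density estimation; a minimal sketch on samples from an asymmetric density:

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(3)
      data = rng.standard_normal(500) ** 2     # samples from an asymmetric density
      kde = gaussian_kde(data)                 # classical kernel density estimate
      grid = np.linspace(0.0, 5.0, 6)
      print(np.round(kde(grid), 3))            # estimated density on a coarse grid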

  11. Variation of the fractal dimension anisotropy of two major Cenozoic normal fault systems over space and time around the Snake River Plain, Idaho and SW Montana

    NASA Astrophysics Data System (ADS)

    Davarpanah, A.; Babaie, H. A.

    2012-12-01

    The interaction of the thermally induced stress field of the Yellowstone hotspot (YHS) with existing Basin and Range (BR) fault blocks, over the past 17 m.y., has produced a new, spatially and temporally variable system of normal faults around the Snake River Plain (SRP) in Idaho and the Wyoming-Montana area. Data on the traces of these new cross faults (CF) and of older BR normal faults were acquired from a combination of satellite imagery, DEMs, and USGS geological maps and databases at scales of 1:24,000, 1:100,000, 1:250,000, 1:1,000,000, and 1:2,500,000, and classified based on their azimuth in ArcGIS 10. The box-counting fractal dimension (Db) of the BR fault traces, determined using the Benoit software, and the anisotropy intensity (ellipticity) of the fractal dimension, measured with the modified Cantor dust method using the AMOCADO software, were evaluated in two large spatial domains (I and II). The Db and anisotropy of the cross faults were studied in five temporal domains (T1-T5) classified based on the geologic age of successive eruptive centers (12 Ma to recent) of the YHS along the eastern SRP. The fractal anisotropy of the CF system in each temporal domain was also spatially determined in the southern part (domain S1), central part (domain S2), and northern part (domain S3) of the SRP. Line (fault trace) density maps for the BR and CF polylines reveal a higher linear density (trace length per unit area) for the BR traces in spatial domain I, and a higher linear density of the CF traces around the present Yellowstone National Park (S1T5), where most of the seismically active faults are located. Our spatio-temporal analysis reveals that the fractal dimension of the BR system in domain I (Db=1.423) is greater than that in domain II (Db=1.307). It also shows that the anisotropy of the fractal dimension in domain I is less eccentric (axial ratio: 1.242) than that in domain II (1.355), probably reflecting the greater variation in the trend of the BR system in domain I. The CF system in the S1T5 domain has the highest fractal dimension (Db=1.37) and the lowest anisotropy eccentricity (1.23) among the five temporal domains. These values positively correlate with the observed maxima on the fault trace density maps. The major axis of the anisotropy ellipses is consistently perpendicular to the average trend of the normal fault system in each domain, and therefore approximates the orientation of extension for normal faulting in each domain. This gives NE-SW and NW-SE extension directions for the BR system in domains I and II, respectively. The observed NE-SW orientation of the major axes of the anisotropy ellipses in the youngest T4 and T5 temporal domains, oriented perpendicular to the mean trend of the normal faults in these domains, suggests extension along the NE-SW direction for cross faulting in these areas. The spatial trajectories (form lines) of the minor axes of the anisotropy ellipses, and the mean trend of fault traces in the T4 and T5 temporal domains, define a large parabolic pattern about the axis of the eastern SRP, with its apex at the Yellowstone plateau.
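
    The box-counting estimate of Db used here follows a standard recipe: cover a binary raster of the fault traces with boxes of decreasing size and regress log N(s) on log s. A minimal sketch (the random raster below is a synthetic stand-in for digitized traces):

      import numpy as np

      def box_count(img, size):
          """Number of size-by-size boxes containing at least one trace pixel."""
          s = img.shape[0] // size * size
          blocks = img[:s, :s].reshape(s // size, size, s // size, size)
          return np.count_nonzero(blocks.any(axis=(1, 3)))

      rng = np.random.default_rng(4)
      img = rng.random((512, 512)) < 0.02      # synthetic binary trace map

      sizes = np.array([2, 4, 8, 16, 32, 64])
      counts = np.array([box_count(img, s) for s in sizes])
      Db = -np.polyfit(np.log(sizes), np.log(counts), 1)[0]   # slope of log N vs log s
      print(f"box-counting dimension Db = {Db:.3f}")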

  12. Technology-Enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    ERIC Educational Resources Information Center

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that…
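
    The bivariate-normal machinery behind such questions is compact: the conditional distribution of one component given the other is again normal. A short sketch with invented parameter values for the height-weight example:

      from scipy.stats import norm

      # Invented bivariate-normal parameters for (height, weight).
      mu_h, mu_w, sd_h, sd_w, rho = 65.0, 130.0, 3.0, 15.0, 0.6

      h = 68.0                                          # conditioning height
      mu_c = mu_w + rho * sd_w / sd_h * (h - mu_h)      # conditional mean
      sd_c = sd_w * (1.0 - rho**2) ** 0.5               # conditional std deviation

      p = norm.cdf(140, mu_c, sd_c) - norm.cdf(120, mu_c, sd_c)
      print(f"P(120 < W < 140 | H = {h}) = {p:.3f}")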

  13. Random function representation of stationary stochastic vector processes for probability density evolution analysis of wind-induced structures

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui

    2018-06-01

    This paper develops a hybrid approach of spectral representation and random functions for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set for the elementary random variables. The probability information of the stochastic excitations can be fully captured with just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach enables dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulent wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies of the probability density evolution analysis of the wind-induced structure are conducted to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
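
    For reference, the original spectral representation the paper starts from synthesizes a zero-mean stationary process from a target one-sided spectrum using N independent random phases; the paper's contribution is to constrain these high-dimensional random variables through random functions of two elementary variables. A minimal sketch of the OSR step itself (the spectrum below is a placeholder, not the paper's wind model):

      import numpy as np

      rng = np.random.default_rng(5)
      N, dw = 512, 0.02
      w = (np.arange(N) + 0.5) * dw
      S = 1.0 / (1.0 + (w / 0.5) ** (5.0 / 3.0))   # placeholder one-sided spectrum

      t = np.linspace(0.0, 600.0, 4096)
      phases = rng.uniform(0.0, 2.0 * np.pi, N)    # the N random variables of the OSR
      amps = np.sqrt(2.0 * S * dw)
      x = (amps[:, None] * np.cos(np.outer(w, t) + phases[:, None])).sum(axis=0)

      print("sample variance %.3f vs target %.3f" % (x.var(), (S * dw).sum()))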

  14. Causal illusions in children when the outcome is frequent

    PubMed Central

    2017-01-01

    Causal illusions occur when people perceive a causal relation between two events that are actually unrelated. One factor that has been shown to promote these mistaken beliefs is the outcome probability. Thus, people tend to overestimate the strength of a causal relation when the potential consequence (i.e., the outcome) occurs with high probability (the outcome-density bias). Given that children and adults differ in several important features involved in causal judgment, including prior knowledge and basic cognitive skills, developmental studies can be considered an outstanding approach to detect and further explore the psychological processes and mechanisms underlying this bias. However, the outcome-density bias has been explored mainly in adulthood, and no previous evidence for this bias has been reported in children. Thus, the purpose of this study was to extend outcome-density bias research to childhood. In two experiments, children between 6 and 8 years old were exposed to two similar setups, both showing a non-contingent relation between the potential cause and the outcome. These two scenarios differed only in the probability of the outcome, which could be either high or low. Children judged the relation between the two events to be stronger in the high-probability setting, revealing that, like adults, they develop causal illusions when the outcome is frequent. PMID:28898294

  15. Twin density of aragonite in molluscan shells characterized using X-ray diffraction and transmission electron microscopy

    NASA Astrophysics Data System (ADS)

    Kogure, Toshihiro; Suzuki, Michio; Kim, Hyejin; Mukai, Hiroki; Checa, Antonio G.; Sasaki, Takenori; Nagasawa, Hiromichi

    2014-07-01

    {110} twin density in aragonites constituting various microstructures of molluscan shells has been characterized using X-ray diffraction (XRD) and transmission electron microscopy (TEM), to find the factors that determine the density in the shells. Several aragonite crystals of geological origin were also investigated for comparison. The twin density is strongly dependent on the microstructures and species of the shells. The nacreous structure has a very low twin density regardless of the shell classes. On the other hand, the twin density in the crossed-lamellar (CL) structure shows large variation among classes or subclasses, which is mainly related to the crystallographic direction of the constituent aragonite fibers. TEM observation suggests two types of twin structures in aragonite crystals with dense {110} twins: rather regulated polysynthetic twins with parallel twin planes, and unregulated polycyclic ones with two or three directions of the twin planes. The former is probably characteristic of the CL structures of specific subclasses of Gastropoda. The latter type is probably related to crystal boundaries dominated by (hk0) interfaces in microstructures with preferred orientation of the c-axis, and its twin density is mainly correlated with the crystal size in the microstructures.

  16. Concrete density estimation by rebound hammer method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ismail, Mohamad Pauzi bin, E-mail: pauzi@nm.gov.my; Masenwat, Noor Azreen bin; Sani, Suhairy bin

    Concrete is the most common and cheapest material for radiation shielding. Compressive strength is the main parameter checked to determine concrete quality. However, for shielding purposes, density is the parameter that needs to be considered. X-rays and gamma rays are effectively absorbed by a material with high atomic number and high density, such as concrete. High strength normally implies higher density in concrete, but this is not always true. This paper explains and discusses the correlation between rebound hammer testing and density for concrete containing hematite aggregates. A comparison is also made with normal concrete, i.e., concrete containing crushed granite.

  17. A probability space for quantum models

    NASA Astrophysics Data System (ADS)

    Lemmens, L. F.

    2017-06-01

    A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
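
    A minimal sketch of the maximum entropy assignment described above (illustrative energy levels and mean-energy constraint): maximizing entropy subject to a fixed mean energy yields the Maxwell-Boltzmann form, with the multiplier found by a one-dimensional root solve:

      import numpy as np
      from scipy.optimize import brentq

      E = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # illustrative energy levels
      E_target = 1.2                             # mean-energy constraint

      def mean_energy(lam):
          p = np.exp(-lam * E)
          p /= p.sum()
          return (p * E).sum()

      lam = brentq(lambda l: mean_energy(l) - E_target, -10.0, 10.0)
      p = np.exp(-lam * E); p /= p.sum()         # maximum-entropy probabilities
      print("lambda = %.3f, p =" % lam, np.round(p, 3))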

  18. Flow and Transport in Highly Heterogeneous Porous Formations: Numerical Experiments Performed Using the Analytic Element Method

    NASA Astrophysics Data System (ADS)

    Jankovic, I.

    2002-05-01

    Flow and transport in porous formations are analyzed using numerical simulations. Hydraulic conductivity is treated as a spatial random function characterized by a probability density function and a two-point covariance function. Simulations are performed for a multi-indicator conductivity structure developed by Gedeon Dagan (personal communication). This conductivity structure contains inhomogeneities (inclusions) of elliptical and ellipsoidal geometry that are embedded in a homogeneous background. By varying the distribution of sizes and conductivities of the inclusions, any probability density function and two-point covariance may be reproduced. The multi-indicator structure is selected since it yields simple approximate transport solutions (Aldo Fiori, personal communication) and accurate numerical solutions (based on the Analytic Element Method). The dispersion is examined for two conceptual models, both based on the multi-indicator conductivity structure. The first model is designed to examine dispersion in aquifers with continuously varying conductivity; the inclusions in this model cover as much area/volume of the porous formation as possible. The second model is designed for aquifers that contain clay/sand/gravel lenses embedded in an otherwise homogeneous background. The dispersion in both aquifer types is simulated numerically, and simulation results are compared to those obtained using simple approximate solutions. In order to infer transport statistics representative of an infinite domain from the numerical experiments, the inclusions were placed in a domain shaped as a large ellipse (2D) or a large spheroid (3D), submerged in an unbounded homogeneous medium. On a large scale, the large body of inclusions behaves like a single large inhomogeneity. The analytic solution for a uniform flow past a single inhomogeneity of such geometry yields uniform velocity inside the domain. This velocity differs from that at infinity and can be used to infer the effective conductivity of the medium. As many as 100,000 inhomogeneities were placed inside the domain for 2D simulations; simulations in 3D were limited to 50,000 inclusions. A large number of simulations was conducted on a massively parallel supercomputer cluster at the Center for Computational Research, University at Buffalo. Simulations range from mildly heterogeneous formations to highly heterogeneous formations (variance of the logarithm of conductivity equal to 10) and from sparsely populated systems to systems where inhomogeneities cover 95% of the volume. Particles are released and tracked inside the core of constant mean velocity. Following the particle tracking, various medium, flow, and transport statistics are computed. These include spatial moments of particle positions; the probability density function of hydraulic conductivity and of each component of velocity; their two-point covariance functions in the direction of flow and normal to it; the covariance of Lagrangean velocities; and the probability density function of travel times to various break-through locations. Following the analytic nature of the flow solution, all results are presented in dimensionless form. For example, the dispersion coefficients are made dimensionless with respect to the mean velocity and the size of inhomogeneities. Detailed results will be presented and compared to well-known first-order results and to results based on the simple approximate transport solutions of Aldo Fiori.
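
    The transport statistics mentioned above follow from the tracked particle positions; for instance, an apparent longitudinal dispersion coefficient is half the growth rate of the second central spatial moment. A sketch with synthetic advective-diffusive trajectories in place of the simulated ones:

      import numpy as np

      rng = np.random.default_rng(6)
      n, steps, dt = 10000, 200, 1.0
      U, D_true = 1.0, 0.5                       # placeholder mean velocity and dispersion

      x = np.zeros(n)
      var = np.empty(steps)
      for k in range(steps):
          x += U * dt + np.sqrt(2.0 * D_true * dt) * rng.normal(size=n)
          var[k] = x.var()                       # second central spatial moment

      t = dt * np.arange(1, steps + 1)
      D_est = 0.5 * np.polyfit(t, var, 1)[0]     # half the slope of var(t)
      print(f"estimated D = {D_est:.3f} (target {D_true})")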

  19. Drought forecasting in Luanhe River basin involving climatic indices

    NASA Astrophysics Data System (ADS)

    Ren, Weinan; Wang, Yixuan; Li, Jianzhu; Feng, Ping; Smith, Ronald J.

    2017-11-01

    Drought is regarded as one of the most severe natural disasters globally. This is especially the case in Tianjin City, Northern China, where drought can affect economic development and people's livelihoods. Drought forecasting, the basis of drought management, is an important mitigation strategy. In this paper, we develop a probabilistic forecasting model, which forecasts transition probabilities from a current Standardized Precipitation Index (SPI) value to a future SPI class, based on the conditional distribution of a multivariate normal distribution, so that two large-scale climatic indices can be incorporated at the same time. We apply the forecasting model to 26 rain gauges in the Luanhe River basin in North China. The establishment of the model and the derivation of the SPI rest on the hypothesis that aggregated monthly precipitation is normally distributed. Pearson correlation and Shapiro-Wilk normality tests are used to select an appropriate SPI time scale and large-scale climatic indices. Findings indicated that longer-term aggregated monthly precipitation, in general, was more likely to be considered normally distributed, and that forecasting models should be applied to each gauge individually rather than to the whole basin. Taking Liying Gauge as an example, we illustrate the impact of the SPI time scale and lead time on transition probabilities. The controlling climatic indices for every gauge are then selected by the Pearson correlation test, and the multivariate normality of the current SPI, the corresponding climatic indices, and the SPI 1, 2, and 3 months later is checked using the Shapiro-Wilk normality test. Subsequently, we illustrate the impact of large-scale oceanic-atmospheric circulation patterns on transition probabilities. Finally, we use a score method to evaluate and compare the performance of the three forecasting models and compare them with two traditional models that forecast transition probabilities from a current to a future SPI class. The results show that the three proposed models outperform the two traditional models and that involving large-scale climatic indices can improve forecasting accuracy.
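
    The conditional-distribution step of such a model can be sketched as follows: with the future SPI, the current SPI, and two climatic indices treated as jointly standard normal, the future SPI given the observed predictors is univariate normal, and class transition probabilities follow from its CDF. The correlation matrix and observations below are invented for illustration:

      import numpy as np
      from scipy.stats import norm

      # Invented correlation matrix for (SPI_future, SPI_now, index1, index2).
      R = np.array([[1.0, 0.6, 0.3, 0.2],
                    [0.6, 1.0, 0.1, 0.0],
                    [0.3, 0.1, 1.0, 0.2],
                    [0.2, 0.0, 0.2, 1.0]])
      obs = np.array([-0.8, 1.1, -0.4])          # current SPI and the two indices

      S12, S22 = R[0, 1:], R[1:, 1:]
      w = np.linalg.solve(S22, S12)
      m_c = w @ obs                              # conditional mean of future SPI
      s_c = np.sqrt(1.0 - w @ S12)               # conditional standard deviation

      # Transition probability into the moderate-drought class, -1.5 <= SPI < -1.0:
      p = norm.cdf(-1.0, m_c, s_c) - norm.cdf(-1.5, m_c, s_c)
      print(f"P(moderate drought) = {p:.3f}")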

  20. Design and simulation of stratified probability digital receiver with application to the multipath communication

    NASA Technical Reports Server (NTRS)

    Deal, J. H.

    1975-01-01

    One approach to simplifying complex nonlinear filtering algorithms is to use stratified probability approximations, in which the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in this paper and used to simplify the filtering algorithms derived for the optimum receiver for signals corrupted by both additive and multiplicative noise.
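
    The stratified idea can be shown in miniature (illustrative values only): replace a continuous density by a few point masses, here the midpoints of equal-probability strata of a standard Gaussian, and approximate expectations by weighted sums:

      import numpy as np
      from scipy.stats import norm

      m = 8                                      # number of mass points
      q = (np.arange(m) + 0.5) / m               # stratum midpoint probabilities
      points = norm.ppf(q)                       # discrete support of the approximation
      weights = np.full(m, 1.0 / m)              # equal point masses

      # Approximate E[x^2] for a standard normal (exact value 1):
      print("discrete-mass E[x^2] =", round(float((weights * points**2).sum()), 3))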
