Sample records for weighted probability density

  1. SU-G-JeP2-02: A Unifying Multi-Atlas Approach to Electron Density Mapping Using Multi-Parametric MRI for Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, S; Tianjin University, Tianjin; Hara, W

    Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem still remains, which is the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1- and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as the test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection was 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
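
    The estimation step described here lends itself to a small numerical illustration. The sketch below is not the authors' implementation: it simply assumes Gaussian forms for the two conditional densities p(rho | intensity) and p(rho | location), multiplies them into a posterior over a Hounsfield-unit grid, and returns the posterior mean, mirroring the Bayesian combination described in the abstract. All names and parameter values are illustrative.

    ```python
    # Minimal sketch of the Bayesian fusion of intensity- and location-based
    # conditional densities; the Gaussian forms and all numbers are assumptions.
    import numpy as np

    def posterior_electron_density(rho_grid, mu_int, sigma_int, mu_loc, sigma_loc):
        """Combine p(rho | intensity) and p(rho | location); return the posterior mean."""
        p_intensity = np.exp(-0.5 * ((rho_grid - mu_int) / sigma_int) ** 2)
        p_location = np.exp(-0.5 * ((rho_grid - mu_loc) / sigma_loc) ** 2)
        posterior = p_intensity * p_location             # unnormalized Bayesian combination
        posterior /= np.trapz(posterior, rho_grid)       # normalize to a pdf
        return np.trapz(rho_grid * posterior, rho_grid)  # posterior mean = density estimate

    # Example voxel: intensity evidence suggests ~300 HU, atlas location suggests ~500 HU.
    rho = np.linspace(-1000.0, 2000.0, 3001)
    print(posterior_electron_density(rho, 300.0, 150.0, 500.0, 250.0))
    ```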

  2. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image; in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue, rather there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
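
    As a concrete illustration of the method-of-moments construction reviewed above, the sketch below fits a density of the exponential-family form p(x) proportional to exp(-lambda0 - lambda1*x - lambda2*x^2) so that its zeroth, first and second moments match prescribed targets. The grid, the targets and the truncation at two moments are illustrative assumptions, not the paper's formulation.

    ```python
    # Hedged sketch of the maximum entropy method of moments on a grid:
    # choose Lagrange multipliers so the moments of p(x) = exp(-sum_k lambda_k x^k)
    # match the targets.  Targets and grid limits are made up for illustration.
    import numpy as np
    from scipy.optimize import fsolve

    x = np.linspace(-5.0, 5.0, 2001)
    target_moments = np.array([1.0, 0.0, 1.0])   # normalization, mean, second moment

    def moment_residuals(lam):
        p = np.exp(-(lam[0] + lam[1] * x + lam[2] * x**2))
        return [np.trapz(x**k * p, x) - target_moments[k] for k in range(3)]

    lam = fsolve(moment_residuals, x0=[1.0, 0.0, 0.5])
    p = np.exp(-(lam[0] + lam[1] * x + lam[2] * x**2))
    print("recovered moments:", [float(np.trapz(x**k * p, x)) for k in range(3)])
    ```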

  3. On-line prognosis of fatigue crack propagation based on Gaussian weight-mixture proposal particle filter.

    PubMed

    Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo

    2018-01-01

    Accurate on-line prognosis of fatigue crack propagation is of great significance for prognostics and health management (PHM) technologies to ensure structural integrity, and it is a challenging task because of uncertainties that arise from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has proved to be a powerful tool for prognostic problems that are affected by uncertainties. However, most studies adopted the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from a serious particle degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and active guided-wave-based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed as the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameter of the state equation. The proposed method is verified on a fatigue test of attachment lugs, which are an important kind of joint component in aircraft structures.
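
    The mixture importance density described here can be sketched generically. The snippet below is not the paper's crack-growth model: it runs one prediction/update step of a particle filter for a scalar random-walk state observed directly, drawing each particle either from the transition pdf or from a measurement-centred Gaussian and weighting by target over proposal. The state model, noise variances and mixing weight alpha are illustrative assumptions.

    ```python
    # Hedged sketch of a particle-filter update whose importance density is a
    # mixture of the transition pdf and a measurement-based pdf.
    import numpy as np

    rng = np.random.default_rng(0)

    def gauss_pdf(x, mean, var):
        return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

    def mixture_proposal_update(particles, weights, z, q_var=0.05, r_var=0.02, alpha=0.5):
        """One step for a random-walk state x_t = x_{t-1} + noise, observed as z = x_t + noise."""
        n = particles.size
        pred_mean = particles                                  # transition mean f(x) = x
        from_transition = rng.random(n) < alpha
        proposed = np.where(
            from_transition,
            rng.normal(pred_mean, np.sqrt(q_var), size=n),     # draws from the transition pdf
            rng.normal(z, np.sqrt(r_var), size=n),             # draws from the measurement pdf
        )
        q = alpha * gauss_pdf(proposed, pred_mean, q_var) + (1 - alpha) * gauss_pdf(proposed, z, r_var)
        w = weights * gauss_pdf(z, proposed, r_var) * gauss_pdf(proposed, pred_mean, q_var) / q
        return proposed, w / w.sum()

    particles = rng.normal(1.0, 0.1, 500)
    weights = np.full(500, 1.0 / 500)
    particles, weights = mixture_proposal_update(particles, weights, z=1.3)
    print("posterior mean estimate:", float(np.sum(weights * particles)))
    ```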

  4. A prominent large high-density lipoprotein at birth enriched in apolipoprotein C-I identifies a new group of infants of lower birth weight and younger gestational age

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwiterovich Jr., Peter O.; Cockrill, Steven L.; Virgil, Donna G.

    2003-10-01

    Because low birth weight is associated with adverse cardiovascular risk and death in adults, lipoprotein heterogeneity at birth was studied. A prominent, large high-density lipoprotein (HDL) subclass enriched in apolipoprotein C-I (apoC-I) was found in 19 percent of infants, who had significantly lower birth weights and younger gestational ages and distinctly different lipoprotein profiles than infants with undetectable, possible or probable amounts of apoC-I-enriched HDL. An elevated amount of an apoC-I-enriched HDL identifies a new group of low birth weight infants.

  5. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right-hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
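
    For reference, the mass-density-weighted ensemble average that defines the APDF can be written in the standard Favre-type form below, consistent with the description in the abstract; the notation here is ours rather than the report's.

    ```latex
    % Density-weighted ensemble-averaged (fine-grained) PDF and the Favre mean it yields.
    \widetilde{P}(\psi;\,\mathbf{x},t) \;=\;
      \frac{\big\langle \rho\,\delta\!\big(\psi-\phi(\mathbf{x},t)\big)\big\rangle}{\langle \rho \rangle},
    \qquad
    \widetilde{\phi}(\mathbf{x},t) \;=\; \int \psi\,\widetilde{P}(\psi;\,\mathbf{x},t)\,\mathrm{d}\psi ,
    ```

    where the angle brackets denote the ensemble average, rho the mass density, phi the reacting scalar, and delta the fine-grained PDF kernel.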

  6. Model Considerations for Memory-based Automatic Music Transcription

    NASA Astrophysics Data System (ADS)

    Albrecht, Štěpán; Šmídl, Václav

    2009-12-01

    The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimating the weights are available; these methods differ in the assumptions imposed on the weights. In the Bayesian paradigm, these assumptions are typically expressed in the form of a prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about the music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using a combination of pdfs. The validity of the model is tested in simulations using synthetic data.

  7. Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data

    NASA Astrophysics Data System (ADS)

    Li, Lan; Chen, Erxue; Li, Zengyuan

    2013-01-01

    This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the component densities. The mixture model makes it possible to represent heterogeneous thematic classes that are not well fitted by a unimodal Wishart distribution. To make the calculation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to make the initial partition. Then we use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
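
    For orientation, the per-pixel posterior probabilities and the weighted parameter updates mentioned above follow the generic EM recursion for a finite mixture. Written with the complex Wishart density p_W and notation of our own choosing (C_i is the sample covariance matrix of pixel i, pi_c the class prior, Sigma_c the class centre), one iteration is:

    ```latex
    % E-step: posterior class probabilities.  M-step: weighted updates of priors and class centres.
    \gamma_{ic} \;=\; \frac{\pi_c\, p_W\!\big(\mathbf{C}_i \mid \boldsymbol{\Sigma}_c\big)}
                           {\sum_{c'} \pi_{c'}\, p_W\!\big(\mathbf{C}_i \mid \boldsymbol{\Sigma}_{c'}\big)},
    \qquad
    \pi_c \;\leftarrow\; \frac{1}{N}\sum_{i=1}^{N}\gamma_{ic},
    \qquad
    \boldsymbol{\Sigma}_c \;\leftarrow\; \frac{\sum_{i}\gamma_{ic}\,\mathbf{C}_i}{\sum_{i}\gamma_{ic}} .
    ```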

  8. Robust Estimation of Electron Density From Anatomic Magnetic Resonance Imaging of the Brain Using a Unifying Multi-Atlas Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Shangjie; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California; Hara, Wendy

    Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.

  9. Innocent until proven guilty? Stable coexistence of alien rainbow trout and native marble trout in a Slovenian stream

    NASA Astrophysics Data System (ADS)

    Vincenzi, Simone; Crivelli, Alain J.; Jesensek, Dusan; Rossi, Gianluigi; de Leo, Giulio A.

    2011-01-01

    To understand the consequences of the invasion of the nonnative rainbow trout Oncorhynchus mykiss on the native marble trout Salmo marmoratus, we compared two distinct headwater sectors where marble trout occur in allopatry (MTa) or sympatry (MTs) with rainbow trout (RTs) in the Idrijca River (Slovenia). Using data from field surveys from 2002 to 2009, with biannual (June and September) sampling and tagging from June 2004 onwards, we analyzed body growth and survival probabilities of marble trout in each stream sector. Density of age-0 trout in September over the study period was greater for MTs than MTa and very similar between MTs and RTs, while density of trout ≥age-1 was similar for MTa and MTs and greater than density of RTs. Monthly apparent survival probabilities were slightly higher in MTa than in MTs, while RTs showed a lower survival than MTs. Mean weight of marble and rainbow trout aged 0+ in September was negatively related to cohort density for both marble and rainbow trout, but the relationship was not significantly different between MTs and MTa. No clear depression of body growth of sympatric marble trout between sampling intervals was observed. Despite a later emergence, mean weight of RTs cohorts at age 0+ in September was significantly higher than that of both MTs and MTa. The establishment of a self-sustaining population of rainbow trout does not have a significant impact on body growth and survival probabilities of sympatric marble trout. The numerical dominance of rainbow trout in streams at lower altitudes seems to suggest that while the low summer flow pattern of Slovenian streams is favorable for rainbow trout invasion, the adaptation of marble trout to headwater environments may limit the invasion success of rainbow trout in headwaters.

  10. Committor of elementary reactions on multistate systems

    NASA Astrophysics Data System (ADS)

    Király, Péter; Kiss, Dóra Judit; Tóth, Gergely

    2018-04-01

    In our study, we extend the committor concept to multi-minima systems, where more than one reaction may proceed, but feasible data evaluation requires projection onto partial reactions. The elementary reaction committor and the corresponding probability density of the reactive trajectories are defined and calculated on a three-hole two-dimensional model system explored by single-particle Langevin dynamics. We propose a method to visualize several elementary reaction committor functions or probability densities of reactive trajectories on a single plot, which helps to identify the most important reaction channels and the nonreactive domains simultaneously. We suggest a weighting for the energy-committor plots that correctly shows the limits of both the minimal energy path and the average energy concepts. The methods also performed well in the analysis of molecular dynamics trajectories of 2-chlorobutane, where an elementary reaction committor, the probability densities, the potential energy/committor, and the free-energy/committor curves are presented.

  11. Density Weighted FDF Equations for Simulations of Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2011-01-01

    In this report, we briefly revisit the formulation of the density weighted filtered density function (DW-FDF) for large eddy simulation (LES) of turbulent reacting flows, which was proposed by Jaberi et al. (Jaberi, F.A., Colucci, P.J., James, S., Givi, P. and Pope, S.B., Filtered mass density function for large-eddy simulation of turbulent reacting flows, J. Fluid Mech., vol. 401, pp. 85-121, 1999). First, we follow the traditional derivation of the DW-FDF equations using the fine-grained probability density function (FG-PDF); then we explore another way of constructing the DW-FDF equations by starting directly from the compressible Navier-Stokes equations. We observe that the terms which are unclosed in the traditional DW-FDF equations are now closed in the newly constructed DW-FDF equations. This significant difference and its practical impact on the computational simulations may deserve further studies.

  12. Benchmarks for detecting 'breakthroughs' in clinical trials: empirical assessment of the probability of large treatment effects using kernel density estimation.

    PubMed

    Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin

    2014-10-21

    To understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed. We applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups. 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting a treatment with large effects is 10% (5-25%), and that the probability of detecting a treatment with very large effects is 2% (0.3-10%). Researchers themselves judged that they discovered a new, breakthrough intervention in 16% of trials. We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured.
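
    The weighted adaptive kernel density estimation used here can be sketched generically. The snippet below is not the authors' code: it builds a weighted fixed-bandwidth pilot estimate, derives Abramson-style local bandwidths from it, and re-evaluates the weighted estimate on a grid. The synthetic data, the weights and the bandwidth rule are illustrative assumptions.

    ```python
    # Hedged sketch of weighted adaptive (Abramson-style) kernel density estimation.
    import numpy as np

    def weighted_adaptive_kde(samples, weights, grid, h0=0.3):
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()

        def kde(points, bandwidths):
            u = (points[:, None] - samples[None, :]) / bandwidths
            k = np.exp(-0.5 * u**2) / (np.sqrt(2.0 * np.pi) * bandwidths)
            return k @ weights

        pilot = kde(samples, np.full(samples.size, h0))                 # weighted fixed-bandwidth pilot
        local_h = h0 * np.sqrt(np.exp(np.mean(np.log(pilot))) / pilot)  # Abramson local bandwidths
        return kde(grid, local_h)

    rng = np.random.default_rng(1)
    effects = rng.normal(0.0, 0.5, 200)       # synthetic stand-in for observed treatment effects
    w = rng.uniform(0.5, 2.0, 200)            # synthetic stand-in for trial weights
    grid = np.linspace(-2.0, 2.0, 401)
    density = weighted_adaptive_kde(effects, w, grid)
    mask = grid < -0.5                        # e.g. probability of a "large" beneficial effect
    print(float(np.trapz(density[mask], grid[mask])))
    ```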

  13. Simulations of Spray Reacting Flows in a Single Element LDI Injector With and Without Invoking an Eulerian Scalar PDF Method

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    This paper presents the numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and will be briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings are observed from invoking the Eulerian scalar PDF method, both in improving the simulation quality and in reducing the computing cost.

  14. Exposure to traffic-related air pollution during pregnancy and term low birth weight: estimation of causal associations in a semiparametric model.

    PubMed

    Padula, Amy M; Mortimer, Kathleen; Hubbard, Alan; Lurmann, Frederick; Jerrett, Michael; Tager, Ira B

    2012-11-01

    Traffic-related air pollution is recognized as an important contributor to health problems. Epidemiologic analyses suggest that prenatal exposure to traffic-related air pollutants may be associated with adverse birth outcomes; however, there is insufficient evidence to conclude that the relation is causal. The Study of Air Pollution, Genetics and Early Life Events comprises all births to women living in 4 counties in California's San Joaquin Valley during the years 2000-2006. The probability of low birth weight among full-term infants in the population was estimated using machine learning and targeted maximum likelihood estimation for each quartile of traffic exposure during pregnancy. If everyone lived near high-volume freeways (approximated as the fourth quartile of traffic density), the estimated probability of term low birth weight would be 2.27% (95% confidence interval: 2.16, 2.38) as compared with 2.02% (95% confidence interval: 1.90, 2.12) if everyone lived near smaller local roads (first quartile of traffic density). Assessment of potentially causal associations, in the absence of arbitrary model assumptions applied to the data, should result in relatively unbiased estimates. The current results support findings from previous studies that prenatal exposure to traffic-related air pollution may adversely affect birth weight among full-term infants.

  15. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is defined as an integral of the square root PDF along the space direction, which yields a function of time only and can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  16. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew

    2009-03-01

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is defined as an integral of the square root PDF along the space direction, which yields a function of time only and can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  17. Topology of two-dimensional turbulent flows of dust and gas

    NASA Astrophysics Data System (ADS)

    Mitra, Dhrubaditya; Perlekar, Prasad

    2018-04-01

    We perform direct numerical simulations (DNS) of passive heavy inertial particles (dust) in homogeneous and isotropic two-dimensional turbulent flows (gas) for a range of Stokes number, St<1. We solve for the particles using both a Lagrangian and an Eulerian approach (with a shock-capturing scheme). In the latter, the particles are described by a dust-density field and a dust-velocity field. We find the following: the dust-density field in our Eulerian simulations has the same correlation dimension d2 as obtained from the clustering of particles in the Lagrangian simulations for St<1; the cumulative probability distribution function of the dust density coarse grained over a scale r, in the inertial range, has a left tail with a power-law falloff indicating the presence of voids; the energy spectrum of the dust velocity has a power-law range with an exponent that is the same as the gas-velocity spectrum except at very high Fourier modes; the compressibility of the dust-velocity field is proportional to St^2. We quantify the topological properties of the dust velocity and the gas velocity through their gradient matrices, called A and B, respectively. Our DNS confirms that the statistics of topological properties of B are the same in Eulerian and Lagrangian frames only if the Eulerian data are weighted by the dust density. We use this correspondence to study the statistics of topological properties of A in the Lagrangian frame from our Eulerian simulations by calculating density-weighted probability distribution functions. We further find that in the Lagrangian frame, the mean value of the trace of A is negative and its magnitude increases with St approximately as exp(-C/St) with a constant C ≈ 0.1. The statistical distribution of different topological structures that appear in the dust flow is different in the Eulerian and Lagrangian (density-weighted Eulerian) cases, particularly for St close to unity. In both of these cases, for small St the topological structures have close to zero divergence and are either vortical (elliptic) or strain dominated (hyperbolic, saddle). As St increases, the contribution to negative divergence comes mostly from saddles and the contribution to positive divergence comes from both vortices and saddles. Compared to the Eulerian case, the Lagrangian (density-weighted Eulerian) case has fewer outward spirals and more converging saddles. Inward spirals are the least probable topological structures in both cases.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antolin, J.; Instituto Carlos I de Fisica Teorica y Computacional, Universidad de Granada, ES-18071 Granada; Bouvrie, P. A.

    An alternative one-parameter measure of divergence is proposed, quantifying the discrepancy among general probability densities. Its main mathematical properties include (i) comparison among an arbitrary number of functions, (ii) the possibility of assigning different weights to each function according to its relevance on the comparative procedure, and (iii) ability to modify the relative contribution of different regions within the domain. Applications to the study of atomic density functions, in both conjugated spaces, show the versatility and universality of this divergence.

  19. The effect of weight change on changes in breast density measures over menopause in a breast cancer screening cohort.

    PubMed

    Wanders, Johanna Olga Pauline; Bakker, Marije Fokje; Veldhuis, Wouter Bernard; Peeters, Petra Huberdina Maria; van Gils, Carla Henrica

    2015-05-30

    High weight and high percentage mammographic breast density are both breast cancer risk factors but are negatively correlated. Therefore, we wanted to obtain more insight into this apparent paradox. We investigated in a longitudinal study how weight change over menopause is related to changes in mammographic breast features. Five hundred ninety-one participants of the EPIC-NL cohort were divided into three groups according to their prospectively measured weight change over menopause: (1) weight loss (more than -3.0 %), (2) stable weight (between -3.0 % and +3.0 %), and (3) weight gain (more than 3.0 %). SPSS GLM univariate analysis was used to determine both the mean breast measure changes in, and the trend over, the weight change groups. Over a median period of 5 years, the mean changes in percent density in these groups were -5.0 % (95 % confidence interval (CI) -8.0; -2.1), -6.8 % (95 % CI -9.0; -4.5), and -10.2 % (95 % CI -12.5; -7.9), respectively (P-trend = 0.001). The mean changes in dense area were -16.7 cm² (95 % CI -20.1; -13.4), -16.4 cm² (95 % CI -18.9; -13.9), and -18.1 cm² (95 % CI -20.6; -15.5), respectively (P-trend = 0.437). Finally, the mean changes in nondense area were -6.1 cm² (95 % CI -11.9; -0.4), -0.6 cm² (95 % CI -4.9; 3.8), and 5.3 cm² (95 % CI 0.9; 9.8), respectively (P-trend < 0.001). Going through menopause is associated with a decrease in both percent density and dense area. Owing to an increase in the nondense tissue, the decrease in percent density is largest in women who gain weight. The decrease in dense area is not related to weight change. So the fact that both high percent density and high weight or weight gain are associated with high postmenopausal breast cancer risk can probably not be explained by an increase (or slower decrease) of dense area in women gaining weight compared with women losing weight or maintaining a stable weight. These results suggest that weight and dense area are presumably two independent postmenopausal breast cancer risk factors.

  20. Exposure to Traffic-related Air Pollution During Pregnancy and Term Low Birth Weight: Estimation of Causal Associations in a Semiparametric Model

    PubMed Central

    Padula, Amy M.; Mortimer, Kathleen; Hubbard, Alan; Lurmann, Frederick; Jerrett, Michael; Tager, Ira B.

    2012-01-01

    Traffic-related air pollution is recognized as an important contributor to health problems. Epidemiologic analyses suggest that prenatal exposure to traffic-related air pollutants may be associated with adverse birth outcomes; however, there is insufficient evidence to conclude that the relation is causal. The Study of Air Pollution, Genetics and Early Life Events comprises all births to women living in 4 counties in California's San Joaquin Valley during the years 2000–2006. The probability of low birth weight among full-term infants in the population was estimated using machine learning and targeted maximum likelihood estimation for each quartile of traffic exposure during pregnancy. If everyone lived near high-volume freeways (approximated as the fourth quartile of traffic density), the estimated probability of term low birth weight would be 2.27% (95% confidence interval: 2.16, 2.38) as compared with 2.02% (95% confidence interval: 1.90, 2.12) if everyone lived near smaller local roads (first quartile of traffic density). Assessment of potentially causal associations, in the absence of arbitrary model assumptions applied to the data, should result in relatively unbiased estimates. The current results support findings from previous studies that prenatal exposure to traffic-related air pollution may adversely affect birth weight among full-term infants. PMID:23045474

  1. Direct-method SAD phasing with partial-structure iteration: towards automation.

    PubMed

    Wang, J W; Chen, J R; Gu, Y X; Zheng, C D; Fan, H F

    2004-11-01

    The probability formula of direct-method SAD (single-wavelength anomalous diffraction) phasing proposed by Fan & Gu (1985, Acta Cryst. A41, 280-284) contains partial-structure information in the form of a Sim-weighting term. Previously, only the substructure of anomalous scatterers has been included in this term. In the case that the subsequent density modification and model building yield only structure fragments, which do not straightforwardly lead to the complete solution, the partial structure can be fed back into the Sim-weighting term of the probability formula in order to strengthen its phasing power and to benefit the subsequent automatic model building. The procedure has been tested with experimental SAD data from two known proteins with copper and sulfur as the anomalous scatterers.

  2. A cross-diffusion system derived from a Fokker-Planck equation with partial averaging

    NASA Astrophysics Data System (ADS)

    Jüngel, Ansgar; Zamponi, Nicola

    2017-02-01

    A cross-diffusion system for two components with a Laplacian structure is analyzed on the multi-dimensional torus. This system, which was recently suggested by P.-L. Lions, is formally derived from a Fokker-Planck equation for the probability density associated with a multi-dimensional Itō process, assuming that the diffusion coefficients depend on partial averages of the probability density with exponential weights. A main feature is that the diffusion matrix of the limiting cross-diffusion system is generally neither symmetric nor positive definite, but its structure allows for the use of entropy methods. The global-in-time existence of positive weak solutions is proved and, under a simplifying assumption, the large-time asymptotics is investigated.

  3. Assessing future vent opening locations at the Somma-Vesuvio volcanic complex: 2. Probability maps of the caldera for a future Plinian/sub-Plinian event with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.

    2017-06-01

    In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.

  4. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.

  5. An improved probabilistic approach for linking progenitor and descendant galaxy populations using comoving number density

    NASA Astrophysics Data System (ADS)

    Wellons, Sarah; Torrey, Paul

    2017-06-01

    Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift zf probabilities of being progenitors/descendants of a galaxy population at another redshift z0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.

  6. Parameterization of In-Cloud Aerosol Scavenging Due To Atmospheric Ionization: 2. Effects of Varying Particle Density

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Tinsley, Brian A.

    2018-03-01

    Simulations and parameterization of collision rate coefficients for aerosol particles with 3 μm radius droplets have been extended to a range of particle densities up to 2,000 kg m-3 for midtropospheric (~5 km) conditions (540 hPa, -17°C). The increasing weight has no effect on collisions for particle radii less than 0.2 μm, but for greater radii the weight effect becomes significant and usually decreases the collision rate coefficient. When increasing size and density of particles make the fall speed of the particle relative to undisturbed air approach that of the droplet, the effect of the particle falling away in the stagnation region ahead of the droplet becomes important, and the probability of frontside collisions can decrease to zero. Collisions on the rear side of the droplet can be enhanced as particle weight increases, and in this case the weight effect tends to increase the rate coefficients. For charges on the droplet and for large particles with density ρ < 1,000 kg m-3, the predominant effect is an increase in the rate coefficient due to the short-range attractive image electric force. With density ρ above about 1,000 kg m-3, the stagnation region prevents particles from moving close to the droplet and reduces the effect of these short-range forces. Together with previous work, it is now possible to obtain collision rate coefficients for realistic combinations of droplet charge, particle charge, droplet radius, particle radius, particle density, and relative humidity in clouds. The parameterization allows rapid access to these values for use in cloud models.

  7. Creation of the BMA ensemble for SST using a parallel processing technique

    NASA Astrophysics Data System (ADS)

    Kim, Kwangjin; Lee, Yang Won

    2013-10-01

    Although they serve the same purpose, individual satellite products have different values because of their inescapable uncertainties. Satellite products have also been generated over a long period, and they are both varied and voluminous, so efforts to reduce the uncertainty and to handle such large data volumes are necessary. In this paper, we create an ensemble Sea Surface Temperature (SST) using MODIS Aqua, MODIS Terra and COMS (Communication Ocean and Meteorological Satellite). We used Bayesian Model Averaging (BMA) as the ensemble method. The principle of BMA is to synthesize the conditional probability density functions (PDFs) using posterior probabilities as weights. The posterior probabilities are estimated using the EM algorithm, and the BMA PDF is obtained as a weighted average. As a result, the ensemble SST showed the lowest RMSE and MAE, which demonstrates the applicability of BMA to satellite data ensembles. As future work, parallel processing techniques using the Hadoop framework will be adopted for more efficient computation of very large satellite datasets.
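
    The BMA combination described here can be written compactly; the notation below is ours (g_k is the conditional pdf of SST given member forecast f_k, and w_k its EM-estimated posterior weight), not taken from the paper.

    ```latex
    % BMA predictive pdf as a weighted mixture of member pdfs, and the BMA point forecast.
    p\big(y \mid f_1,\dots,f_K\big) \;=\; \sum_{k=1}^{K} w_k\, g_k\big(y \mid f_k\big),
    \qquad \sum_{k=1}^{K} w_k \;=\; 1,
    \qquad \bar{y}_{\mathrm{BMA}} \;=\; \sum_{k=1}^{K} w_k\, f_k .
    ```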

  8. Automated segmentation of neuroanatomical structures in multispectral MR microscopy of the mouse brain.

    PubMed

    Ali, Anjum A; Dale, Anders M; Badea, Alexandra; Johnson, G Allan

    2005-08-15

    We present the automated segmentation of magnetic resonance microscopy (MRM) images of the C57BL/6J mouse brain into 21 neuroanatomical structures, including the ventricular system, corpus callosum, hippocampus, caudate putamen, inferior colliculus, internal capsule, globus pallidus, and substantia nigra. The segmentation algorithm operates on multispectral, three-dimensional (3D) MR data acquired at 90-microm isotropic resolution. Probabilistic information used in the segmentation is extracted from training datasets of T2-weighted, proton density-weighted, and diffusion-weighted acquisitions. Spatial information is employed in the form of prior probabilities of occurrence of a structure at a location (location priors) and the pairwise probabilities between structures (contextual priors). Validation using standard morphometry indices shows good consistency between automatically segmented and manually traced data. Results achieved in the mouse brain are comparable with those achieved in human brain studies using similar techniques. The segmentation algorithm shows excellent potential for routine morphological phenotyping of mouse models.

  9. Shielding and activity estimator for template-based nuclide identification methods

    DOEpatents

    Nelson, Karl Einar

    2013-04-09

    According to one embodiment, a method for estimating an activity of one or more radio-nuclides includes receiving one or more templates, the one or more templates corresponding to one or more radio-nuclides which contribute to a probable solution, receiving one or more weighting factors, each weighting factor representing a contribution of one radio-nuclide to the probable solution, computing an effective areal density for each of the one or more radio-nuclides, computing an effective atomic number (Z) for each of the one or more radio-nuclides, computing an effective metric for each of the one or more radio-nuclides, and computing an estimated activity for each of the one or more radio-nuclides. In other embodiments, computer program products, systems, and other methods are presented for estimating an activity of one or more radio-nuclides.

  10. Psychophysics of the probability weighting function

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki

    2011-03-01

    A probability weighting function w(p) for an objective probability p in decision under risk plays a pivotal role in Kahneman-Tversky prospect theory. Although recent studies in econophysics and neuroeconomics widely utilized probability weighting functions, psychophysical foundations of the probability weighting functions have been unknown. Notably, the behavioral economist Prelec (1998) [4] axiomatically derived the probability weighting function w(p)=exp(-(-ln p)^α) (0<α<1, with w(0)=0, w(1/e)=1/e, w(1)=1), which has extensively been studied in behavioral neuroeconomics. The present study utilizes psychophysical theory to derive Prelec's probability weighting function from psychophysical laws of perceived waiting time in probabilistic choices. Also, the relations between the parameters in the probability weighting function and the probability discounting function in behavioral psychology are derived. Future directions in the application of the psychophysical theory of the probability weighting function in econophysics and neuroeconomics are discussed.
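
    A quick numerical check of the Prelec form quoted above is given below; the choice alpha = 0.65 is arbitrary and only for illustration.

    ```python
    # Prelec (1998) probability weighting: w(p) = exp(-(-ln p)**alpha), 0 < alpha < 1.
    import numpy as np

    def prelec_w(p, alpha=0.65):
        p = np.asarray(p, dtype=float)
        return np.exp(-(-np.log(p)) ** alpha)

    # w(1/e) = 1/e and w(1) = 1 for any alpha; small probabilities are over-weighted
    # and large ones under-weighted, giving the characteristic inverse-S shape.
    print(prelec_w([1.0 / np.e, 0.01, 0.5, 0.99, 1.0]))
    ```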

  11. A Riemannian framework for orientation distribution function computing.

    PubMed

    Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid

    2009-01-01

    Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Diffusion Imaging (HARDI) can better explore the complex microstructure of white matter. The Orientation Distribution Function (ODF) is used to describe the probability of the fiber direction. The Fisher information metric has been constructed for probability density families in information geometry and has been successfully applied to tensor computing in DTI. In this paper, we present a state-of-the-art Riemannian framework for ODF computing based on information geometry and sparse representation of orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map and geodesic have closed forms, and the weighted Fréchet mean exists uniquely on this manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between the ODF and the isotropic ODF. The Rényi entropy H1/2 of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in a Euclidean space. As an application, Lagrange interpolation on the ODF field is proposed based on the weighted Fréchet mean. We validate our methods in synthetic and real data experiments. Compared with existing Riemannian frameworks on ODF, our framework is model-free. The estimation of the parameters, i.e. the Riemannian coordinates, is robust and linear. Moreover, it should be noted that our theoretical results can be used for any probability density function (PDF) under an orthonormal basis representation.
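
    For reference, in the square-root (unit Hilbert sphere) representation commonly used in such frameworks, the closed-form geodesic distance and the GA measure described above take the form below; the notation is ours and intended only as a sketch.

    ```latex
    % Fisher-Rao geodesic distance between two ODFs p and q in the sqrt representation,
    % and Geometric Anisotropy as the distance to the isotropic ODF.
    d_{\mathrm{FR}}(p, q) \;=\; \arccos\!\left(\int_{\mathbb{S}^2} \sqrt{p(\mathbf{u})}\,\sqrt{q(\mathbf{u})}\;d\mathbf{u}\right),
    \qquad
    \mathrm{GA}(p) \;=\; d_{\mathrm{FR}}\!\big(p,\, u_{\mathrm{iso}}\big),
    \quad u_{\mathrm{iso}}(\mathbf{u}) \;=\; \tfrac{1}{4\pi}.
    ```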

  12. Improving experimental phases for strong reflections prior to density modification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.

    Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.

  13. Improving experimental phases for strong reflections prior to density modification

    DOE PAGES

    Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.; ...

    2013-09-20

    Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.

  14. A quadrature based method of moments for nonlinear Fokker-Planck equations

    NASA Astrophysics Data System (ADS)

    Otten, Dustin L.; Vedula, Prakash

    2011-09-01

    Fokker-Planck equations which are nonlinear with respect to their probability densities and occur in many nonequilibrium systems relevant to mean field interaction models, plasmas, fermions and bosons can be challenging to solve numerically. To address some underlying challenges, we propose the application of the direct quadrature based method of moments (DQMOM) for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations (NLFPEs). In DQMOM, probability density (or other distribution) functions are represented using a finite collection of Dirac delta functions, characterized by quadrature weights and locations (or abscissas) that are determined based on constraints due to evolution of generalized moments. Three particular examples of nonlinear Fokker-Planck equations considered in this paper include descriptions of: (i) the Shimizu-Yamada model, (ii) the Desai-Zwanzig model (both of which have been developed as models of muscular contraction) and (iii) fermions and bosons. Results based on DQMOM, for the transient and stationary solutions of the nonlinear Fokker-Planck equations, have been found to be in good agreement with other available analytical and numerical approaches. It is also shown that approximate reconstruction of the underlying probability density function from moments obtained from DQMOM can be satisfactorily achieved using a maximum entropy method.
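
    The delta-function ansatz at the heart of DQMOM, as described above, can be written as follows (notation ours): the density is carried by N weighted abscissas, and the weights and locations evolve so that the generalized moments satisfy the moment equations derived from the Fokker-Planck equation.

    ```latex
    % Quadrature representation of the density and the resulting generalized moments.
    f(x,t) \;\approx\; \sum_{i=1}^{N} w_i(t)\,\delta\!\big(x - x_i(t)\big),
    \qquad
    m_k(t) \;=\; \int x^{k} f(x,t)\,dx \;=\; \sum_{i=1}^{N} w_i(t)\, x_i(t)^{k}.
    ```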

  15. Access to fast food and food prices: relationship with fruit and vegetable consumption and overweight among adolescents.

    PubMed

    Powell, Lisa M; Auld, M Christopher; Chaloupka, Frank J; O'Malley, Patrick M; Johnston, Lloyd D

    2007-01-01

    We examine the extent to which food prices and restaurant outlet density are associated with adolescent fruit and vegetable consumption, body mass index (BMI), and the probability of overweight. We use repeated cross-sections of individual-level data on adolescents from the Monitoring the Future Surveys from 1997 to 2003 combined with fast food and fruit and vegetable prices obtained from the American Chamber of Commerce Researchers Association and fast food and full-service restaurant outlet density measures obtained from Dun & Bradstreet. The results suggest that the price of a fast food meal is an important determinant of adolescents' body weight and eating habits: a 10% increase in the price of a fast food meal leads to a 3.0% increase in the probability of frequent fruit and vegetable consumption, a 0.4% decrease in BMI, and a 5.9% decrease in probability of overweight. The price of fruits and vegetables and restaurant outlet density are less important determinants, although these variables typically have the expected sign and are often statistically associated with our outcome measures. Despite these findings, changes in all observed economic and socio-demographic characteristics together only explain roughly one-quarter of the change in mean BMI and one-fifth of the change in overweight over the 1997-2003 sampling period.

  16. Nonmechanistic forecasts of seasonal influenza with iterative one-week-ahead distributions.

    PubMed

    Brooks, Logan C; Farrow, David C; Hyun, Sangwon; Tibshirani, Ryan J; Rosenfeld, Roni

    2018-06-15

    Accurate and reliable forecasts of seasonal epidemics of infectious disease can assist in the design of countermeasures and increase public awareness and preparedness. This article describes two main contributions we made recently toward this goal: a novel approach to probabilistic modeling of surveillance time series based on "delta densities", and an optimization scheme for combining output from multiple forecasting methods into an adaptively weighted ensemble. Delta densities describe the probability distribution of the change between one observation and the next, conditioned on available data; chaining together nonparametric estimates of these distributions yields a model for an entire trajectory. Corresponding distributional forecasts cover more observed events than alternatives that treat the whole season as a unit, and improve upon multiple evaluation metrics when extracting key targets of interest to public health officials. Adaptively weighted ensembles integrate the results of multiple forecasting methods, such as delta density, using weights that can change from situation to situation. We treat selection of optimal weightings across forecasting methods as a separate estimation task, and describe an estimation procedure based on optimizing cross-validation performance. We consider some details of the data generation process, including data revisions and holiday effects, both in the construction of these forecasting methods and when performing retrospective evaluation. The delta density method and an adaptively weighted ensemble of other forecasting methods each improve significantly on the next best ensemble component when applied separately, and achieve even better cross-validated performance when used in conjunction. We submitted real-time forecasts based on these contributions as part of CDC's 2015/2016 FluSight Collaborative Comparison. Among the fourteen submissions that season, this system was ranked by CDC as the most accurate.
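
    The "delta density" factorization described here can be written generically (notation ours): the distribution over a whole season's trajectory is built by chaining conditional distributions of week-to-week changes,

    ```latex
    % Chained one-week-ahead ("delta density") factorization of a trajectory y_1,...,y_T.
    p\big(y_{1:T}\big) \;=\; p(y_1)\,\prod_{t=1}^{T-1} p\big(y_{t+1} - y_t \,\big|\, y_{1:t}\big),
    ```

    with each conditional estimated nonparametrically from historical surveillance data.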

  17. The internal consistency of the standard gamble: tests after adjusting for prospect theory.

    PubMed

    Oliver, Adam

    2003-07-01

    This article reports a study that tests whether the internal consistency of the standard gamble can be improved upon by incorporating loss weighting and probability transformation parameters in the standard gamble valuation procedure. Five alternatives to the standard EU formulation are considered: (1) probability transformation within an EU framework; and, within a prospect theory framework, (2) loss weighting and full probability transformation, (3) no loss weighting and full probability transformation, (4) loss weighting and no probability transformation, and (5) loss weighting and partial probability transformation. Of the five alternatives, only the prospect theory formulation with loss weighting and no probability transformation offers an improvement in internal consistency over the standard EU valuation procedure.

  18. Birds and insects as radar targets - A review

    NASA Technical Reports Server (NTRS)

    Vaughn, C. R.

    1985-01-01

    A review of radar cross-section measurements of birds and insects is presented. A brief discussion of some possible theoretical models is also given and comparisons made with the measurements. The comparisons suggest that most targets are, at present, better modeled by a prolate spheroid having a length-to-width ratio between 3 and 10 than by the often used equivalent weight water sphere. In addition, many targets observed with linear horizontal polarization have maximum cross sections much better estimated by a resonant half-wave dipole than by a water sphere. Also considered are birds and insects in the aggregate as a local radar 'clutter' source. Order-of-magnitude estimates are given for many reasonable target number densities. These estimates are then used to predict X-band volume reflectivities. Other topics that are of interest to the radar engineer are discussed, including the doppler bandwidth due to the internal motions of a single bird, the radar cross-section probability densities of single birds and insects, the variability of the functional form of the probability density functions, and the Fourier spectra of single birds and insects.

  19. Comparison of amyloid plaque contrast generated by T2-, T2*-, and susceptibility-weighted imaging methods in transgenic mouse models of Alzheimer’s disease

    PubMed Central

    Chamberlain, Ryan; Reyes, Denise; Curran, Geoffrey L.; Marjanska, Malgorzata; Wengenack, Thomas M.; Poduslo, Joseph F.; Garwood, Michael; Jack, Clifford R.

    2009-01-01

    One of the hallmark pathologies of Alzheimer’s disease (AD) is amyloid plaque deposition. Plaques appear hypointense on T2- and T2*-weighted MR images, probably due to the presence of endogenous iron, but no quantitative comparison of various imaging techniques has been reported. We estimated the T1, T2, T2*, and proton density values of cortical plaques and normal cortical tissue and analyzed the plaque contrast generated by a collection of T2-, T2*-, and susceptibility-weighted imaging (SWI) methods in ex vivo transgenic mouse specimens. The proton density and T1 values were similar for both cortical plaques and normal cortical tissue. The T2 and T2* values were similar in cortical plaques, which indicates that the iron content of cortical plaques may not be as large as previously thought. Ex vivo plaque contrast was increased compared to a previously reported spin echo sequence by summing multiple echoes and by performing SWI; however, gradient echo and susceptibility-weighted imaging were found to be impractical for in vivo imaging due to susceptibility interface-related signal loss in the cortex. PMID:19253386

  20. Comparison of photometric and weight estimation of mycobacterium content in homogeneous BCG cultures containing Tween 80

    PubMed Central

    Schuh, V.; Šír, J.; Galliová, J.; Švandová, E.

    1966-01-01

    A comparison of the weight and photometric methods of primary assay of BCG vaccine has been made, using a vaccine prepared in albumin-free medium but containing Tween 80. In the weight method, the bacteria were trapped on a membrane filter; for photometry a Pulfrich Elpho photometer and an instrument of Czech origin were used. The photometric results were the more precise, provided that the measurements were made within two days of completion of growth; after this time the optical density of the suspension began to decrease slowly. The lack of precision of the weighing method is probably due to the small weight of culture deposit (which was almost on the limit of accuracy of the analytical balance) and to difficulties in the manipulation of the ultrafilter. PMID:5335458

  1. Egg production of turbot, Scophthalmus maximus, in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Nissling, Anders; Florin, Ann-Britt; Thorsen, Anders; Bergström, Ulf

    2013-11-01

    In the brackish water Baltic Sea turbot spawn at ~ 6-9 psu along the coast and on offshore banks in ICES SD 24-29, with salinity influencing the reproductive success. The potential fecundity (the stock of vitellogenic oocytes in the pre-spawning ovary), egg size (diameter and dry weight of artificially fertilized 1-day-old eggs) and gonad dry weight were assessed for fish sampled in SD 25 and SD 28. Multiple regression analysis identified somatic weight, or total length in combination with Fulton's condition factor, as main predictors of fecundity and gonad dry weight with stage of maturity (oocyte packing density or leading cohort) as an additional predictor. For egg size, somatic weight was identified as main predictor while otolith weight (proxy for age) was an additional predictor. Univariate analysis using GLM revealed significantly higher fecundity and gonad dry weight for turbot from SD 28 (3378-3474 oocytes/g somatic weight) compared to those from SD 25 (2343 oocytes/g somatic weight), with no difference in egg size (1.05 ± 0.03 mm diameter and 46.8 ± 6.5 μg dry weight; mean ± sd). The difference in egg production matched egg survival probabilities in relation to salinity conditions suggesting selection for higher fecundity as a consequence of poorer reproductive success at lower salinities. This supports the hypothesis of higher size-specific fecundity towards the limit of the distribution of a species as an adaptation to harsher environmental conditions and lower offspring survival probabilities. Within SD 28 comparisons were made between two major fishing areas targeting spawning aggregations and a marine protected area without fishing. The outcome was inconclusive and is discussed with respect to potential fishery induced effects, effects of the salinity gradient, effects of specific year-classes, and effects of maturation status of sampled fish.

  2. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.

  3. Improving experimental phases for strong reflections prior to density modification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uervirojnangkoorn, Monarin; University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck; Hilgenfeld, Rolf, E-mail: hilgenfeld@biochem.uni-luebeck.de

    A genetic algorithm has been developed to optimize the phases of the strongest reflections in SIR/SAD data. This is shown to facilitate density modification and model building in several test cases. Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. A computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.

  4. Probability density function characterization for aggregated large-scale wind power based on Weibull mixtures

    DOE PAGES

    Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...

    2016-02-02

    Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations of aggregated wind power generation. With this aim, the present paper focuses on Weibull mixtures to characterize the probability density function (PDF) of aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
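
    As a rough illustration of the model-selection step, the sketch below fits a two-component Weibull mixture to synthetic data by direct likelihood maximization and computes the AIC and BIC that would be compared across candidate numbers of components; the data, starting values, and optimizer settings are assumptions for the example only.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import weibull_min

        rng = np.random.default_rng(0)
        x = np.concatenate([weibull_min.rvs(1.5, scale=2.0, size=400, random_state=rng),
                            weibull_min.rvs(4.0, scale=6.0, size=600, random_state=rng)])

        def nll_mix2(theta):
            # theta = [logit of mixture weight, log c1, log scale1, log c2, log scale2]
            w = 1.0 / (1.0 + np.exp(-theta[0]))
            c1, s1, c2, s2 = np.exp(theta[1:])
            pdf = w * weibull_min.pdf(x, c1, scale=s1) + (1 - w) * weibull_min.pdf(x, c2, scale=s2)
            return -np.sum(np.log(pdf + 1e-300))

        res = minimize(nll_mix2, x0=np.array([0.0, 0.0, 0.5, 1.0, 1.5]),
                       method="Nelder-Mead", options={"maxiter": 20000})
        k_params = 5                                     # mixture weight + two shapes + two scales
        bic = k_params * np.log(x.size) + 2 * res.fun    # BIC = k ln(n) - 2 ln(L)
        aic = 2 * k_params + 2 * res.fun                 # AIC = 2k - 2 ln(L)
        print(bic, aic)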

  5. Weights, growth, and survival of timber wolf pups in Minnesota

    USGS Publications Warehouse

    Van Ballenberghe, V.; Mech, L.D.

    1975-01-01

    Weights, growth rates, canine tooth lengths, and survival data were obtained from 73 wild wolf (Canis lupus) pups that were 8 to 28 weeks old when live-trapped in three areas of northern Minnesota from 1969 to 1972. Relative weights of wild pups are expressed as percentages of a standard weight curve based on data from captive pups of similar age. These relative weights varied greatly within litters, between litters, and between years; extremes of 31 to 144 percent of the standard were observed. Growth rates ranging from 0.05 to 0.23 kilograms per day were observed, and similar variations in general development and in replacement and growth of canine teeth were noted. Survival data based on radio-tracking and tag returns indicated that pups with relative weights less than 65 percent of standard have a poor chance of survival, whereas pups of at least 80 percent of standard weight have high survivability. Pups born in 1972 were especially underweight, probably a result of declining white-tailed deer (Odocoileus virginianus) densities in the interior of the Superior National Forest study area.

  6. Design, fabrication and characterization of oxidized alginate-gelatin hydrogels for muscle tissue engineering applications.

    PubMed

    Baniasadi, Hossein; Mashayekhan, Shohreh; Fadaoddini, Samira; Haghirsharifzamini, Yasamin

    2016-07-01

    In this study, we report the preparation of self-cross-linked oxidized alginate-gelatin hydrogels for muscle tissue engineering. The effects of oxidation degree (OD) and oxidized alginate/gelatin (OA/GEL) weight ratio were examined, and the results showed that, at a constant OA/GEL weight ratio, both cross-linking density and Young's modulus increased with increasing OD owing to the larger number of aldehyde groups. Furthermore, the degradation rate increased with increasing OD, probably because the decrease in alginate molecular weight during the oxidation reaction facilitated degradation of the alginate chains. MTT cytotoxicity assays performed on Wharton's Jelly-derived umbilical cord mesenchymal stem cells cultured on hydrogels with an OD of 30% showed that the highest rate of cell proliferation belonged to the hydrogel with an OA/GEL weight ratio of 30/70. Overall, it can be concluded from the results that the hydrogel prepared with an OA/GEL weight ratio of 30/70 and an OD of 30% could be a suitable candidate for use in muscle tissue engineering. © The Author(s) 2016.

  7. Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials

    NASA Astrophysics Data System (ADS)

    Malik, Pradeep; Swaminathan, A.

    2010-11-01

    In this work we consider certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have their weight function related to the probability density function of F distribution and are finite in number up to orthogonality. We generalize these polynomials for fractional order by considering the Riemann-Liouville type operator on these polynomials. Various properties like explicit representation in terms of hypergeometric functions, differential equations, recurrence relations are derived.

  8. Supervised variational model with statistical inference and its application in medical image segmentation.

    PubMed

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.

  9. Mixture EMOS model for calibrating ensemble forecasts of wind speed.

    PubMed

    Baran, S; Lerch, S

    2016-03-01

    Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions, where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-Range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and improved accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better-calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors. Environmetrics published by John Wiley & Sons Ltd.
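
    For concreteness, a minimal sketch of the kind of predictive density involved, a weighted mixture of a zero-truncated normal and a log-normal, is given below; the parameter values are placeholders rather than fitted EMOS coefficients, which in the paper are linked to the ensemble members and estimated by optimizing proper scoring rules.

        import numpy as np
        from scipy.stats import truncnorm, lognorm

        def tn_ln_mixture_pdf(y, w, mu_tn, sigma_tn, mu_ln, sigma_ln):
            """Mixture density w*TN0(mu, sigma) + (1-w)*LN(mu, sigma) for wind speed y >= 0."""
            a = (0.0 - mu_tn) / sigma_tn                     # lower truncation at zero
            tn = truncnorm.pdf(y, a, np.inf, loc=mu_tn, scale=sigma_tn)
            ln = lognorm.pdf(y, s=sigma_ln, scale=np.exp(mu_ln))
            return w * tn + (1.0 - w) * ln

        y = np.linspace(0.01, 25.0, 5)
        print(tn_ln_mixture_pdf(y, w=0.6, mu_tn=5.0, sigma_tn=2.0, mu_ln=1.5, sigma_ln=0.4))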

  10. Probability Weighting Functions Derived from Hyperbolic Time Discounting: Psychophysical Models and Their Individual Level Testing.

    PubMed

    Takemura, Kazuhisa; Murakami, Hajime

    2016-01-01

    A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to show probability weighting functions from the point of view of waiting time for a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 - k log p)^(-1). Moreover, the probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To illustrate the fitness of each model, a psychological experiment was conducted to assess the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of the individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed.
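
    A minimal numerical check of the quoted expected-value weighting function, w(p) = (1 - k log p)^(-1), is shown below; the value of k is arbitrary and chosen only to illustrate the overweighting of small probabilities.

        import numpy as np

        def w(p, k):
            # Expected-value weighting function quoted in the abstract.
            return 1.0 / (1.0 - k * np.log(p))

        p = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
        print(w(p, k=1.0))   # small p is overweighted relative to p itself; w(1) = 1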

  11. Generalized fish life-cycle population model and computer program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeAngelis, D. L.; Van Winkle, W.; Christensen, S. W.

    1978-03-01

    A generalized fish life-cycle population model and computer program have been prepared to evaluate the long-term effect of changes in mortality in age class 0. The general question concerns what happens to a fishery when density-independent sources of mortality are introduced that act on age class 0, particularly entrainment and impingement at power plants. This paper discusses the model formulation and computer program, including sample results. The population model consists of a system of difference equations involving age-dependent fecundity and survival. The fecundity for each age class is assumed to be a function of both the fraction of females sexually mature and the weight of females as they enter each age class. Natural mortality for age classes 1 and older is assumed to be independent of population size. Fishing mortality is assumed to vary with the number and weight of fish available to the fishery. Age class 0 is divided into six life stages. The probability of survival for age class 0 is estimated considering both density-independent mortality (natural and power plant) and density-dependent mortality for each life stage. Two types of density-dependent mortality are included. These are cannibalism of each life stage by older age classes and intra-life-stage competition.
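
    The core of such a life-cycle model is an age-structured system of difference equations. The sketch below shows that skeleton with made-up fecundity and survival values; the report's actual model additionally splits age class 0 into six life stages with density-dependent mortality and includes fishing mortality.

        import numpy as np

        fecundity = np.array([0.0, 0.0, 50.0, 120.0, 200.0])   # eggs per female by age class
        survival  = np.array([0.001, 0.4, 0.6, 0.7, 0.0])      # survival from age a to a+1
        n = np.array([1e6, 400.0, 150.0, 80.0, 40.0])          # initial abundance by age class

        for year in range(20):
            recruits = 0.5 * np.sum(fecundity * n)             # 0.5 = assumed fraction female
            n[1:] = survival[:-1] * n[:-1]                      # advance each age class one year
            n[0] = recruits                                     # new age-0 cohort
        print(n)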

  12. On the Hardness of Subset Sum Problem from Different Intervals

    NASA Astrophysics Data System (ADS)

    Kogure, Jun; Kunihiro, Noboru; Yamamoto, Hirosuke

    The subset sum problem, often called the knapsack problem, is known to be NP-hard, and several cryptosystems are based on it. Assuming an oracle for the shortest vector problem in lattices, the low-density attack algorithm by Lagarias and Odlyzko and its variants solve the subset sum problem efficiently when the “density” of the given problem is smaller than some threshold. When the density is defined in the context of knapsack-type cryptosystems, the weights are usually assumed to be chosen uniformly at random from the same interval. In this paper, we focus on general subset sum problems, where this assumption may not hold. We assume that the weights are chosen from different intervals, and we analyze the effect on the success probability of the above algorithms both theoretically and experimentally. A possible application of our result in the context of knapsack cryptosystems is the security analysis when the data size of public keys is reduced.
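
    For reference, the density of a subset sum instance with weights a_1, ..., a_n is conventionally defined (following Lagarias and Odlyzko) as in the formula below; the thresholds quoted are the standard ones from the literature and may be stated slightly differently in the paper.

        d = \frac{n}{\log_2 \max_i a_i}

    With an oracle for the shortest vector problem, the Lagarias-Odlyzko reduction succeeds for almost all instances with d < 0.6463, and the Coster et al. variant extends this to d < 0.9408.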

  13. Dietary macronutrients and food consumption as determinants of long-term weight change in adult populations: a systematic literature review

    PubMed Central

    Fogelholm, Mikael; Anderssen, Sigmund; Gunnarsdottir, Ingibjörg; Lahti-Koski, Marjaana

    2012-01-01

    This systematic literature review examined the role of dietary macronutrient composition, food consumption and dietary patterns in predicting weight or waist circumference (WC) change, with and without prior weight reduction. The literature search covered year 2000 and onwards. Prospective cohort studies, case–control studies and interventions were included. The studies had adult (18–70 y), mostly Caucasian participants. Out of a total of 1,517 abstracts, 119 full papers were identified as potentially relevant. After a careful scrutiny, 50 papers were quality graded as A (highest), B or C. Forty-three papers with grading A or B were included in evidence grading, which was done separately for all exposure-outcome combinations. The grade of evidence was classified as convincing, probable, suggestive or no conclusion. We found probable evidence for high intake of dietary fibre and nuts predicting less weight gain, and for high intake of meat in predicting more weight gain. Suggestive evidence was found for a protective role against increasing weight from whole grains, cereal fibre, high-fat dairy products and high scores in an index describing a prudent dietary pattern. Likewise, there was suggestive evidence for both fibre and fruit intake in protection against larger increases in WC. Also suggestive evidence was found for high intake of refined grains, and sweets and desserts in predicting more weight gain, and for refined (white) bread and high energy density in predicting larger increases in WC. The results suggested that the proportion of macronutrients in the diet was not important in predicting changes in weight or WC. In contrast, plenty of fibre-rich foods and dairy products, and less refined grains, meat and sugar-rich foods and drinks were associated with less weight gain in prospective cohort studies. The results on the role of dietary macronutrient composition in prevention of weight regain (after prior weight loss) were inconclusive. PMID:22893781

  14. Modeling pore corrosion in normally open gold- plated copper connectors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien

    2008-09-01

    The goal of this study is to model the electrical response of gold plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.

  15. Bayesian anomaly detection in monitoring data applying relevance vector machine

    NASA Astrophysics Data System (ADS)

    Saito, Tomoo

    2011-04-01

    A method for automatically classifying monitoring data into two categories, normal and anomalous, is developed in order to remove anomalous data from the enormous amount of monitoring data. The relevance vector machine (RVM) is applied to a probabilistic discriminative model with basis functions and weight parameters whose posterior PDF (probability density function), conditional on the learning data set, is given by Bayes' theorem. The proposed framework is applied to actual monitoring data sets containing some anomalous data collected at two buildings in Tokyo, Japan, which shows that the trained models discriminate anomalous data from normal data very clearly, giving high probabilities of being normal to normal data and low probabilities of being normal to anomalous data.

  16. An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics

    NASA Astrophysics Data System (ADS)

    Turkington, Bruce

    2013-08-01

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.

  17. A summary of transition probabilities for atomic absorption lines formed in low-density clouds

    NASA Technical Reports Server (NTRS)

    Morton, D. C.; Smith, W. H.

    1973-01-01

    A table of wavelengths, statistical weights, and excitation energies is given for 944 atomic spectral lines in 221 multiplets whose lower energy levels lie below 0.275 eV. Oscillator strengths were adopted for 635 lines in 155 multiplets from the available experimental and theoretical determinations. Radiation damping constants also were derived for most of these lines. This table contains the lines most likely to be observed in absorption in interstellar clouds, circumstellar shells, and the clouds in the direction of quasars where neither the particle density nor the radiation density is high enough to populate the higher levels. All ions of all elements from hydrogen to zinc are included which have resonance lines longward of 912 A, although a number of weaker lines of neutrals and first ions have been omitted.

  18. Comparison of the excretion of sodium and meglumine diatrizoate at urography with simulated compression: an experimental study in the rat.

    PubMed

    Owman, T

    1981-07-01

    In the experimental model in the rabbit, the excretion of sodium diatrizoate and meglumine diatrizoate was compared. Urographic density, estimated from renal pelvic volume calculated according to previous experiments (Owman 1978; Owman & Olin 1980) and urinary iodine concentration, is suggested to be more accurate than mere determination of urine iodine concentration and diuresis when evaluating and comparing urographic contrast media experimentally. More reliable dose optima are probably found by calculating density rather than by determining urine concentrations. Of the media examined in this investigation, the sodium salt of diatrizoate was not superior to the meglumine salt at doses up to 320 mg I/kg body weight, while at higher doses sodium diatrizoate gave higher urinary iodine concentrations and higher estimated density.

  19. The effect of incremental changes in phonotactic probability and neighborhood density on word learning by preschool children

    PubMed Central

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multilevel modeling. Results A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density. Here, word learning improved as density increased across all quartiles. Conclusion Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005

  20. A Cross-Sectional Comparison of the Effects of Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.

    2010-01-01

    Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…

  1. Agricultural pesticide use in California: pesticide prioritization, use densities, and population distributions for a childhood cancer study.

    PubMed Central

    Gunier, R B; Harnly, M E; Reynolds, P; Hertz, A; Von Behren, J

    2001-01-01

    Several studies have suggested an association between childhood cancer and pesticide exposure. California leads the nation in agricultural pesticide use. A mandatory reporting system for all agricultural pesticide use in the state provides information on the active ingredient, amount used, and location. We calculated pesticide use density to quantify agricultural pesticide use in California block groups for a childhood cancer study. Pesticides with similar toxicologic properties (probable carcinogens, possible carcinogens, genotoxic compounds, and developmental or reproductive toxicants) were grouped together for this analysis. To prioritize pesticides, we weighted pesticide use by the carcinogenic and exposure potential of each compound. The top-ranking individual pesticides were propargite, methyl bromide, and trifluralin. We used a geographic information system to calculate pesticide use density in pounds per square mile of total land area for all United States census-block groups in the state. Most block groups (77%) averaged less than 1 pound per square mile of use for 1991-1994 for pesticides classified as probable human carcinogens. However, at the high end of use density (> 90th percentile), there were 493 block groups with more than 569 pounds per square mile. Approximately 170,000 children under 15 years of age were living in these block groups in 1990. The distribution of agricultural pesticide use and number of potentially exposed children suggests that pesticide use density would be of value for a study of childhood cancer. PMID:11689348

  2. Spatial and Temporal Analysis of Eruption Locations, Compositions, and Styles in Northern Harrat Rahat, Kingdom of Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Dietterich, H. R.; Stelten, M. E.; Downs, D. T.; Champion, D. E.

    2017-12-01

    Harrat Rahat is a predominantly mafic, 20,000 km2 volcanic field in western Saudi Arabia with an elongate volcanic axis extending 310 km north-south. Prior mapping suggests that the youngest eruptions were concentrated in northernmost Harrat Rahat, where our new geologic mapping and geochronology reveal >300 eruptive vents with ages ranging from 1.2 Ma to a historic eruption in 1256 CE. Eruption compositions and styles vary spatially and temporally within the volcanic field, where extensive alkali basaltic lavas dominate, but more evolved compositions erupted episodically as clusters of trachytic domes and small-volume pyroclastic flows. Analysis of vent locations, compositions, and eruption styles shows the evolution of the volcanic field and allows assessment of the spatio-temporal probabilities of vent opening and eruption styles. We link individual vents and fissures to eruptions and their deposits using field relations, petrography, geochemistry, paleomagnetism, and 40Ar/39Ar and 36Cl geochronology. Eruption volumes and deposit extents are derived from geologic mapping and topographic analysis. Spatial density analysis with kernel density estimation captures vent densities of up to 0.2 %/km2 along the north-south running volcanic axis, decaying quickly away to the east but reaching a second, lower high along a secondary axis to the west. Temporal trends show slight younging of mafic eruption ages to the north in the past 300 ka, as well as clustered eruptions of trachytes over the past 150 ka. Vent locations, timing, and composition are integrated through spatial probability weighted by eruption age for each compositional range to produce spatio-temporal models of vent opening probability. These show that the next mafic eruption is most probable within the north end of the main (eastern) volcanic axis, whereas more evolved compositions are most likely to erupt within the trachytic centers further to the south. These vent opening probabilities, combined with corresponding eruption properties, can be used as the basis for lava flow and tephra fall hazard maps.
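
    A stripped-down version of the spatial weighting idea is sketched below: a Gaussian kernel density estimate of vent locations in which younger vents receive larger weights. The coordinates, ages, and the exponential age weighting are synthetic assumptions standing in for the mapped vent catalogue and the study's actual weighting scheme.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(1)
        easting  = rng.normal(0.0, 5.0, size=300)      # vent coordinates (km), synthetic
        northing = rng.normal(0.0, 20.0, size=300)
        age_ka   = rng.uniform(0.0, 1200.0, size=300)  # eruption ages (ka), synthetic

        weights = np.exp(-age_ka / 300.0)              # younger vents count more
        weights /= weights.sum()

        kde = gaussian_kde(np.vstack([easting, northing]), weights=weights)
        grid = np.vstack([np.zeros(5), np.linspace(-40, 40, 5)])
        print(kde(grid))                               # relative vent-opening density along the axis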

  3. Music-evoked incidental happiness modulates probability weighting during risky lottery choices

    PubMed Central

    Schulreich, Stefan; Heussen, Yana G.; Gerhardt, Holger; Mohr, Peter N. C.; Binkofski, Ferdinand C.; Koelsch, Stefan; Heekeren, Hauke R.

    2014-01-01

    We often make decisions with uncertain consequences. The outcomes of the choices we make are usually not perfectly predictable but probabilistic, and the probabilities can be known or unknown. Probability judgments, i.e., the assessment of unknown probabilities, can be influenced by evoked emotional states. This suggests that also the weighting of known probabilities in decision making under risk might be influenced by incidental emotions, i.e., emotions unrelated to the judgments and decisions at issue. Probability weighting describes the transformation of probabilities into subjective decision weights for outcomes and is one of the central components of cumulative prospect theory (CPT) that determine risk attitudes. We hypothesized that music-evoked emotions would modulate risk attitudes in the gain domain and in particular probability weighting. Our experiment featured a within-subject design consisting of four conditions in separate sessions. In each condition, the 41 participants listened to a different kind of music—happy, sad, or no music, or sequences of random tones—and performed a repeated pairwise lottery choice task. We found that participants chose the riskier lotteries significantly more often in the “happy” than in the “sad” and “random tones” conditions. Via structural regressions based on CPT, we found that the observed changes in participants' choices can be attributed to changes in the elevation parameter of the probability weighting function: in the “happy” condition, participants showed significantly higher decision weights associated with the larger payoffs than in the “sad” and “random tones” conditions. Moreover, elevation correlated positively with self-reported music-evoked happiness. Thus, our experimental results provide evidence in favor of a causal effect of incidental happiness on risk attitudes that can be explained by changes in probability weighting. PMID:24432007

  4. Music-evoked incidental happiness modulates probability weighting during risky lottery choices.

    PubMed

    Schulreich, Stefan; Heussen, Yana G; Gerhardt, Holger; Mohr, Peter N C; Binkofski, Ferdinand C; Koelsch, Stefan; Heekeren, Hauke R

    2014-01-07

    We often make decisions with uncertain consequences. The outcomes of the choices we make are usually not perfectly predictable but probabilistic, and the probabilities can be known or unknown. Probability judgments, i.e., the assessment of unknown probabilities, can be influenced by evoked emotional states. This suggests that also the weighting of known probabilities in decision making under risk might be influenced by incidental emotions, i.e., emotions unrelated to the judgments and decisions at issue. Probability weighting describes the transformation of probabilities into subjective decision weights for outcomes and is one of the central components of cumulative prospect theory (CPT) that determine risk attitudes. We hypothesized that music-evoked emotions would modulate risk attitudes in the gain domain and in particular probability weighting. Our experiment featured a within-subject design consisting of four conditions in separate sessions. In each condition, the 41 participants listened to a different kind of music-happy, sad, or no music, or sequences of random tones-and performed a repeated pairwise lottery choice task. We found that participants chose the riskier lotteries significantly more often in the "happy" than in the "sad" and "random tones" conditions. Via structural regressions based on CPT, we found that the observed changes in participants' choices can be attributed to changes in the elevation parameter of the probability weighting function: in the "happy" condition, participants showed significantly higher decision weights associated with the larger payoffs than in the "sad" and "random tones" conditions. Moreover, elevation correlated positively with self-reported music-evoked happiness. Thus, our experimental results provide evidence in favor of a causal effect of incidental happiness on risk attitudes that can be explained by changes in probability weighting.

  5. Statistical methods for incomplete data: Some results on model misspecification.

    PubMed

    McIsaac, Michael; Cook, R J

    2017-02-01

    Inverse probability weighted estimating equations and multiple imputation are two of the most studied frameworks for dealing with incomplete data in clinical and epidemiological research. We examine the limiting behaviour of estimators arising from inverse probability weighted estimating equations, augmented inverse probability weighted estimating equations and multiple imputation when the requisite auxiliary models are misspecified. We compute limiting values for settings involving binary responses and covariates and illustrate the effects of model misspecification using simulations based on data from a breast cancer clinical trial. We demonstrate that, even when both auxiliary models are misspecified, the asymptotic biases of double-robust augmented inverse probability weighted estimators are often smaller than the asymptotic biases of estimators arising from complete-case analyses, inverse probability weighting or multiple imputation. We further demonstrate that use of inverse probability weighting or multiple imputation with slightly misspecified auxiliary models can actually result in greater asymptotic bias than the use of naïve, complete case analyses. These asymptotic results are shown to be consistent with empirical results from simulation studies.
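
    The following sketch illustrates, on simulated data, the inverse probability weighting idea discussed above: complete cases are reweighted by the inverse of their estimated response probability, which removes the bias of the naive complete-case mean when missingness depends on an observed covariate. The data-generating choices and response model are illustrative, not those of the breast cancer trial analysed in the paper.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        x = rng.normal(size=5000)                          # fully observed covariate
        y = 1.0 + 2.0 * x + rng.normal(size=5000)          # outcome of interest, E[y] = 1
        p_obs = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))     # response probability depends on x (MAR)
        r = rng.binomial(1, p_obs)                         # r = 1 if y is observed

        # Fit the response model and weight complete cases by 1 / P(observed | x).
        fit = sm.GLM(r, sm.add_constant(x), family=sm.families.Binomial()).fit()
        pi_hat = fit.predict(sm.add_constant(x))
        w = r / pi_hat

        print("complete-case mean:", y[r == 1].mean())           # biased under MAR
        print("IPW mean:          ", np.sum(w * y) / np.sum(w))  # approximately 1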

  6. Estimating the risk of Amazonian forest dieback.

    PubMed

    Rammig, Anja; Jupp, Tim; Thonicke, Kirsten; Tietjen, Britta; Heinke, Jens; Ostberg, Sebastian; Lucht, Wolfgang; Cramer, Wolfgang; Cox, Peter

    2010-08-01

    Climate change will very likely affect most forests in Amazonia during the course of the 21st century, but the direction and intensity of the change are uncertain, in part because of differences in rainfall projections. In order to constrain this uncertainty, we estimate the probability for biomass change in Amazonia on the basis of rainfall projections that are weighted by climate model performance for current conditions. We estimate the risk of forest dieback by using weighted rainfall projections from 24 general circulation models (GCMs) to create probability density functions (PDFs) for future forest biomass changes simulated by a dynamic vegetation model (LPJmL). Our probabilistic assessment of biomass change suggests a likely shift towards increasing biomass compared with nonweighted results. Biomass estimates range between a gain of 6.2 and a loss of 2.7 kg carbon m(-2) for the Amazon region, depending on the strength of CO(2) fertilization. The uncertainty associated with the long-term effect of CO(2) is much larger than that associated with precipitation change. This underlines the importance of reducing uncertainties in the direct effects of CO(2) on tropical ecosystems.

  7. Density, aggregation, and body size of northern pikeminnow preying on juvenile salmonids in a large river

    USGS Publications Warehouse

    Petersen, J.H.

    2001-01-01

    Predation by northern pikeminnow Ptychocheilus oregonensis on juvenile salmonids Oncorhynchus spp. occurred probably during brief feeding bouts since diets were either dominated by salmonids (>80% by weight), or contained other prey types and few salmonids (<5%). In samples where salmonids had been consumed, large rather than small predators were more likely to have captured salmonids. Transects with higher catch-per-unit of effort of predators also had higher incidences of salmonids in predator guts. Predators in two of three reservoir areas were distributed more contagiously if they had preyed recently on salmonids. Spatial and temporal patchiness of salmonid prey may be generating differences in local density, aggregation, and body size of their predators in this large river.

  8. Density functional study for crystalline structures and electronic properties of Si1-xSnx binary alloys

    NASA Astrophysics Data System (ADS)

    Nagae, Yuki; Kurosawa, Masashi; Shibayama, Shigehisa; Araidai, Masaaki; Sakashita, Mitsuo; Nakatsuka, Osamu; Shiraishi, Kenji; Zaima, Shigeaki

    2016-08-01

    We have carried out density functional theory (DFT) calculations for the Si1-xSnx alloy and investigated the effect of the displacement of Si and Sn atoms with strain relaxation on the lattice constant and E-k dispersion. We calculated the formation probabilities for all atomic configurations of Si1-xSnx according to the Boltzmann distribution. The average lattice constant and E-k dispersion were weighted by the formation probability of each configuration of Si1-xSnx. We estimated the displacement of Si and Sn atoms from the initial tetrahedral site in the Si1-xSnx unit cell considering structural relaxation under hydrostatic pressure, and we found that the breaking of the degenerate electronic levels of the valence band edge could be caused by the breaking of the tetrahedral symmetry. We also calculated the E-k dispersion of the Si1-xSnx alloy by the DFT+U method and found that a Sn content above 50% would be required for the indirect-direct transition.

  9. Dopaminergic Drug Effects on Probability Weighting during Risky Decision Making.

    PubMed

    Ojala, Karita E; Janssen, Lieneke K; Hashemi, Mahur M; Timmer, Monique H M; Geurts, Dirk E M; Ter Huurne, Niels P; Cools, Roshan; Sescousse, Guillaume

    2018-01-01

    Dopamine has been associated with risky decision-making, as well as with pathological gambling, a behavioral addiction characterized by excessive risk-taking behavior. However, the specific mechanisms through which dopamine might act to foster risk-taking and pathological gambling remain elusive. Here we test the hypothesis that this might be achieved, in part, via modulation of subjective probability weighting during decision making. Human healthy controls (n = 21) and pathological gamblers (n = 16) played a decision-making task involving choices between sure monetary options and risky gambles both in the gain and loss domains. Each participant played the task twice, either under placebo or the dopamine D2/D3 receptor antagonist sulpiride, in a double-blind counterbalanced design. A prospect theory modelling approach was used to estimate subjective probability weighting and sensitivity to monetary outcomes. Consistent with prospect theory, we found that participants presented a distortion in the subjective weighting of probabilities, i.e., they overweighted low probabilities and underweighted moderate to high probabilities, both in the gain and loss domains. Compared with placebo, sulpiride attenuated this distortion in the gain domain. Across drugs, the groups did not differ in their probability weighting, although gamblers consistently underweighted losing probabilities in the placebo condition. Overall, our results reveal that dopamine D2/D3 receptor antagonism modulates the subjective weighting of probabilities in the gain domain, in the direction of more objective, economically rational decision making.
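
    To make the notion of probability weighting concrete, the sketch below evaluates one common one-parameter weighting function, the Prelec (1998) form; it is given here only as an illustration of the inverse-S distortion described in the abstract and is not necessarily the parameterization fitted in this study, which may include a separate elevation parameter.

        import numpy as np

        def prelec_weight(p, alpha):
            """w(p) = exp(-(-ln p)^alpha); alpha < 1 gives the inverse-S distortion."""
            return np.exp(-(-np.log(p)) ** alpha)

        p = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
        print(prelec_weight(p, alpha=0.65))   # overweights small p, underweights large p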

  10. A spatially explicit model for an Allee effect: why wolves recolonize so slowly in Greater Yellowstone.

    PubMed

    Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A

    2006-11-01

    A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability in finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow recolonization rate.

  11. A Novel Strategy for Numerical Simulation of High-speed Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Sheikhi, M. R. H.; Drozda, T. G.; Givi, P.

    2003-01-01

    The objective of this research is to improve and implement the filtered mass density function (FDF) methodology for large eddy simulation (LES) of high-speed reacting turbulent flows. We have just completed Year 1 of this research. This is the Final Report on our activities during the period January 1, 2003 to December 31, 2003. In the efforts during the past year, LES is conducted of the Sandia Flame D, which is a turbulent piloted nonpremixed methane jet flame. The subgrid scale (SGS) closure is based on the scalar filtered mass density function (SFMDF) methodology. The SFMDF is basically the mass-weighted probability density function (PDF) of the SGS scalar quantities. For this flame (which exhibits little local extinction), a simple flamelet model is used to relate the instantaneous composition to the mixture fraction. The modelled SFMDF transport equation is solved by a hybrid finite-difference/Monte Carlo scheme.

  12. Simulations of Turbulent Momentum and Scalar Transport in Non-Reacting Confined Swirling Coaxial Jets

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey; Moder, Jeffrey P.

    2015-01-01

    This paper presents the numerical simulations of confined three-dimensional coaxial water jets. The objectives are to validate the newly proposed nonlinear turbulence models of momentum and scalar transport, and to evaluate the newly introduced scalar APDF and DWFDF equation along with its Eulerian implementation in the National Combustion Code (NCC). Simulations conducted include the steady RANS, the unsteady RANS (URANS), and the time-filtered Navier-Stokes (TFNS); both without and with invoking the APDF or DWFDF equation. When the APDF (ensemble averaged probability density function) or DWFDF (density weighted filtered density function) equation is invoked, the simulations are of a hybrid nature, i.e., the transport equations of energy and species are replaced by the APDF or DWFDF equation. Results of simulations are compared with the available experimental data. Some positive impacts of the nonlinear turbulence models and the Eulerian scalar APDF and DWFDF approach are observed.

  13. The Nonsubsampled Contourlet Transform Based Statistical Medical Image Fusion Using Generalized Gaussian Density

    PubMed Central

    Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie

    2015-01-01

    We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using generalized Gaussian density (GGD), as well as the similarity measurement of two subbands is accurately computed by Jensen-Shannon divergence of two GGDs. To preserve more useful information from source images, the new fusion rules are developed to combine the subbands with the varied frequencies. That is, the low frequency subbands are fused by utilizing two activity measures based on the regional standard deviation and Shannon entropy and the high frequency subbands are merged together via weight maps which are determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms the conventional NSCT based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871
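
    As a rough sketch of the similarity measure described above, the code below fits generalized Gaussian densities to two sets of synthetic subband coefficients and evaluates the Jensen-Shannon divergence between the fitted densities by numerical quadrature; the fitting routine, integration limits, and coefficient data are assumptions for illustration and may differ from the estimator used in the paper.

        import numpy as np
        from scipy.stats import gennorm
        from scipy.integrate import quad

        rng = np.random.default_rng(3)
        coeff_a = gennorm.rvs(beta=0.8, scale=1.0, size=20000, random_state=rng)
        coeff_b = gennorm.rvs(beta=1.6, scale=2.0, size=20000, random_state=rng)

        beta_a, loc_a, sc_a = gennorm.fit(coeff_a, floc=0)   # GGD shape/scale, zero mean fixed
        beta_b, loc_b, sc_b = gennorm.fit(coeff_b, floc=0)

        def js_divergence(pdf_p, pdf_q, lo=-50.0, hi=50.0):
            def integrand(x, f, g):
                p, q = f(x), g(x)
                m = 0.5 * (p + q)
                out = 0.0
                if p > 0:
                    out += 0.5 * p * np.log(p / m)
                if q > 0:
                    out += 0.5 * q * np.log(q / m)
                return out
            val, _ = quad(integrand, lo, hi, args=(pdf_p, pdf_q), limit=200)
            return val

        p = lambda x: gennorm.pdf(x, beta_a, loc_a, sc_a)
        q = lambda x: gennorm.pdf(x, beta_b, loc_b, sc_b)
        print(js_divergence(p, q))   # 0 for identical subbands, at most ln(2)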

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conn, A. R.; Parker, Q. A.; Zucker, D. B.

    In 'A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude (Part I)', a new technique was introduced for obtaining distances using the tip of the red giant branch (TRGB) standard candle. Here we describe a useful complement to the technique with the potential to further reduce the uncertainty in our distance measurements by incorporating a matched-filter weighting scheme into the model likelihood calculations. In this scheme, stars are weighted according to their probability of being true object members. We then re-test our modified algorithm using random-realization artificial data to verify the validity of the generated posterior probability distributions (PPDs) and proceed to apply the algorithm to the satellite system of M31, culminating in a three-dimensional view of the system. Further to the distributions thus obtained, we apply a satellite-specific prior on the satellite distances to weight the resulting distance posterior distributions, based on the halo density profile. Thus in a single publication, using a single method, a comprehensive coverage of the distances to the companion galaxies of M31 is presented, encompassing the dwarf spheroidals Andromedas I-III, V, IX-XXVII, and XXX along with NGC 147, NGC 185, M33, and M31 itself. Of these, the distances to Andromedas XXIV-XXVII and Andromeda XXX have never before been derived using the TRGB. Object distances are determined from high-resolution tip magnitude posterior distributions generated using the Markov Chain Monte Carlo technique and associated sampling of these distributions to take into account uncertainties in foreground extinction and the absolute magnitude of the TRGB as well as photometric errors. The distance PPDs obtained for each object both with and without the aforementioned prior are made available to the reader in tabular form. The large object coverage takes advantage of the unprecedented size and photometric depth of the Pan-Andromeda Archaeological Survey. Finally, a preliminary investigation into the satellite density distribution within the halo is made using the obtained distance distributions. For simplicity, this investigation assumes a single power law for the density as a function of radius, with the slope of this power law examined for several subsets of the entire satellite sample.

  15. Effects of increased collagen-matrix density on the mechanical properties and in vivo absorbability of hydroxyapatite-collagen composites as artificial bone materials.

    PubMed

    Yunoki, Shunji; Sugiura, Hiroaki; Ikoma, Toshiyuki; Kondo, Eiji; Yasuda, Kazunori; Tanaka, Junzo

    2011-02-01

    The aim of this study was to evaluate the effects of increased collagen-matrix density on the mechanical properties and in vivo absorbability of porous hydroxyapatite (HAp)-collagen composites as artificial bone materials. Seven types of porous HAp-collagen composites were prepared from HAp nanocrystals and dense collagen fibrils. Their densities and HAp/collagen weight ratios ranged from 122 to 331 mg cm⁻³ and from 20/80 to 80/20, respectively. The flexural modulus and strength increased with an increase in density, reaching 2.46 ± 0.48 and 0.651 ± 0.103 MPa, respectively. The porous composites with a higher collagen-matrix density exhibited much higher mechanical properties at the same densities, suggesting that increasing the collagen-matrix density is an effective way of improving the mechanical properties. It was also suggested that other structural factors in addition to collagen-matrix density are required to achieve bone-like mechanical properties. The in vivo absorbability of the composites was investigated in bone defects of rabbit femurs, demonstrating that the absorption rate decreased with increases in the composite density. An exhaustive increase in density is probably limited by decreases in absorbability as artificial bones.

  16. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    PubMed

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
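
    A minimal sketch of one of the compared strategies, stabilized inverse probability weights built from a normal conditional density and used in a weighted logistic regression; the data-generating model and variable names are illustrative only.

```python
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
confounder = rng.normal(size=n)
exposure = 0.5 * confounder + rng.normal(size=n)                    # continuous exposure
outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.3 * exposure + 0.4 * confounder))))

# Denominator: conditional density of exposure given the confounder (normal model)
denom_fit = sm.OLS(exposure, sm.add_constant(confounder)).fit()
denom_dens = norm.pdf(exposure, loc=denom_fit.fittedvalues, scale=np.sqrt(denom_fit.scale))

# Numerator: marginal density of the exposure (stabilizes the weights)
num_dens = norm.pdf(exposure, loc=exposure.mean(), scale=exposure.std())
weights = num_dens / denom_dens

# Weighted logistic regression for the marginal effect of a one-unit exposure increase
msm = sm.GLM(outcome, sm.add_constant(exposure),
             family=sm.families.Binomial(), freq_weights=weights).fit()
print("marginal odds ratio:", np.exp(msm.params[1]))
```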

  17. Dopaminergic Drug Effects on Probability Weighting during Risky Decision Making

    PubMed Central

    Timmer, Monique H. M.; ter Huurne, Niels P.

    2018-01-01

    Abstract Dopamine has been associated with risky decision-making, as well as with pathological gambling, a behavioral addiction characterized by excessive risk-taking behavior. However, the specific mechanisms through which dopamine might act to foster risk-taking and pathological gambling remain elusive. Here we test the hypothesis that this might be achieved, in part, via modulation of subjective probability weighting during decision making. Human healthy controls (n = 21) and pathological gamblers (n = 16) played a decision-making task involving choices between sure monetary options and risky gambles both in the gain and loss domains. Each participant played the task twice, either under placebo or the dopamine D2/D3 receptor antagonist sulpiride, in a double-blind counterbalanced design. A prospect theory modelling approach was used to estimate subjective probability weighting and sensitivity to monetary outcomes. Consistent with prospect theory, we found that participants presented a distortion in the subjective weighting of probabilities, i.e., they overweighted low probabilities and underweighted moderate to high probabilities, both in the gain and loss domains. Compared with placebo, sulpiride attenuated this distortion in the gain domain. Across drugs, the groups did not differ in their probability weighting, although gamblers consistently underweighted losing probabilities in the placebo condition. Overall, our results reveal that dopamine D2/D3 receptor antagonism modulates the subjective weighting of probabilities in the gain domain, in the direction of more objective, economically rational decision making. PMID:29632870
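
    A short sketch of an inverted-S probability weighting function of the kind estimated here; the one-parameter Tversky-Kahneman form and the parameter values below are illustrative assumptions, not the fitted model from this study.

```python
import numpy as np

def tk_weight(p, gamma):
    """Tversky-Kahneman (1992) one-parameter probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

p = np.linspace(0.01, 0.99, 9)
for gamma in (0.6, 0.8, 1.0):          # gamma < 1 produces the inverted-S distortion
    print(f"gamma={gamma}:", np.round(tk_weight(p, gamma), 3))
# With gamma < 1, w(p) > p for small p (overweighting) and w(p) < p for large p.
```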

  18. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Manabu Hagiwara et al., 2007 presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.

  19. Quantum and classical dynamics of water dissociation on Ni(111): A test of the site-averaging model in dissociative chemisorption of polyatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Bin; Department of Chemical Physics, University of Science and Technology of China, Hefei 230026; Guo, Hua, E-mail: hguo@unm.edu

    Recently, we reported the first highly accurate nine-dimensional global potential energy surface (PES) for water interacting with a rigid Ni(111) surface, built on a large number of density functional theory points [B. Jiang and H. Guo, Phys. Rev. Lett. 114, 166101 (2015)]. Here, we investigate site-specific reaction probabilities on this PES using a quasi-seven-dimensional quantum dynamical model. It is shown that the site-specific reactivity is largely controlled by the topography of the PES instead of the barrier height alone, underscoring the importance of multidimensional dynamics. In addition, the full-dimensional dissociation probability is estimated by averaging fixed-site reaction probabilities with appropriate weights. To validate this model and gain insights into the dynamics, additional quasi-classical trajectory calculations in both full and reduced dimensions have also been performed and important dynamical factors such as the steering effect are discussed.
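
    A minimal sketch of the site-averaging idea: the full-dimensional dissociation probability is approximated as a weighted average of fixed-site reaction probabilities. The site probabilities and weights below are placeholders, not values from the paper.

```python
# Hypothetical fixed-site reaction probabilities at a given incidence energy
site_probs = {"top": 0.12, "bridge": 0.05, "fcc": 0.02, "hcp": 0.03}

# Illustrative weights, e.g. equal-area sampling of the surface unit cell
site_weights = {"top": 0.25, "bridge": 0.25, "fcc": 0.25, "hcp": 0.25}

p_full = sum(site_weights[s] * site_probs[s] for s in site_probs)
print("site-averaged dissociation probability:", p_full)
```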

  20. Force Density Function Relationships in 2-D Granular Media

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.

    2004-01-01

    An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms

  1. Resampling probability values for weighted kappa with multiple raters.

    PubMed

    Mielke, Paul W; Berry, Kenneth J; Johnston, Janis E

    2008-04-01

    A new procedure to compute weighted kappa with multiple raters is described. A resampling procedure to compute approximate probability values for weighted kappa with multiple raters is presented. Applications of weighted kappa are illustrated with an example analysis of classifications by three independent raters.
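
    A hedged sketch of a resampling procedure of this kind: weighted kappa is averaged over rater pairs (one common convention, assumed here rather than taken from the paper), and an approximate probability value is obtained by permuting each rater's labels independently.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def multi_rater_weighted_kappa(ratings):
    """Average pairwise linear-weighted kappa over all rater pairs (one convention)."""
    pairs = list(combinations(range(ratings.shape[1]), 2))
    return np.mean([cohen_kappa_score(ratings[:, i], ratings[:, j], weights="linear")
                    for i, j in pairs])

rng = np.random.default_rng(2)
ratings = rng.integers(1, 5, size=(40, 3))          # 40 subjects, 3 raters, ordinal 1-4
observed = multi_rater_weighted_kappa(ratings)

# Resampling reference distribution: permute each rater's labels independently
n_resamples = 2000
null = np.empty(n_resamples)
for b in range(n_resamples):
    shuffled = np.column_stack([rng.permutation(ratings[:, j])
                                for j in range(ratings.shape[1])])
    null[b] = multi_rater_weighted_kappa(shuffled)

p_value = np.mean(null >= observed)
print(f"kappa = {observed:.3f}, resampling p = {p_value:.4f}")
```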

  2. Uncertainty plus Prior Equals Rational Bias: An Intuitive Bayesian Probability Weighting Function

    ERIC Educational Resources Information Center

    Fennell, John; Baddeley, Roland

    2012-01-01

    Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several…

  3. Tracking Multiple Video Targets with an Improved GM-PHD Tracker

    PubMed Central

    Zhou, Xiaolong; Yu, Hui; Liu, Honghai; Li, Youfu

    2015-01-01

    Tracking multiple moving targets from a video plays an important role in many vision-based robotic applications. In this paper, we propose an improved Gaussian mixture probability hypothesis density (GM-PHD) tracker with weight penalization to effectively and accurately track multiple moving targets from a video. First, an entropy-based birth intensity estimation method is incorporated to eliminate the false positives caused by noisy video data. Then, a weight-penalized method with multi-feature fusion is proposed to accurately track the targets in close movement. For targets without occlusion, a weight matrix that contains all updated weights between the predicted target states and the measurements is constructed, and a simple, but effective method based on total weight and predicted target state is proposed to search the ambiguous weights in the weight matrix. The ambiguous weights are then penalized according to the fused target features that include spatial-colour appearance, histogram of oriented gradient and target area and further re-normalized to form a new weight matrix. With this new weight matrix, the tracker can correctly track the targets in close movement without occlusion. For targets with occlusion, a robust game-theoretical method is used. Finally, the experiments conducted on various video scenarios validate the effectiveness of the proposed penalization method and show the superior performance of our tracker over the state of the art. PMID:26633422

  4. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
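
    A rough sketch of the underlying idea: use the value of the specified density at the draws as the test statistic, with its null distribution obtained by simulating draws from that same density. This illustrates the principle only, not the specific tests developed in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def density_test(draws, pdf, sampler, n_sim=5000):
    """P-value for the statistic min_i pdf(x_i): small values flag draws from low-density regions."""
    observed = pdf(draws).min()
    n = len(draws)
    sims = np.array([pdf(sampler(n)).min() for _ in range(n_sim)])
    return np.mean(sims <= observed)

pdf = norm.pdf
sampler = lambda n: rng.normal(size=n)

good = rng.normal(size=50)                                             # draws from the specified density
bad = rng.normal(size=50) + np.where(rng.random(50) < 0.1, 5.0, 0.0)   # 10% contaminated draws
print("p (correct draws):", density_test(good, pdf, sampler))
print("p (contaminated): ", density_test(bad, pdf, sampler))
```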

  5. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...

  6. Density Measurement System for Weights of 1 kg to 20 kg Using Hydrostatic Weighing

    NASA Astrophysics Data System (ADS)

    Lee, Yong Jae; Lee, Woo Gab; Abdurahman, Mohammed; Kim, Kwang Pyo

    This paper presents a density measurement system to determine density of weights from 1 kg to 20 kg using hydrostatic weighing. The system works based on Archimedes principle. The density of reference liquid is determined using this setup while determining the density of the test weight. Density sphere is used as standard density ball to determine density of the reference liquid. A new immersion pan is designed for dual purpose to carry the density sphere and the cylindrical test weight for weighing in liquid. Main parts of the setup are an electronic balance, a thermostat controlled liquid bath, reference weights designed for bottom weighing, dual purpose immersion pans and stepping motors to load and unload in weighing process. The results of density measurement will be evaluated as uncertainties for weights of 1 kg to 20 kg.
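
    A minimal sketch of the Archimedes relations behind hydrostatic weighing; the air-buoyancy and temperature corrections used in practice are ignored, and all numbers are illustrative.

```python
def liquid_density(sphere_mass_air, sphere_mass_liquid, sphere_volume):
    """Reference liquid density from a standard density sphere (Archimedes' principle)."""
    return (sphere_mass_air - sphere_mass_liquid) / sphere_volume

def weight_density(mass_air, mass_liquid, rho_liquid):
    """Test-weight density: rho = m_air * rho_liquid / (m_air - m_liquid)."""
    return mass_air * rho_liquid / (mass_air - mass_liquid)

# Illustrative numbers only (kg, m^3, kg/m^3)
rho_liq = liquid_density(0.500000, 0.450100, 6.35e-5)
print("reference liquid density:", rho_liq)
print("test weight density:", weight_density(1.000000, 0.901200, rho_liq))
```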

  7. The structure and statistics of interstellar turbulence

    NASA Astrophysics Data System (ADS)

    Kritsuk, A. G.; Ustyugov, S. D.; Norman, M. L.

    2017-06-01

    We explore the structure and statistics of multiphase, magnetized ISM turbulence in the local Milky Way by means of driven periodic box numerical MHD simulations. Using the higher order-accurate piecewise-parabolic method on a local stencil (PPML), we carry out a small parameter survey varying the mean magnetic field strength and density while fixing the rms velocity to observed values. We quantify numerous characteristics of the transient and steady-state turbulence, including its thermodynamics and phase structure, kinetic and magnetic energy power spectra, structure functions, and distribution functions of density, column density, pressure, and magnetic field strength. The simulations reproduce many observables of the local ISM, including molecular clouds, such as the ratio of turbulent to mean magnetic field at 100 pc scale, the mass and volume fractions of thermally stable Hi, the lognormal distribution of column densities, the mass-weighted distribution of thermal pressure, and the linewidth-size relationship for molecular clouds. Our models predict the shape of magnetic field probability density functions (PDFs), which are strongly non-Gaussian, and the relative alignment of magnetic field and density structures. Finally, our models show how the observed low rates of star formation per free-fall time are controlled by the multiphase thermodynamics and large-scale turbulence.

  8. Nitrogen oxide emission calculation for post-Panamax container ships by using engine operation power probability as weighting factor: A slow-steaming case.

    PubMed

    Cheng, Chih-Wen; Hua, Jian; Hwang, Daw-Shang

    2018-06-01

    In this study, the nitrogen oxide (NOx) emission factors and total NOx emissions of two groups of post-Panamax container ships operating on a long-term slow-steaming basis along Euro-Asian routes were calculated using both the probability density function of engine power levels and the NOx emission function. The main engines of the five sister ships in Group I satisfied the Tier I emission limit stipulated in MARPOL (International Convention for the Prevention of Pollution from Ships) Annex VI, and those in Group II satisfied the Tier II limit. The calculated NOx emission factors of the Group I and Group II ships were 14.73 and 17.85 g/kWh, respectively. The total NOx emissions of the Group II ships were determined to be 4.4% greater than those of the Group I ships. When the Tier II certification value was used to calculate the average total NOx emissions of Group II engines, the result was lower than the actual value by 21.9%. Although fuel consumption and carbon dioxide (CO2) emissions were increased by 1.76% because of slow steaming, the NOx emissions were markedly reduced by 17.2%. The proposed method is more effective and accurate than the NOx Technical Code 2008. Furthermore, it can be more appropriately applied to determine the NOx emissions of the international shipping inventory. Using the operating-power probability density function of diesel engines as the weighting factor, together with the NOx emission function obtained from test-bed measurements, makes the NOx emission calculation more accurate and practical. The proposed method is suitable for all types and purposes of diesel engines, irrespective of their operating power level. The method can be used to effectively determine the NOx emissions of international shipping and inventory applications and should be considered in determining the carbon tax to be imposed in the future.
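
    A small sketch of the weighting idea, reading the emission factor as an energy-weighted average of the power-dependent NOx emission function over the operating-power probability distribution; the bin values and emission figures below are invented for illustration.

```python
import numpy as np

# Hypothetical operating-power probability distribution (fraction of MCR, slow steaming)
power_bins = np.array([0.25, 0.35, 0.45, 0.55, 0.65, 0.75])     # bin centres
power_prob = np.array([0.10, 0.30, 0.35, 0.15, 0.07, 0.03])     # operating probability

# Hypothetical NOx emission function from test-bed data, g/kWh at each power level
nox_gkwh = np.array([19.5, 18.2, 16.8, 15.6, 14.9, 14.3])

# Energy-weighted emission factor: weight each level by probability * delivered power
delivered = power_prob * power_bins
ef = np.sum(nox_gkwh * delivered) / np.sum(delivered)
print("power-probability-weighted NOx emission factor [g/kWh]:", round(ef, 2))
```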

  9. Lost in search: (Mal-)adaptation to probabilistic decision environments in children and adults.

    PubMed

    Betsch, Tilmann; Lehmann, Anne; Lindow, Stefanie; Lang, Anna; Schoemann, Martin

    2016-02-01

    Adaptive decision making in probabilistic environments requires individuals to use probabilities as weights in predecisional information searches and/or when making subsequent choices. Within a child-friendly computerized environment (Mousekids), we tracked 205 children's (105 children 5-6 years of age and 100 children 9-10 years of age) and 103 adults' (age range: 21-22 years) search behaviors and decisions under different probability dispersions (.17; .33, .83 vs. .50, .67, .83) and constraint conditions (instructions to limit search: yes vs. no). All age groups limited their depth of search when instructed to do so and when probability dispersion was high (range: .17-.83). Unlike adults, children failed to use probabilities as weights for their searches, which were largely not systematic. When examining choices, however, elementary school children (unlike preschoolers) systematically used probabilities as weights in their decisions. This suggests that an intuitive understanding of probabilities and the capacity to use them as weights during integration is not a sufficient condition for applying simple selective search strategies that place one's focus on weight distributions. PsycINFO Database Record (c) 2016 APA, all rights reserved.

  10. Vegetable oil fortified feeds in the nutrition of very low birthweight babies.

    PubMed

    Vaidya, U V; Hegde, V M; Bhave, S A; Pandit, A N

    1992-12-01

    Two kinds of oils, (i) polyunsaturated fatty acid (PUFA)-rich safflower oil and (ii) medium-chain triglyceride (MCT)-rich coconut oil, were added to the feeds of 46 very low birthweight (VLBW) babies to see if such a supplementation is capable of enhancing their weight gain. Twenty-two well matched babies who received no fortification served as controls. The oil fortification raised the energy density of the feeds from approximately 67 kcal/dl to 79 kcal/dl. Feed volumes were restricted to a maximum of 200 ml/kg/day. The mean weight gain was highest and significantly higher than the controls in the coconut oil group (19.47 ± 8.67 g/day or 13.91 g/day). Increases in triceps skinfold thickness and serum triglycerides were also correspondingly higher in this group. The lead in weight gain in this group continued in the follow-up period (corrected age 3 months). In contrast, the higher weight gain in the safflower oil group (13.26 ± 6.58 g/day) compared with the controls (11.59 ± 5.33 g/day) failed to reach statistical significance, probably because of increased steatorrhea (stool fat 4+ in 50% of the samples tested). The differences between the two oil groups are presumably because of better absorption of MCT-rich coconut oil. However, individual variations in weight gain amongst the babies were wide, so that some control babies had higher growth rates than oil-fortified ones. The technique of oil fortification is fraught with dangers of intolerance, contamination and aspiration. Long term effects of such supplementation are largely unknown. (ABSTRACT TRUNCATED AT 250 WORDS)

  11. Doppler Temperature Coefficient Calculations Using Adjoint-Weighted Tallies and Continuous Energy Cross Sections in MCNP6

    NASA Astrophysics Data System (ADS)

    Gonzales, Matthew Alejandro

    The calculation of the thermal neutron Doppler temperature reactivity feedback coefficient, a key parameter in the design and safe operation of advanced reactors, using first order perturbation theory in continuous energy Monte Carlo codes is challenging as the continuous energy adjoint flux is not readily available. Traditional approaches of obtaining the adjoint flux attempt to invert the random walk process as well as require data corresponding to all temperatures and their respective temperature derivatives within the system in order to accurately calculate the Doppler temperature feedback. A new method has been developed using adjoint-weighted tallies and On-The-Fly (OTF) generated continuous energy cross sections within the Monte Carlo N-Particle (MCNP6) transport code. The adjoint-weighted tallies are generated during the continuous energy k-eigenvalue Monte Carlo calculation. The weighting is based upon the iterated fission probability interpretation of the adjoint flux, which is the steady state population in a critical nuclear reactor caused by a neutron introduced at that point in phase space. The adjoint-weighted tallies are produced in a forward calculation and do not require an inversion of the random walk. The OTF cross section database uses a high order functional expansion between points on a user-defined energy-temperature mesh in which the coefficients with respect to a polynomial fitting in temperature are stored. The coefficients of the fits are generated before run-time and called upon during the simulation to produce cross sections at any given energy and temperature. The polynomial form of the OTF cross sections allows the possibility of obtaining temperature derivatives of the cross sections on-the-fly. The use of Monte Carlo sampling of adjoint-weighted tallies and the capability of computing derivatives of continuous energy cross sections with respect to temperature are used to calculate the Doppler temperature coefficient in a research version of MCNP6. Temperature feedback results from the cross sections themselves, changes in the probability density functions, as well as changes in the density of the materials. The focus of this work is specific to the Doppler temperature feedback which results from Doppler broadening of cross sections as well as changes in the probability density function within the scattering kernel. This method is compared against published results using Mosteller's numerical benchmark to show accurate evaluations of the Doppler temperature coefficient, fuel assembly calculations, and a benchmark solution based on the heavy gas model for free-gas elastic scattering. An infinite medium benchmark for neutron free gas elastic scattering for large scattering ratios and constant absorption cross section has been developed using the heavy gas model. An exact closed form solution for the neutron energy spectrum is obtained in terms of the confluent hypergeometric function and compared against spectra for the free gas scattering model in MCNP6. Results show a quick increase in convergence of the analytic energy spectrum to the MCNP6 code with increasing target size, showing absolute relative differences of less than 5% for neutrons scattering with carbon. The analytic solution has been generalized to accommodate piecewise constant in energy absorption cross section to produce temperature feedback.
Results reinforce the constraints in which heavy gas theory may be applied resulting in a significant target size to accommodate increasing cross section structure. The energy dependent piecewise constant cross section heavy gas model was used to produce a benchmark calculation of the Doppler temperature coefficient to show accurate calculations when using the adjoint-weighted method. Results show the Doppler temperature coefficient using adjoint weighting and cross section derivatives accurately obtains the correct solution within statistics as well as reduce computer runtimes by a factor of 50.

  12. Series approximation to probability densities

    NASA Astrophysics Data System (ADS)

    Cohen, L.

    2018-04-01

    One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
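
    A short sketch of a truncated Gram-Charlier A expansion, which also illustrates the negativity problem the paper addresses: the corrected density can dip below zero in the tails. The parameter values are arbitrary.

```python
import numpy as np
from scipy.stats import norm
from numpy.polynomial.hermite_e import hermeval

def gram_charlier_pdf(x, mean, std, skew, kurt_excess):
    """Gaussian corrected by 3rd/4th-order Hermite terms (Gram-Charlier A series)."""
    z = (x - mean) / std
    coeffs = np.zeros(5)                     # coefficients of probabilists' Hermite polynomials
    coeffs[3] = skew / 6.0
    coeffs[4] = kurt_excess / 24.0
    correction = 1.0 + hermeval(z, coeffs)
    return norm.pdf(z) / std * correction

x = np.linspace(-5, 5, 11)
pdf = gram_charlier_pdf(x, mean=0.0, std=1.0, skew=1.2, kurt_excess=1.0)
print(np.round(pdf, 4))
print("negative values in the truncated expansion:", bool((pdf < 0).any()))
```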

  13. Bone mineral density and nutritional status in children with quadriplegic cerebral palsy.

    PubMed

    Alvarez Zaragoza, Citlalli; Vasquez Garibay, Edgar Manuel; García Contreras, Andrea A; Larrosa Haro, Alfredo; Romero Velarde, Enrique; Rea Rosas, Alejandro; Cabrales de Anda, José Luis; Vega Olea, Israel

    2018-03-04

    This study demonstrated the relationship of low bone mineral density (BMD) with the degree of motor impairment, method of feeding, anthropometric indicators, and malnutrition in children with quadriplegic cerebral palsy (CP). The control of these factors could optimize adequate bone mineralization, avoid the risk of osteoporosis, and would improve the quality of life. The purpose of the study is to explore the relationship between low BMD and nutritional status in children with quadriplegic CP. A cross-sectional analytical study included 59 participants aged 6 to 18 years with quadriplegic CP. Weight and height were obtained with alternative measurements, and weight/age, height/age, and BMI/age indexes were estimated. The BMD measurement obtained from the lumbar spine was expressed in grams per square centimeter and Z score (Z). Unpaired Student's t tests, chi-square tests, odds ratios, Pearson's correlations, and linear regressions were performed. The mean of BMD Z score was lower in adolescents than in school-aged children (p = 0.002). Patients with low BMD were at the most affected levels of the Gross Motor Function Classification System (GMFCS). Participants at level V of the GMFCS were more likely to have low BMD than levels III and IV [odds ratio (OR) = 5.8 (confidence interval [CI] 95% 1.4, 24.8), p = 0.010]. There was a higher probability of low BMD in tube-feeding patients [OR = 8.6 (CI 95% 1.0, 73.4), p = 0.023]. The probability of low BMD was higher in malnourished children with weight/age and BMI indices [OR = 11.4 (1.3, 94), p = 0.009] and [OR = 9.4 (CI 95% 1.1, 79.7), p = 0.017], respectively. There was a significant relationship between low BMD, degree of motor impairment, method of feeding, and malnutrition. Optimizing these factors could reduce the risk of osteopenia and osteoporosis and attain a significant improvement of quality of life in children with quadriplegic CP.

  14. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
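
    A minimal sketch of the general approach: fit a parametric density whose parameters drift in time by maximum likelihood, then extrapolate it forward. The Gaussian-with-linear-drift model below is an illustrative choice, not the model used in the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 400)
x = rng.normal(loc=0.5 + 1.5 * t, scale=0.3)        # synthetic drifting time series

def neg_loglike(theta):
    a, b, log_sigma = theta                          # mean drifts linearly: mu(t) = a + b*t
    return -np.sum(norm.logpdf(x, loc=a + b * t, scale=np.exp(log_sigma)))

fit = minimize(neg_loglike, x0=np.array([0.0, 0.0, 0.0]), method="Nelder-Mead")
a, b, log_sigma = fit.x

# Extrapolated (forecast) density parameters at a future time t* = 1.5
t_star = 1.5
print("forecast mean at t*:", a + b * t_star, " sigma:", np.exp(log_sigma))
```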

  15. Many-body calculations of low energy eigenstates in magnetic and periodic systems with self healing diffusion Monte Carlo: steps beyond the fixed-phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reboredo, Fernando A.

    The self-healing diffusion Monte Carlo algorithm (SHDMC) [Reboredo, Hood and Kent, Phys. Rev. B 79, 195117 (2009); Reboredo, ibid. 80, 125110 (2009)] is extended to study the ground and excited states of magnetic and periodic systems. A recursive optimization algorithm is derived from the time evolution of the mixed probability density. The mixed probability density is given by an ensemble of electronic configurations (walkers) with complex weights. This complex weight allows the amplitude of the fixed-node wave function to move away from the trial wave function phase. This novel approach is a generalization of both SHDMC and the fixed-phase approximation [Ortiz, Ceperley and Martin, Phys. Rev. Lett. 71, 2777 (1993)]. When used recursively, it improves the node and the phase simultaneously. The algorithm is demonstrated to converge to the nearly exact solutions of model systems with periodic boundary conditions or applied magnetic fields. The method is also applied to obtain low energy excitations with magnetic field or periodic boundary conditions. The potential applications of this new method to study periodic, magnetic, and complex Hamiltonians are discussed.

  16. Neighbourhood walkability, road density and socio-economic status in Sydney, Australia.

    PubMed

    Cowie, Christine T; Ding, Ding; Rolfe, Margaret I; Mayne, Darren J; Jalaludin, Bin; Bauman, Adrian; Morgan, Geoffrey G

    2016-04-27

    Planning and transport agencies play a vital role in influencing the design of townscapes, travel modes and travel behaviors, which in turn impact on the walkability of neighbourhoods and residents' physical activity opportunities. Optimising neighbourhood walkability is desirable in built environments; however, the population health benefits of walkability may be offset by increased exposure to traffic related air pollution. This paper describes the spatial distribution of neighbourhood walkability and weighted road density, a marker for traffic related air pollution, in Sydney, Australia. As exposure to air pollution is related to socio-economic status in some cities, this paper also examines the spatial distribution of weighted road density and walkability by socio-economic status (SES). We calculated walkability, weighted road density (as a measure of traffic related air pollution) and SES, using predefined and validated measures, for 5858 Sydney neighbourhoods, representing 3.6 million population. We overlaid tertiles of walkability and weighted road density to define "sweet-spot" (high walkability-low weighted road density) and "sour-spot" (low walkability-high weighted road density) neighbourhoods. We also examined the distribution of walkability and weighted road density by SES quintiles. Walkability and weighted road density showed a clear east-west gradient across the region. Our study found that only 4% of Sydney's population lived in "sweet-spot" neighbourhoods with high walkability and low weighted road density (desirable), and these tended to be located closer to the city centre. A greater proportion of neighbourhoods had health limiting attributes of high weighted road density or low walkability (about 20% each), and over 5% of the population lived in "sour-spot" neighbourhoods with low walkability and high weighted road density (least desirable). These neighbourhoods were more distant from the city centre and scattered more widely. There were no linear trends between walkability/weighted road density and neighbourhood SES. Our walkability and weighted road density maps and associated analyses by SES can help identify neighbourhoods with inequalities in health-promoting or health-limiting environments. Planning agencies should seek out opportunities for increased neighbourhood walkability through improved urban development and transport planning, which simultaneously minimizes exposure to traffic related air pollution.
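
    A toy sketch of the tertile-overlay classification described above, using synthetic neighbourhood scores; the measures and thresholds are placeholders, not the validated indices used in the study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(10)
df = pd.DataFrame({
    "walkability": rng.gamma(2.0, 2.0, 1000),             # hypothetical neighbourhood scores
    "weighted_road_density": rng.gamma(2.0, 1.5, 1000),
})

# Tertile labels for each measure
df["walk_t"] = pd.qcut(df["walkability"], 3, labels=["low", "mid", "high"])
df["road_t"] = pd.qcut(df["weighted_road_density"], 3, labels=["low", "mid", "high"])

# Overlay: high walkability + low road density = sweet-spot, the reverse = sour-spot
df["category"] = np.select(
    [(df.walk_t == "high") & (df.road_t == "low"),
     (df.walk_t == "low") & (df.road_t == "high")],
    ["sweet-spot", "sour-spot"], default="other")

print(df["category"].value_counts(normalize=True).round(3))
```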

  17. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling

    PubMed Central

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323

  18. The precise time course of lexical activation: MEG measurements of the effects of frequency, probability, and density in lexical decision.

    PubMed

    Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec

    2004-01-01

    Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.

  19. Numeracy moderates the influence of task-irrelevant affect on probability weighting.

    PubMed

    Traczyk, Jakub; Fulawka, Kamil

    2016-06-01

    Statistical numeracy, defined as the ability to understand and process statistical and probability information, plays a significant role in superior decision making. However, recent research has demonstrated that statistical numeracy goes beyond simple comprehension of numbers and mathematical operations. On the contrary to previous studies that were focused on emotions integral to risky prospects, we hypothesized that highly numerate individuals would exhibit more linear probability weighting because they would be less biased by incidental and decision-irrelevant affect. Participants were instructed to make a series of insurance decisions preceded by negative (i.e., fear-inducing) or neutral stimuli. We found that incidental negative affect increased the curvature of the probability weighting function (PWF). Interestingly, this effect was significant only for less numerate individuals, while probability weighting in more numerate people was not altered by decision-irrelevant affect. We propose two candidate mechanisms for the observed effect. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Materials separation by dielectrophoresis

    NASA Technical Reports Server (NTRS)

    Sagar, A. D.; Rose, R. M.

    1988-01-01

    The feasibility of vacuum dielectrophoresis as a method for particulate materials separation in a microgravity environment was investigated. Particle separations were performed in a specially constructed miniature drop-tower with a residence time of about 0.3 sec. Particle motion in such a system is independent of size and based only on density and dielectric constant, for a given electric field. The observed separations and deflections exceeded the theoretical predictions, probably due to multiparticle effects. In any case, this approach should work well in microgravity for many classes of materials, with relatively simple apparatus and low weight and power requirements.

  1. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically decoding properties of finite-rate hypergraph-product quantum low density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.

  2. Estimating loblolly pine size-density trajectories across a range of planting densities

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2013-01-01

    Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...

  3. The Effect of Incremental Changes in Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…

  4. PFOS induced lipid metabolism disturbances in BALB/c mice through inhibition of low density lipoproteins excretion

    NASA Astrophysics Data System (ADS)

    Wang, Ling; Wang, Yu; Liang, Yong; Li, Jia; Liu, Yuchen; Zhang, Jie; Zhang, Aiqian; Fu, Jianjie; Jiang, Guibin

    2014-04-01

    Male BALB/c mice fed with either a regular or high fat diet were exposed to 0, 5 or 20 mg/kg perfluorooctane sulfonate (PFOS) for 14 days. Increased body weight, serum glucose, cholesterol and lipoprotein levels were observed in mice given a high fat diet. However, all PFOS-treated mice showed reduced levels of serum lipid and lipoprotein. Decreased liver glycogen content was also observed, accompanied by reduced serum glucose levels. Histological and ultrastructural examination detected more lipid droplets accumulated in hepatocytes after PFOS exposure. Moreover, the transcriptional activity of lipid metabolism-related genes suggests that PFOS toxicity is probably unrelated to PPARα's transcription. The present study demonstrates a lipid disturbance caused by PFOS and thus points to its role in inhibiting the secretion and normal function of low density lipoproteins.

  5. Sensitivity of feedforward neural networks to weight errors

    NASA Technical Reports Server (NTRS)

    Stevenson, Maryhelen; Widrow, Bernard; Winter, Rodney

    1990-01-01

    An analysis is made of the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived which expresses the probability of error for an output neuron of a large network (a network with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. The probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).
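
    A Monte Carlo sketch of the phenomenon described: perturb the weights of a signum-unit feedforward network by a fixed percentage and estimate the probability that an output decision flips. The network sizes and perturbation scheme are illustrative stand-ins, not the paper's analytical derivation.

```python
import numpy as np

rng = np.random.default_rng(6)

def madaline(x, weights_list):
    """Feedforward network of signum (Adaline-style) units."""
    a = x
    for W in weights_list:
        a = np.sign(a @ W)
    return a

n_in, n_hidden, n_out = 100, 100, 1
X = np.sign(rng.normal(size=(500, n_in)))                       # bipolar input patterns
weights = [rng.normal(size=(n_in, n_hidden)), rng.normal(size=(n_hidden, n_out))]
baseline = madaline(X, weights)

for pct in (0.01, 0.05, 0.10):                                  # percentage weight error
    flips = []
    for _ in range(20):
        perturbed = [W * (1 + pct * np.sign(rng.normal(size=W.shape))) for W in weights]
        flips.append(np.mean(madaline(X, perturbed) != baseline))
    print(f"{pct:.0%} weight error -> P(output error) ~ {np.mean(flips):.3f}")
```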

  6. Weight loss and bone mineral density.

    PubMed

    Hunter, Gary R; Plaisance, Eric P; Fisher, Gordon

    2014-10-01

    Despite evidence that energy deficit produces multiple physiological and metabolic benefits, clinicians are often reluctant to prescribe weight loss in older individuals or those with low bone mineral density (BMD), fearing BMD will be decreased. Confusion exists concerning the effects that weight loss has on bone health. Bone density is more closely associated with lean mass than total body mass and fat mass. Although rapid or large weight loss is often associated with loss of bone density, slower or smaller weight loss is much less apt to adversely affect BMD, especially when it is accompanied with high intensity resistance and/or impact loading training. Maintenance of calcium and vitamin D intake seems to positively affect BMD during weight loss. Although dual energy X-ray absorptiometry is normally used to evaluate bone density, it may overestimate BMD loss following massive weight loss. Volumetric quantitative computed tomography may be more accurate for tracking bone density changes following large weight loss. Moderate weight loss does not necessarily compromise bone health, especially when exercise training is involved. Training strategies that include heavy resistance training and high impact loading that occur with jump training may be especially productive in maintaining, or even increasing bone density with weight loss.

  7. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
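
    A short sketch of the L1-median (geometric median) via Weiszfeld's algorithm, the robust location estimator referred to above, compared with the sample mean on contaminated data; the full PDF-estimator construction of the paper is not reproduced.

```python
import numpy as np

def l1_median(X, n_iter=200, tol=1e-9):
    """Geometric (L1) median via Weiszfeld's algorithm."""
    y = X.mean(axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(X - y, axis=1)
        d = np.where(d < tol, tol, d)                 # avoid division by zero
        y_new = (X / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y

rng = np.random.default_rng(7)
clean = rng.normal(size=(200, 2))
outliers = rng.normal(loc=20.0, size=(20, 2))         # 10% gross outliers
X = np.vstack([clean, outliers])
print("sample mean:", X.mean(axis=0))
print("L1-median:  ", l1_median(X))                   # stays near the bulk of the data
```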

  8. Peelle's pertinent puzzle using the Monte Carlo technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawano, Toshihiko; Talou, Patrick; Burr, Thomas

    2009-01-01

    We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form in order to assess the impact of the assumed distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct, if the common error is additive, and if the error is proportional to the measured values. The least squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
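
    A minimal sketch of the puzzle in the least-squares setting, using the commonly quoted PPP numbers (measurements 1.5 and 1.0 with 10% independent and 20% common relative errors, assumed here rather than taken from the abstract): building the covariance from the measured values gives the counterintuitive 0.88, while referencing the common error to a single scale, one way of "properly providing the weighting", does not.

```python
import numpy as np

def gls_combine(y, cov):
    """Least-squares (GLS) combination of repeated measurements of one quantity."""
    ones = np.ones_like(y)
    w = np.linalg.solve(cov, ones)
    return (w @ y) / (w @ ones)

y = np.array([1.5, 1.0])             # commonly quoted PPP example measurements
r_ind, r_com = 0.10, 0.20            # 10% independent, 20% common relative errors

# Covariance built from the measured values themselves (the standard PPP setup)
cov_naive = np.diag((r_ind * y) ** 2) + np.outer(r_com * y, r_com * y)
print("naive GLS estimate:", gls_combine(y, cov_naive))       # ~0.88, below both data points

# Covariance with the common (multiplicative) error referenced to a single scale
scale = y.mean()
cov_common = np.diag((r_ind * y) ** 2) + (r_com * scale) ** 2 * np.ones((2, 2))
print("common-scale GLS estimate:", gls_combine(y, cov_common))
```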

  9. Universality classes of fluctuation dynamics in hierarchical complex systems

    NASA Astrophysics Data System (ADS)

    Macêdo, A. M. S.; González, Iván R. Roa; Salazar, D. S. P.; Vasconcelos, G. L.

    2017-03-01

    A unified approach is proposed to describe the statistics of the short-time dynamics of multiscale complex systems. The probability density function of the relevant time series (signal) is represented as a statistical superposition of a large time-scale distribution weighted by the distribution of certain internal variables that characterize the slowly changing background. The dynamics of the background is formulated as a hierarchical stochastic model whose form is derived from simple physical constraints, which in turn restrict the dynamics to only two possible classes. The probability distributions of both the signal and the background have simple representations in terms of Meijer G functions. The two universality classes for the background dynamics manifest themselves in the signal distribution as two types of tails: power law and stretched exponential, respectively. A detailed analysis of empirical data from classical turbulence and financial markets shows excellent agreement with the theory.

  10. Survival analysis for the missing censoring indicator model using kernel density estimation techniques

    PubMed Central

    Subramanian, Sundarraman

    2008-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
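
    A rough sketch of the estimator's main ingredients under simplifying assumptions: a kernel (Nadaraya-Watson) estimate of the probability that the censoring indicator is observed, used as an inverse weight in a Nelson-Aalen-type cumulative hazard. The data-generating setup is synthetic and the details differ from the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 1000
T = rng.exponential(1.0, n)                           # observed times (event or censoring)
delta = rng.binomial(1, 0.7, n)                       # censoring indicator (1 = event)
xi = rng.binomial(1, 1 / (1 + np.exp(-(1.0 - T))))    # 1 = indicator observed (missing at random in T)

def kernel_pi(t0, T, xi, h=0.3):
    """Nadaraya-Watson estimate of P(indicator observed | T = t0)."""
    k = np.exp(-0.5 * ((T - t0) / h) ** 2)
    return np.sum(k * xi) / np.sum(k)

pi_hat = np.array([kernel_pi(t, T, xi) for t in T])

# Inverse probability-of-non-missingness weighted Nelson-Aalen cumulative hazard
order = np.argsort(T)
T_s, d_s, xi_s, pi_s = T[order], delta[order], xi[order], pi_hat[order]
at_risk = np.arange(n, 0, -1)                         # risk-set size at each ordered time
increments = (xi_s * d_s / pi_s) / at_risk            # weighted event contributions
cum_hazard = np.cumsum(increments)
print("estimated cumulative hazard at the median observed time:", cum_hazard[n // 2])
```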

  11. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.

  12. Low-energy isovector and isoscalar dipole response in neutron-rich nuclei

    NASA Astrophysics Data System (ADS)

    Vretenar, D.; Niu, Y. F.; Paar, N.; Meng, J.

    2012-04-01

    The self-consistent random-phase approximation, based on the framework of relativistic energy density functionals, is employed in the study of isovector and isoscalar dipole response in 68Ni,132Sn, and 208Pb. The evolution of pygmy dipole states (PDSs) in the region of low excitation energies is analyzed as a function of the density dependence of the symmetry energy for a set of relativistic effective interactions. The occurrence of PDSs is predicted in the response to both the isovector and the isoscalar dipole operators, and its strength is enhanced with the increase in the symmetry energy at saturation and the slope of the symmetry energy. In both channels, the PDS exhausts a relatively small fraction of the energy-weighted sum rule but a much larger percentage of the inverse energy-weighted sum rule. For the isovector dipole operator, the reduced transition probability B(E1) of the PDSs is generally small because of pronounced cancellation of neutron and proton partial contributions. The isoscalar-reduced transition amplitude is predominantly determined by neutron particle-hole configurations, most of which add coherently, and this results in a collective response of the PDSs to the isoscalar dipole operator.

  13. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.

    PubMed

    Han, Qiyang; Wellner, Jon A

    2016-01-01

    In this paper, we study the approximation and estimation of s -concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s -concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [ Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q : if Q n → Q in the Wasserstein metric, then the projected densities converge in weighted L 1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s -concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s -concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s -concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s -concave.

  14. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES

    PubMed Central

    Han, Qiyang; Wellner, Jon A.

    2017-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998–3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave. PMID:28966410

  15. The Influence of Part-Word Phonotactic Probability/Neighborhood Density on Word Learning by Preschool Children Varying in Expressive Vocabulary

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Hoover, Jill R.

    2011-01-01

    The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (ages 2;11 to 6;0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…

  16. Artificial neural networks for the diagnosis of aggressive periodontitis trained by immunologic parameters.

    PubMed

    Papantonopoulos, Georgios; Takahashi, Keiso; Bountis, Tasos; Loos, Bruno G

    2014-01-01

    There is neither a single clinical, microbiological, histopathological or genetic test, nor combinations of them, to discriminate aggressive periodontitis (AgP) from chronic periodontitis (CP) patients. We aimed to estimate probability density functions of clinical and immunologic datasets derived from periodontitis patients and construct artificial neural networks (ANNs) to correctly classify patients into AgP or CP class. The fit of probability distributions on the datasets was tested by the Akaike information criterion (AIC). ANNs were trained by cross entropy (CE) values estimated between probabilities of showing certain levels of immunologic parameters and a reference mode probability proposed by kernel density estimation (KDE). The weight decay regularization parameter of the ANNs was determined by 10-fold cross-validation. Possible evidence for 2 clusters of patients on cross-sectional and longitudinal bone loss measurements were revealed by KDE. Two to 7 clusters were shown on datasets of CD4/CD8 ratio, CD3, monocyte, eosinophil, neutrophil and lymphocyte counts, IL-1, IL-2, IL-4, INF-γ and TNF-α level from monocytes, antibody levels against A. actinomycetemcomitans (A.a.) and P.gingivalis (P.g.). ANNs gave 90%-98% accuracy in classifying patients into either AgP or CP. The best overall prediction was given by an ANN with CE of monocyte, eosinophil, neutrophil counts and CD4/CD8 ratio as inputs. ANNs can be powerful in classifying periodontitis patients into AgP or CP, when fed by CE values based on KDE. Therefore ANNs can be employed for accurate diagnosis of AgP or CP by using relatively simple and conveniently obtained parameters, like leukocyte counts in peripheral blood. This will allow clinicians to better adapt specific treatment protocols for their AgP and CP patients.

  17. [Is there a relation between weight in rats, bone density, ash weight and histomorphometric indicators of trabecular volume and thickness in the bones of extremities?].

    PubMed

    Zák, J; Kapitola, J; Povýsil, C

    2003-01-01

    The authors address whether bone histological structure (described by the histomorphometric parameters trabecular bone volume and trabecular thickness) can be inferred from bone density, ash weight, or even the weight of the animal (rat). Both tibias of each of 30 intact male rats, 90 days old, were processed. The left tibia was used to determine histomorphometric parameters of undecalcified bone tissue specimens by automatic image analysis. The right tibia was used to determine bone density using Archimedes' principle. Values of bone density, ash weight, ash weight related to bone volume, and animal weight were correlated with the histomorphometric parameters (trabecular bone volume, trabecular thickness) using Pearson's correlation test. One might presume a relation between data describing bone mass at the histological level (trabecular bone of the tibia) and data describing the mass of the whole bone or even of the animal (weight), but no statistically significant correlation was found. The likely explanation lies in the uneven distribution of trabecular bone within the tibial marrow. Because trabecular bone density is higher in the metaphyseal and epiphyseal regions, histomorphometric analysis of trabecular bone is preferentially performed in these areas, and this irregularity of trabecular density could be the source of the deviations influencing the correlations obtained. Bone density, ash weight and animal weight thus do not predict trabecular bone volume, and vice versa: static histomorphometric parameters of trabecular bone do not reflect bone density, ash weight or the weight of the animal.

  18. Understanding redshift space distortions in density-weighted peculiar velocity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugiyama, Naonori S.; Okumura, Teppei; Spergel, David N., E-mail: nao.s.sugiyama@gmail.com, E-mail: teppei.oku@gmail.com, E-mail: dns@astro.princeton.edu

    2016-07-01

    Observations of the kinetic Sunyaev-Zel'dovich (kSZ) effect measure the density-weighted velocity field, a potentially powerful cosmological probe. This paper presents an analytical method to predict the power spectrum and two-point correlation function of the density-weighted velocity in redshift space, the direct observables in kSZ surveys. We show a simple relation between the density power spectrum and the density-weighted velocity power spectrum that holds for both dark matter and halos. Using this relation, we can then extend familiar perturbation expansion techniques to the kSZ power spectrum. One of the most important features of density-weighted velocity statistics in redshift space is the change in sign of the cross-correlation between the density and density-weighted velocity at mildly small scales due to nonlinear redshift space distortions. Our model can explain this characteristic feature without any free parameters. As a result, our results can precisely predict the non-linear behavior of the density-weighted velocity field in redshift space up to ∼ 30 h⁻¹ Mpc for dark matter particles at redshifts of z = 0.0, 0.5, and 1.0.

  19. Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments

    NASA Astrophysics Data System (ADS)

    Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry

    2012-05-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to posterior probabilities of the models generating the forecasts and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described with a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
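
    The weighted-average form of the BMA predictive pdf described above can be written down directly. The sketch below (hypothetical forecasts, weights and spreads; not the authors' particle-filtering implementation) evaluates the standard Gaussian-mixture version.

        import numpy as np
        from scipy import stats

        forecasts = np.array([2.1, 2.6, 1.8])   # hypothetical bias-corrected member forecasts
        weights = np.array([0.5, 0.3, 0.2])     # posterior model probabilities (sum to 1)
        sigmas = np.array([0.4, 0.5, 0.6])      # per-member predictive spreads

        def bma_pdf(y):
            # BMA predictive pdf: p(y) = sum_k w_k * N(y | f_k, sigma_k^2).
            return float(np.sum(weights * stats.norm.pdf(y, loc=forecasts, scale=sigmas)))

        for y in (1.0, 2.0, 3.0):
            print("p(y =", y, ") =", round(bma_pdf(y), 3))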

  20. Chromosome Model reveals Dynamic Redistribution of DNA Damage into Nuclear Sub-domains

    NASA Technical Reports Server (NTRS)

    Costes, Sylvain V.; Ponomarev, Artem; Chen, James L.; Cucinotta, Francis A.; Barcellos-Hoff, Helen

    2007-01-01

    Several proteins involved in the response to DNA double strand breaks (DSB) form microscopically visible nuclear domains, or foci, after exposure to ionizing radiation. Radiation-induced foci (RIF) are believed to be located where DNA damage is induced. To test this assumption, we analyzed the spatial distribution of 53BP1, phosphorylated ATM and gammaH2AX RIF in cells irradiated with high linear energy transfer (LET) radiation. Since energy is randomly deposited along high-LET particle paths, RIF along these paths should also be randomly distributed. The probability to induce DSB can be derived from DNA fragment data measured experimentally by pulsed-field gel electrophoresis. We used this probability in Monte Carlo simulations to predict DSB locations in synthetic nuclei geometrically described by a complete set of human chromosomes, taking into account microscope optics from real experiments. As expected, simulations produced DNA-weighted random (Poisson) distributions. In contrast, the distributions of RIF obtained as early as 5 min after exposure to high LET (1 GeV/amu Fe) were non-random. This deviation from the expected DNA-weighted random pattern can be further characterized by relative DNA image measurements. This novel imaging approach shows that RIF were located preferentially at the interface between high and low DNA density regions, and were more frequent in regions with lower density DNA than predicted. This deviation from random behavior was more pronounced within the first 5 min following irradiation for phosphorylated ATM RIF, while gammaH2AX and 53BP1 RIF showed very pronounced deviation up to 30 min after exposure. These data suggest the existence of repair centers in mammalian epithelial cells. These centers would be nuclear sub-domains where DNA lesions would be collected for more efficient repair.

  1. Elicitation of quantitative data from a heterogeneous expert panel: formal process and application in animal health.

    PubMed

    Van der Fels-Klerx, Ine H J; Goossens, Louis H J; Saatkamp, Helmut W; Horst, Suzan H S

    2002-02-01

    This paper presents a protocol for a formal expert judgment process using a heterogeneous expert panel aimed at the quantification of continuous variables. The emphasis is on the process's requirements related to the nature of expertise within the panel, in particular the heterogeneity of both substantive and normative expertise. The process provides the opportunity for interaction among the experts so that they fully understand and agree upon the problem at hand, including qualitative aspects relevant to the variables of interest, prior to the actual quantification task. Individual experts' assessments on the variables of interest, cast in the form of subjective probability density functions, are elicited with a minimal demand for normative expertise. The individual experts' assessments are aggregated into a single probability density function per variable, thereby weighting the experts according to their expertise. Elicitation techniques proposed include the Delphi technique for the qualitative assessment task and the ELI method for the actual quantitative assessment task. Appropriately, the Classical model was used to weight the experts' assessments in order to construct a single distribution per variable. Applying this model, the experts' quality typically was based on their performance on seed variables. An application of the proposed protocol in the broad and multidisciplinary field of animal health is presented. Results of this expert judgment process showed that the proposed protocol in combination with the proposed elicitation and analysis techniques resulted in valid data on the (continuous) variables of interest. In conclusion, the proposed protocol for a formal expert judgment process aimed at the elicitation of quantitative data from a heterogeneous expert panel provided satisfactory results. Hence, this protocol might be useful for expert judgment studies in other broad and/or multidisciplinary fields of interest.
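
    The aggregation step described here, weighting each expert's elicited density by a performance-based weight, amounts to a weighted linear pool. The following sketch uses made-up normal densities and weights purely for illustration; in the Classical model the actual weights are derived from the experts' calibration on seed variables.

        import numpy as np
        from scipy import stats

        # Hypothetical elicited densities for one continuous variable, one per expert.
        expert_pdfs = [stats.norm(10.0, 2.0), stats.norm(12.0, 3.0), stats.norm(9.0, 1.5)]
        weights = np.array([0.6, 0.1, 0.3])   # performance-based weights, summing to 1

        def pooled_pdf(x):
            # Weighted linear opinion pool: f(x) = sum_i w_i * f_i(x).
            return float(sum(w * d.pdf(x) for w, d in zip(weights, expert_pdfs)))

        for x in (8.0, 10.0, 12.0):
            print("pooled density at", x, "=", round(pooled_pdf(x), 4))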

  2. A wave function for stock market returns

    NASA Astrophysics Data System (ADS)

    Ataullah, Ali; Davidson, Ian; Tippett, Mark

    2009-02-01

    The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well, but where there is a non-trivial probability of the particle tunneling into the well's retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and how the resulting wave function leads to a probability density which exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent - in contrast to many of the probability density functions on which the received theory of finance is based.

  3. Dynamic Density: An Air Traffic Management Metric

    NASA Technical Reports Server (NTRS)

    Laudeman, I. V.; Shelden, S. G.; Branstrom, R.; Brasil, C. L.

    1998-01-01

    The definition of a metric of air traffic controller workload based on air traffic characteristics is essential to the development of both air traffic management automation and air traffic procedures. Dynamic density is a proposed concept for a metric that includes both traffic density (a count of aircraft in a volume of airspace) and traffic complexity (a measure of the complexity of the air traffic in a volume of airspace). It was hypothesized that a metric that includes terms that capture air traffic complexity will be a better measure of air traffic controller workload than current measures based only on traffic density. A weighted linear dynamic density function was developed and validated operationally. The proposed dynamic density function includes a traffic density term and eight traffic complexity terms. A unit-weighted dynamic density function was able to account for an average of 22% of the variance in observed controller activity not accounted for by traffic density alone. A comparative analysis of unit weights, subjective weights, and regression weights for the terms in the dynamic density equation was conducted. The best predictor of controller activity was the dynamic density equation with regression-weighted complexity terms.

  4. Uncertainty plus prior equals rational bias: an intuitive Bayesian probability weighting function.

    PubMed

    Fennell, John; Baddeley, Roland

    2012-10-01

    Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several nonexpected utility theories, including rank-dependent models and prospect theory; here, we propose a Bayesian approach to the probability weighting function and, with it, a psychological rationale. In the real world, uncertainty is ubiquitous and, accordingly, the optimal strategy is to combine probability statements with prior information using Bayes' rule. First, we show that any reasonable prior on probabilities leads to 2 of the observed effects; overweighting of low probabilities and underweighting of high probabilities. We then investigate 2 plausible kinds of priors: informative priors based on previous experience and uninformative priors of ignorance. Individually, these priors potentially lead to large problems of bias and inefficiency, respectively; however, when combined using Bayesian model comparison methods, both forms of prior can be applied adaptively, gaining the efficiency of empirical priors and the robustness of ignorance priors. We illustrate this for the simple case of generic good and bad options, using Internet blogs to estimate the relevant priors of inference. Given this combined ignorant/informative prior, the Bayesian probability weighting function is not only robust and efficient but also matches all of the major characteristics of the distortions found in empirical research. PsycINFO Database Record (c) 2012 APA, all rights reserved.
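
    One way to see how a prior over probabilities produces the over/underweighting pattern is a deliberately simplified numeric sketch: treat the stated probability as a handful of pseudo-observations and combine it with a Beta prior, so the posterior mean is pulled toward the prior mean. This illustrates the mechanism only, not the model in the paper, and the pseudo-count and prior parameters are invented.

        def weighted_probability(p, prior_a=1.0, prior_b=1.0, evidence=4.0):
            # Treat the stated probability p as `evidence` pseudo-observations and
            # combine them with a Beta(prior_a, prior_b) prior; the posterior mean
            # is shrunk toward the prior mean a / (a + b).
            return (prior_a + evidence * p) / (prior_a + prior_b + evidence)

        for p in (0.01, 0.1, 0.5, 0.9, 0.99):
            print(p, "->", round(weighted_probability(p), 3))
        # Small probabilities come out larger (overweighting) and large ones smaller
        # (underweighting), reproducing the inverse-S shape qualitatively.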

  5. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions

    PubMed Central

    Storkel, Holly L.; Lee, Jaehoon; Cox, Casey

    2016-01-01

    Purpose Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Method Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. Results The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. Conclusions As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise. PMID:27788276

  6. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions.

    PubMed

    Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey

    2016-11-01

    Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.

  7. An estimation method of the direct benefit of a waterlogging control project applicable to the changing environment

    NASA Astrophysics Data System (ADS)

    Zengmei, L.; Guanghua, Q.; Zishen, C.

    2015-05-01

    The direct benefit of a waterlogging control project is reflected by the reduction or avoidance of waterlogging loss. Before and after the construction of a waterlogging control project, the disaster-inducing environment in the waterlogging-prone zone is generally different. In addition, the category, quantity and spatial distribution of the disaster-bearing bodies also change to some extent. Therefore, under a changing environment, the direct benefit of a waterlogging control project should be the reduction of waterlogging losses compared to conditions with no control project. Moreover, the waterlogging losses with or without the project should be the mathematical expectations of the waterlogging losses when rainstorms of all frequencies meet various water levels in the drainage-accepting zone. An estimation model of the direct benefit of waterlogging control is therefore proposed. Firstly, on the basis of a copula function, the joint distribution of the rainstorms and the water levels is established, so as to obtain their joint probability density function. Secondly, according to the two-dimensional joint probability density distribution, the two-dimensional domain of integration is determined and divided into small domains; for each small domain, the probability and the difference between the average waterlogging loss with and without a waterlogging control project (called the regional benefit of the waterlogging control project) are calculated, under the condition that rainstorms in the waterlogging-prone zone meet the water level in the drainage-accepting zone. Finally, the weighted mean of the benefit over all small domains, with probability as the weight, gives the benefit of the waterlogging control project. Taking the benefit estimation of a waterlogging control project in Yangshan County, Guangdong Province, as an example, the paper briefly explains the procedure of waterlogging control project benefit estimation. The results show that the constructed benefit estimation model is applicable to the changing conditions that occur in both the disaster-inducing environment of the waterlogging-prone zone and the disaster-bearing bodies, considering all conditions when rainstorms of all frequencies meet different water levels in the drainage-accepting zone. Thus, the estimation method can reflect the actual situation more objectively and offer a scientific basis for rational decision-making on waterlogging control projects.
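
    The probability-weighted averaging step can be sketched numerically. In the example below, the copula-based joint density of rainstorm magnitude and receiving-water level is replaced by a correlated bivariate normal, and the loss-reduction surface is invented; only the discretize, weight and sum structure corresponds to the procedure described.

        import numpy as np
        from scipy import stats

        # Stand-in for the copula-based joint density of rainstorm magnitude (x)
        # and receiving-water level (y); here simply a correlated bivariate normal.
        joint = stats.multivariate_normal(mean=[100.0, 5.0],
                                          cov=[[400.0, 30.0], [30.0, 4.0]])

        # Hypothetical loss reduction (loss without project minus loss with project)
        # for a storm of size x meeting water level y.
        def loss_reduction(x, y):
            return np.maximum(0.0, 0.02 * (x - 60.0) * (y - 2.0))

        # Discretize the domain into small cells and form the probability-weighted mean.
        xs = np.linspace(40.0, 160.0, 121)
        ys = np.linspace(1.0, 9.0, 81)
        X, Y = np.meshgrid(xs, ys)
        cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
        prob = joint.pdf(np.dstack([X, Y])) * cell        # probability of each small cell
        benefit = np.sum(prob * loss_reduction(X, Y))     # expected direct benefit
        print("expected benefit (arbitrary units):", round(float(benefit), 2))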

  8. Role of the site of synaptic competition and the balance of learning forces for Hebbian encoding of probabilistic Markov sequences

    PubMed Central

    Bouchard, Kristofer E.; Ganguli, Surya; Brainard, Michael S.

    2015-01-01

    The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions. PMID:26257637
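
    The quantities the synaptic weights are claimed to converge to, conditional forward and backward transition probabilities, can be estimated directly from an example sequence. The snippet below performs only this empirical bookkeeping on a made-up symbol sequence; it does not simulate the Hebbian plasticity rules themselves.

        import numpy as np

        seq = "ABACABADABAC"                      # hypothetical sequence of discrete states
        states = sorted(set(seq))
        idx = {s: i for i, s in enumerate(states)}
        counts = np.zeros((len(states), len(states)))
        for a, b in zip(seq[:-1], seq[1:]):
            counts[idx[a], idx[b]] += 1           # count transitions current -> next

        forward = counts / counts.sum(axis=1, keepdims=True)    # P(next | current), row-normalized
        backward = counts / counts.sum(axis=0, keepdims=True)   # P(previous | current), column-normalized

        print("forward  P(next = B | current = A):", round(forward[idx["A"], idx["B"]], 3))
        print("backward P(previous = A | current = B):", round(backward[idx["A"], idx["B"]], 3))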

  9. Performance of correlation receivers in the presence of impulse noise.

    NASA Technical Reports Server (NTRS)

    Moore, J. D.; Houts, R. C.

    1972-01-01

    An impulse noise model, which assumes that each noise burst contains a randomly weighted version of a basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the crosscorrelation of the signal-set basis functions and the noise waveform. Unlike the performance results for additive white Gaussian noise, it is shown that the error performance for impulse noise is affected by the choice of signal-set basis function, and that Orthogonal signaling is not equivalent to On-Off signaling with the same average energy. Furthermore, it is demonstrated that the correlation-receiver error performance can be improved by inserting a properly specified nonlinear device prior to the receiver input.

  10. Sensor Drift Compensation Algorithm based on PDF Distance Minimization

    NASA Astrophysics Data System (ADS)

    Kim, Namyong; Byun, Hyung-Gi; Persaud, Krishna C.; Huh, Jeung-Soo

    2009-05-01

    In this paper, a new unsupervised classification algorithm is introduced to compensate for sensor drift effects in an odor sensing system using a conducting polymer sensor array. The proposed method keeps updating the adaptive Radial Basis Function Network (RBFN) weights in the testing phase by minimizing the Euclidean distance between two probability density functions (PDFs): one estimated from training-phase output data and one from testing-phase output data. The outputs in the testing phase obtained with fixed RBFN weights are significantly dispersed and shifted from their target values, due mostly to the sensor drift effect. In the experimental results, the output data produced by the proposed method are observed to be concentrated much closer to their own target values again. This indicates that the proposed method can be effectively applied to build an improved odor sensing system equipped with the capability of sensor drift compensation.
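
    The quantity being minimized, a Euclidean distance between two output PDFs, can be pictured with stand-in data: estimate the density of training-phase outputs and of drift-shifted testing-phase outputs on a common grid and measure how far apart they are. This is an illustration of the criterion only, not the RBFN weight-update rule.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        train_out = rng.normal(0.0, 1.0, size=300)   # training-phase output data
        test_out = rng.normal(0.6, 1.1, size=300)    # drift-shifted testing-phase output data

        grid = np.linspace(-4.0, 5.0, 200)
        p_train = stats.gaussian_kde(train_out)(grid)
        p_test = stats.gaussian_kde(test_out)(grid)

        # Squared Euclidean distance between the two estimated PDFs on the grid;
        # the adaptation would adjust the network weights to reduce this value.
        dist = float(np.sum((p_train - p_test) ** 2) * (grid[1] - grid[0]))
        print("squared PDF distance:", round(dist, 4))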

  11. Dietary Management of Obesity: Cornerstones of Healthy Eating Patterns.

    PubMed

    Smethers, Alissa D; Rolls, Barbara J

    2018-01-01

    Several dietary patterns, both macronutrient and food based, can lead to weight loss. A key strategy for weight management that can be applied across dietary patterns is to reduce energy density. Clinical trials show that reducing energy density is effective for weight loss and weight loss maintenance. A variety of practical strategies and tools can help facilitate successful weight management by reducing energy density, providing portion control, and improving diet quality. The flexibility of energy density gives patients options to tailor and personalize their dietary pattern to reduce energy intake for sustainable weight loss. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Probability of Vitamin D Deficiency by Body Weight and Race/Ethnicity.

    PubMed

    Weishaar, Tom; Rajan, Sonali; Keller, Bryan

    2016-01-01

    While most physicians recognize that vitamin D status varies by skin color because darker skin requires more light to synthesize vitamin D than lighter skin, the importance of body weight to vitamin D status is a newer, less recognized, finding. The purpose of this study was to use nationally representative US data to determine the probability of vitamin D deficiency by body weight and skin color. Using data for individuals age ≥6 years from the 2001 to 2010 cycles of the US National Health and Nutrition Examination Survey, we calculated the effect of skin color, body weight, and age on vitamin D status. We determined the probability of deficiency within the normal range of body weight for 3 race/ethnicity groups at 3 target levels of 25-hydroxyvitamin D. Darker skin colors and heavier body weights are independently and significantly associated with poorer vitamin D status. We report graphically the probability of vitamin D deficiency by body weight and skin color at vitamin D targets of 20 and 30 ng/mL. The effects of skin color and body weight on vitamin D status are large both statistically and clinically. Knowledge of these effects may facilitate diagnosis of vitamin D deficiency. © Copyright 2016 by the American Board of Family Medicine.

  13. Probability function of breaking-limited surface elevation. [wind generated waves of ocean

    NASA Technical Reports Server (NTRS)

    Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.

    1989-01-01

    The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking zeta sub b(t) is first related to the original wave elevation zeta(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for zeta(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function zeta sub b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.

  14. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  15. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  16. Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyż, W.; Zalewski, K.

    2005-10-01

    It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.

  17. Use of uninformative priors to initialize state estimation for dynamical systems

    NASA Astrophysics Data System (ADS)

    Worthy, Johnny L.; Holzinger, Marcus J.

    2017-10-01

    The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
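
    The transformation behaviour mentioned here, a density that is uniform in one parameterization becoming non-uniform after a change of variables, follows from the standard Jacobian rule p_Y(y) = p_X(g⁻¹(y)) |dg⁻¹/dy|. The sketch below uses a generic reciprocal transform as an example and is not the paper's admissible-region construction.

        import numpy as np

        def p_x(x):
            # Density uniform in x on [1, 2].
            return np.where((x >= 1.0) & (x <= 2.0), 1.0, 0.0)

        def p_y(y):
            # Density of Y = 1/X obtained from the change-of-variables (Jacobian) rule.
            x = 1.0 / y                  # inverse transform g^{-1}(y)
            jac = 1.0 / y ** 2           # |d g^{-1} / dy|
            return p_x(x) * jac

        ys = np.linspace(0.5, 1.0, 6)
        print([round(float(v), 3) for v in p_y(ys)])   # non-uniform, although p_x was uniform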

  18. Divergence of perturbation theory in large scale structures

    NASA Astrophysics Data System (ADS)

    Pajer, Enrico; van der Woude, Drian

    2018-05-01

    We make progress towards an analytical understanding of the regime of validity of perturbation theory for large scale structures and the nature of some non-perturbative corrections. We restrict ourselves to 1D gravitational collapse, for which exact solutions before shell crossing are known. We review the convergence of perturbation theory for the power spectrum, recently proven by McQuinn and White [1], and extend it to non-Gaussian initial conditions and the bispectrum. In contrast, we prove that perturbation theory diverges for the real space two-point correlation function and for the probability density function (PDF) of the density averaged in cells and all the cumulants derived from it. We attribute these divergences to the statistical averaging intrinsic to cosmological observables, which, even on very large and "perturbative" scales, gives non-vanishing weight to all extreme fluctuations. Finally, we discuss some general properties of non-perturbative effects in real space and Fourier space.

  19. Modelling interactions of toxicants and density dependence in wildlife populations

    USGS Publications Warehouse

    Schipper, Aafke M.; Hendriks, Harrie W.M.; Kauffman, Matthew J.; Hendriks, A. Jan; Huijbregts, Mark A.J.

    2013-01-01

    1. A major challenge in the conservation of threatened and endangered species is to predict population decline and design appropriate recovery measures. However, anthropogenic impacts on wildlife populations are notoriously difficult to predict due to potentially nonlinear responses and interactions with natural ecological processes like density dependence. 2. Here, we incorporated both density dependence and anthropogenic stressors in a stage-based matrix population model and parameterized it for a density-dependent population of peregrine falcons Falco peregrinus exposed to two anthropogenic toxicants [dichlorodiphenyldichloroethylene (DDE) and polybrominated diphenyl ethers (PBDEs)]. Log-logistic exposure–response relationships were used to translate toxicant concentrations in peregrine falcon eggs to effects on fecundity. Density dependence was modelled as the probability of a nonbreeding bird acquiring a breeding territory as a function of the current number of breeders. 3. The equilibrium size of the population, as represented by the number of breeders, responded nonlinearly to increasing toxicant concentrations, showing a gradual decrease followed by a relatively steep decline. Initially, toxicant-induced reductions in population size were mitigated by an alleviation of the density limitation, that is, an increasing probability of territory acquisition. Once population density was no longer limiting, the toxicant impacts were no longer buffered by an increasing proportion of nonbreeders shifting to the breeding stage, resulting in a strong decrease in the equilibrium number of breeders. 4. Median critical exposure concentrations, that is, median toxicant concentrations in eggs corresponding with an equilibrium population size of zero, were 33 and 46 μg g−1 fresh weight for DDE and PBDEs, respectively. 5. Synthesis and applications. Our modelling results showed that particular life stages of a density-limited population may be relatively insensitive to toxicant impacts until a critical threshold is crossed. In our study population, toxicant-induced changes were observed in the equilibrium number of nonbreeding rather than breeding birds, suggesting that monitoring efforts including both life stages are needed to timely detect population declines. Further, by combining quantitative exposure–response relationships with a wildlife demographic model, we provided a method to quantify critical toxicant thresholds for wildlife population persistence.

  20. Singular solution of the Feller diffusion equation via a spectral decomposition.

    PubMed

    Gan, Xinjun; Waxman, David

    2015-01-01

    Feller studied a branching process and found that the distribution for this process approximately obeys a diffusion equation [W. Feller, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, Berkeley and Los Angeles, 1951), pp. 227-246]. This diffusion equation and its generalizations play an important role in many scientific problems, including, physics, biology, finance, and probability theory. We work under the assumption that the fundamental solution represents a probability density and should account for all of the probability in the problem. Thus, under the circumstances where the random process can be irreversibly absorbed at the boundary, this should lead to the presence of a Dirac delta function in the fundamental solution at the boundary. However, such a feature is not present in the standard approach (Laplace transformation). Here we require that the total integrated probability is conserved. This yields a fundamental solution which, when appropriate, contains a term proportional to a Dirac delta function at the boundary. We determine the fundamental solution directly from the diffusion equation via spectral decomposition. We obtain exact expressions for the eigenfunctions, and when the fundamental solution contains a Dirac delta function at the boundary, every eigenfunction of the forward diffusion operator contains a delta function. We show how these combine to produce a weight of the delta function at the boundary which ensures the total integrated probability is conserved. The solution we present covers cases where parameters are time dependent, thereby greatly extending its applicability.

  1. Singular solution of the Feller diffusion equation via a spectral decomposition

    NASA Astrophysics Data System (ADS)

    Gan, Xinjun; Waxman, David

    2015-01-01

    Feller studied a branching process and found that the distribution for this process approximately obeys a diffusion equation [W. Feller, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, Berkeley and Los Angeles, 1951), pp. 227-246]. This diffusion equation and its generalizations play an important role in many scientific problems, including, physics, biology, finance, and probability theory. We work under the assumption that the fundamental solution represents a probability density and should account for all of the probability in the problem. Thus, under the circumstances where the random process can be irreversibly absorbed at the boundary, this should lead to the presence of a Dirac delta function in the fundamental solution at the boundary. However, such a feature is not present in the standard approach (Laplace transformation). Here we require that the total integrated probability is conserved. This yields a fundamental solution which, when appropriate, contains a term proportional to a Dirac delta function at the boundary. We determine the fundamental solution directly from the diffusion equation via spectral decomposition. We obtain exact expressions for the eigenfunctions, and when the fundamental solution contains a Dirac delta function at the boundary, every eigenfunction of the forward diffusion operator contains a delta function. We show how these combine to produce a weight of the delta function at the boundary which ensures the total integrated probability is conserved. The solution we present covers cases where parameters are time dependent, thereby greatly extending its applicability.

  2. Dehydration of seabird prey during transport to the colony: Effects on wet weight energy densities

    USGS Publications Warehouse

    Montevecchi, W.A.; Piatt, John F.

    1987-01-01

    We present evidence to indicate that dehydration of prey transported by seabirds from capture sites at sea to chicks at colonies inflates estimates of wet weight energy densities. These findings and a comparison of wet and dry weight energy densities reported in the literature emphasize the importance of (i) accurate measurement of the fresh weight and water content of prey, (ii) use of dry weight energy densities in comparisons among species, seasons, and regions, and (iii) cautious interpretation and extrapolation of existing data sets.

  3. Convergence analyses on on-line weight noise injection-based training algorithms for MLPs.

    PubMed

    Sum, John; Leung, Chi-Sing; Ho, Kevin

    2012-11-01

    Injecting weight noise during training is a simple technique that has been proposed for almost two decades. However, little is known about its convergence behavior. This paper studies the convergence of two weight noise injection-based training algorithms: multiplicative weight noise injection with weight decay and additive weight noise injection with weight decay. We consider them applied to multilayer perceptrons with either linear or sigmoid output nodes. Let w(t) be the weight vector, let V(w) be the corresponding objective function of the training algorithm, let α > 0 be the weight decay constant, and let μ(t) be the step size. We show that if μ(t) → 0, then with probability one E[||w(t)||₂²] is bounded and lim_{t→∞} ||w(t)||₂ exists. Based on these two properties, we show that if μ(t) → 0, Σ_t μ(t) = ∞, and Σ_t μ(t)² < ∞, then these algorithms converge with probability one. Moreover, w(t) converges with probability one to a point where ∇_w V(w) = 0.
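
    A toy version of the additive variant analysed here makes the step-size conditions concrete: with μ(t) = μ₀/t the step size satisfies μ(t) → 0, Σ μ(t) = ∞ and Σ μ(t)² < ∞. The sketch below trains a single linear output node; the data, noise level and decay constant are invented, and it is only meant to show the structure of the update.

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 3))
        w_true = np.array([1.0, -2.0, 0.5])
        y = X @ w_true + 0.1 * rng.normal(size=500)

        w = np.zeros(3)
        alpha = 0.01     # weight decay constant
        sigma = 0.05     # standard deviation of the injected additive weight noise
        mu0 = 0.5

        for t in range(1, 20001):
            mu = mu0 / t                                # step size: mu -> 0, sum mu = inf, sum mu^2 < inf
            i = rng.integers(len(X))                    # on-line (single-sample) update
            w_noisy = w + sigma * rng.normal(size=3)    # additive weight noise injection
            err = y[i] - X[i] @ w_noisy                 # error computed with the perturbed weights
            grad = -err * X[i] + alpha * w              # gradient of squared error plus weight decay
            w = w - mu * grad

        print("learned weights:", np.round(w, 3))       # drifts toward the least-squares solution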

  4. Space-based sensor management and geostationary satellites tracking

    NASA Astrophysics Data System (ADS)

    El-Fallah, A.; Zatezalo, A.; Mahler, R.; Mehra, R. K.; Donatelli, D.

    2007-04-01

    Sensor management for space situational awareness presents a daunting theoretical and practical challenge as it requires the use of multiple types of sensors on a variety of platforms to ensure that the space environment is continuously monitored. We demonstrate a new approach utilizing the Posterior Expected Number of Targets (PENT) as the sensor management objective function, an observation model for a space-based EO/IR sensor platform, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker. Simulations and results using actual geostationary satellites are presented. We also demonstrate enhanced performance by applying the Progressive Weighting Correction (PWC) method for regularization in the implementation of the PHD-PF tracker.

  5. Stochastic transfer of polarized radiation in finite cloudy atmospheric media with reflective boundaries

    NASA Astrophysics Data System (ADS)

    Sallah, M.

    2014-03-01

    The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude probable negative values of the optical variable. The Pomraning-Eddington approximation is first used to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specularly reflecting boundaries and an angular-dependent flux incident externally upon the medium from one side, with no flux from the other side. For the sake of comparison, two different forms of the weight function, which are introduced to force the boundary conditions to be fulfilled, are used. Numerical results for the average reflectivity and average transmissivity are obtained for both the Gaussian and modified Gaussian probability density functions at different degrees of polarization.

  6. Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization

    PubMed Central

    Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.

    2014-01-01

    Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
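
    The two functional forms singled out as best-fitting, Prelec-2 and linear in log odds, are commonly written as w(p) = exp(-δ(-ln p)^γ) and w(p) = δp^γ / (δp^γ + (1-p)^γ), respectively. The snippet below simply evaluates these textbook forms with arbitrary illustrative parameter values; it is not tied to the adaptive design optimization procedure in the paper.

        import numpy as np

        def prelec2(p, gamma=0.6, delta=1.0):
            # Two-parameter Prelec weighting function: w(p) = exp(-delta * (-ln p)^gamma).
            return np.exp(-delta * (-np.log(p)) ** gamma)

        def linear_in_log_odds(p, gamma=0.6, delta=0.8):
            # Linear-in-log-odds form: w(p) = delta*p^gamma / (delta*p^gamma + (1-p)^gamma).
            return delta * p ** gamma / (delta * p ** gamma + (1.0 - p) ** gamma)

        for p in (0.01, 0.1, 0.5, 0.9, 0.99):
            print(p, round(float(prelec2(p)), 3), round(float(linear_in_log_odds(p)), 3))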

  7. Risk-taking in disorders of natural and drug rewards: neural correlates and effects of probability, valence, and magnitude.

    PubMed

    Voon, Valerie; Morris, Laurel S; Irvine, Michael A; Ruck, Christian; Worbe, Yulia; Derbyshire, Katherine; Rankov, Vladan; Schreiber, Liana Rn; Odlaug, Brian L; Harrison, Neil A; Wood, Jonathan; Robbins, Trevor W; Bullmore, Edward T; Grant, Jon E

    2015-03-01

    Pathological behaviors toward drugs and food rewards have underlying commonalities. Risk-taking has a fourfold pattern varying as a function of probability and valence leading to the nonlinearity of probability weighting with overweighting of small probabilities and underweighting of large probabilities. Here we assess these influences on risk-taking in patients with pathological behaviors toward drug and food rewards and examine structural neural correlates of nonlinearity of probability weighting in healthy volunteers. In the anticipation of rewards, subjects with binge eating disorder show greater risk-taking, similar to substance-use disorders. Methamphetamine-dependent subjects had greater nonlinearity of probability weighting along with impaired subjective discrimination of probability and reward magnitude. Ex-smokers also had lower risk-taking to rewards compared with non-smokers. In the anticipation of losses, obesity without binge eating had a similar pattern to other substance-use disorders. Obese subjects with binge eating also have impaired discrimination of subjective value similar to that of the methamphetamine-dependent subjects. Nonlinearity of probability weighting was associated with lower gray matter volume in dorsolateral and ventromedial prefrontal cortex and orbitofrontal cortex in healthy volunteers. Our findings support a distinct subtype of binge eating disorder in obesity with similarities in risk-taking in the reward domain to substance use disorders. The results dovetail with the current approach of defining mechanistically based dimensional approaches rather than categorical approaches to psychiatric disorders. The relationship to risk probability and valence may underlie the propensity toward pathological behaviors toward different types of rewards.

  8. Risk-Taking in Disorders of Natural and Drug Rewards: Neural Correlates and Effects of Probability, Valence, and Magnitude

    PubMed Central

    Voon, Valerie; Morris, Laurel S; Irvine, Michael A; Ruck, Christian; Worbe, Yulia; Derbyshire, Katherine; Rankov, Vladan; Schreiber, Liana RN; Odlaug, Brian L; Harrison, Neil A; Wood, Jonathan; Robbins, Trevor W; Bullmore, Edward T; Grant, Jon E

    2015-01-01

    Pathological behaviors toward drugs and food rewards have underlying commonalities. Risk-taking has a fourfold pattern varying as a function of probability and valence leading to the nonlinearity of probability weighting with overweighting of small probabilities and underweighting of large probabilities. Here we assess these influences on risk-taking in patients with pathological behaviors toward drug and food rewards and examine structural neural correlates of nonlinearity of probability weighting in healthy volunteers. In the anticipation of rewards, subjects with binge eating disorder show greater risk-taking, similar to substance-use disorders. Methamphetamine-dependent subjects had greater nonlinearity of probability weighting along with impaired subjective discrimination of probability and reward magnitude. Ex-smokers also had lower risk-taking to rewards compared with non-smokers. In the anticipation of losses, obesity without binge eating had a similar pattern to other substance-use disorders. Obese subjects with binge eating also have impaired discrimination of subjective value similar to that of the methamphetamine-dependent subjects. Nonlinearity of probability weighting was associated with lower gray matter volume in dorsolateral and ventromedial prefrontal cortex and orbitofrontal cortex in healthy volunteers. Our findings support a distinct subtype of binge eating disorder in obesity with similarities in risk-taking in the reward domain to substance use disorders. The results dovetail with the current approach of defining mechanistically based dimensional approaches rather than categorical approaches to psychiatric disorders. The relationship to risk probability and valence may underlie the propensity toward pathological behaviors toward different types of rewards. PMID:25270821

  9. Investigation of estimators of probability density functions

    NASA Technical Reports Server (NTRS)

    Speed, F. M.

    1972-01-01

    Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.

  10. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    An essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables. In particular, density estimation is needed for the generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum...

  11. Determinants of bone density among athletes engaged in weight-bearing and non-weight-bearing activity

    NASA Technical Reports Server (NTRS)

    Block, Jon E.; Friedlander, Anne L.; Brooks, George A.; Steiger, Peter; Stubbs, Harrison A.

    1989-01-01

    The effect of weight bearing activity on the bone density was investigated in athletes by comparing the measures of bone density of athletes engaged in weight-training programs with those of polo players and nonexercising subjects. All subjects had measurements of spinal trabecular and integral bone density by quantitative tomography, as well as determinations of hip bone density by dual photon absorptiometry. Results confirmed previous findings by Block et al. (1987) of significantly greater bone density among highly trained athletes compared with nonexercising subjects of similar age. Results also indicated that athletes engaged in non-weight-bearing forms of rigorous exercise had greater levels of bone density. However, as the participants in this study were exceptional athletes, engaged in a strenuous sport with both aerobic and heavy resistance components, a confirmation of these data is needed, using larger samples of individuals.

  12. Evaluating detection probabilities for American marten in the Black Hills, South Dakota

    USGS Publications Warehouse

    Smith, Joshua B.; Jenks, Jonathan A.; Klaver, Robert W.

    2007-01-01

    Assessing the effectiveness of monitoring techniques designed to determine presence of forest carnivores, such as American marten (Martes americana), is crucial for validation of survey results. Although comparisons between techniques have been made, little attention has been paid to the issue of detection probabilities (p). Thus, the underlying assumption has been that detection probabilities equal 1.0. We used presence-absence data obtained from a track-plate survey in conjunction with results from a saturation-trapping study to derive detection probabilities when marten occurred at high (>2 marten/10.2 km²) and low (≤1 marten/10.2 km²) densities within 8 10.2-km² quadrats. Estimated probability of detecting marten in high-density quadrats was p = 0.952 (SE = 0.047), whereas the detection probability for low-density quadrats was considerably lower (p = 0.333, SE = 0.136). Our results indicated that failure to account for imperfect detection could lead to an underestimation of marten presence in 15-52% of low-density quadrats in the Black Hills, South Dakota, USA. We recommend that repeated site-survey data be analyzed to assess detection probabilities when documenting carnivore survey results.
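
    The practical weight of an imperfect per-visit detection probability is easiest to see through the standard repeated-survey relationship, P(detected at least once in k surveys) = 1 - (1 - p)^k. The short calculation below applies that textbook formula to the per-survey values reported in the abstract; it is a generic illustration, not the authors' analysis.

        def cumulative_detection(p, k):
            # Probability of at least one detection in k independent surveys.
            return 1.0 - (1.0 - p) ** k

        for label, p in [("high-density quadrat", 0.952), ("low-density quadrat", 0.333)]:
            for k in (1, 2, 3, 4):
                print(label, "| surveys =", k, "| P(detect) =", round(cumulative_detection(p, k), 3))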

  13. Effect of mungbean (Vigna radiate) living mulch on density and dry weight of weeds in corn (Zea mays) field.

    PubMed

    Moghadam, M Bakhtiari; Vazan, S; Darvishi, B; Golzardi, F; Farahani, M Esfini

    2011-01-01

    Living mulch is a suitable approach to ecological weed management and is considered an effective method for decreasing weed density and dry weight. In order to evaluate the effect of mungbean living mulch on the density and dry weight of weeds in a corn field, an experiment was conducted as a split plot based on a randomized complete block design with four blocks at the Research Field of the Department of Agronomy, Karaj Branch, Islamic Azad University, in 2010. Main plots were the time of mungbean suppression with 2,4-D herbicide at four levels (the 4-, 6-, 8- and 10-leaf stages of corn) plus a control without weeding, and sub plots were mungbean densities at three levels (50%, 100% and 150% more than the optimum density). Density and dry weight of the weeds were measured in all plots with a quadrat (60 x 100 cm) 10 days after tasseling. In total, 9 weed species were identified in the field, including 4 broadleaf species that were present in all plots. The results showed that the best time for suppression of mungbean was the 8-leaf stage of corn, which decreased weed density and dry weight by 53% and 51% compared with the control, respectively. Increasing the mungbean density from 50% to 150% more than the optimum density decreased weed density and dry weight by 27.5% and 22%, respectively. It is concluded that the best time and density for mungbean suppression were the 8-leaf stage of corn and 150% more than the optimum density, which decreased weed density and dry weight by 69% and 63.5% compared with the control, respectively.

  14. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation are retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and associated probabilities that each method is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
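
    The propagation step described here can be condensed into a small importance-sampling sketch: draw one set of samples from a shared proposal density, evaluate the response once, then reweight those same samples under each plausible input distribution and combine the results with the model probabilities. The candidate densities, model probabilities and response function below are all invented for illustration; this is not the authors' implementation.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Plausible candidate input distributions and their multimodel probabilities (hypothetical).
        candidates = [stats.norm(1.0, 0.20), stats.lognorm(s=0.25, scale=1.0)]
        model_probs = np.array([0.6, 0.4])

        # Shared importance-sampling density covering both candidates.
        proposal = stats.norm(1.0, 0.35)
        x = rng.normal(1.0, 0.35, size=20000)
        response = x ** 2                        # stand-in model response (e.g. buckling load)

        means = []
        for dist in candidates:
            w = dist.pdf(x) / proposal.pdf(x)    # importance weights for this candidate density
            w /= w.sum()
            means.append(float(np.sum(w * response)))

        print("per-model response means:", np.round(means, 3))
        print("model-probability-weighted mean:", round(float(np.dot(model_probs, means)), 3))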

  15. Potential Analysis of Rainfall-induced Sediment Disaster

    NASA Astrophysics Data System (ADS)

    Chen, Jing-Wen; Chen, Yie-Ruey; Hsieh, Shun-Chieh; Tsai, Kuang-Jung; Chue, Yung-Sheng

    2014-05-01

    Most of the mountain regions in Taiwan consist of sedimentary and metamorphic rocks that are fragile and highly weathered. Severe erosion occurs due to intensive rainfall and rapid flow, and the erosion is worsened by frequent earthquakes, severely affecting the stability of hillsides. Rivers in Taiwan are short and steep, with large differences in runoff between wet and dry seasons. Discharges respond rapidly to rainfall intensity, and flood flows usually carry large amounts of sediment. Because of rapid economic growth and social change, development of slope land is inevitable in Taiwan. However, sediment disasters occur frequently in high and precipitous regions during typhoons. To make enforcement of slope-land development regulations more efficient, construction of an evaluation model for sediment-disaster potential is very important. In this study, a Genetic Adaptive Neural Network (GANN) was implemented with texture analysis techniques to classify satellite images of the research region before and after typhoons or extreme rainfall and to obtain surface information and hazard log data. Using GANN weight analysis, the factors, levels, and probabilities of disaster in the research areas are presented. Then, through a geographic information system, the disaster potential map is plotted to distinguish high-potential regions from low-potential regions. Finally, the evaluation process for sediment disasters after rainfall due to slope-land use is established. In this research, the automatic image classification and evaluation modules for sediment disasters after rainfall due to slope-land disturbance and the natural environment are implemented in MATLAB to reduce the complexity and time of computation. After applying the texture analysis techniques, the results show that the overall accuracy and coefficient of agreement of the time-saving image classification for different time periods are at an intermediate-high level or above. The GANN results show that building density has the largest weight among the slope-land disturbance factors, followed by road density, orchard density, barren land density, vegetation density, and farmland density. Geology has the largest weight among the natural environment factors, followed by slope roughness, slope, and elevation. Overlaying the locations of past large sediment disasters on the potential map predicted by the GANN, we found that most damaged areas were in regions with medium-high or high landslide potential. Therefore, the proposed potential model of sediment disaster can be used in practice.

  16. Nonstationary envelope process and first excursion probability.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1972-01-01

    The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of nonstationary random processes possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the level-crossing rate of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.
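
    The structure of such envelope-based approximations can be illustrated with a generic Poisson-crossing model, in which the first excursion probability over [0, T] is approximated as P_f(T) ≈ 1 - exp(-∫ ν_a(t) dt), with ν_a(t) the time-varying rate at which the envelope crosses the level a. The sketch below is a minimal numerical version of that formula; the crossing-rate function used is invented for illustration and is not the paper's evolutionary-spectrum result.

        # Hedged sketch: first-excursion probability from a Poisson approximation on
        # envelope level crossings, P_f(T) ~ 1 - exp(-integral of nu_a(t) dt).
        # The crossing-rate model nu_a(t) below is an illustrative assumption.
        import numpy as np

        def first_excursion_probability(nu_a, T, n=2000):
            """Integrate the time-varying crossing rate nu_a(t) over [0, T]."""
            t = np.linspace(0.0, T, n)
            expected_crossings = np.trapz(nu_a(t), t)
            return 1.0 - np.exp(-expected_crossings)

        # Example: a crossing rate that builds up and decays, mimicking nonstationarity.
        nu_a = lambda t: 0.05 * (t / 10.0) * np.exp(-t / 10.0)
        print(first_excursion_probability(nu_a, T=30.0))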

  17. Convergence and divergence across construction methods for human brain white matter networks: an assessment based on individual differences.

    PubMed

    Zhong, Suyu; He, Yong; Gong, Gaolang

    2015-05-01

    Using diffusion MRI, a number of studies have investigated the properties of whole-brain white matter (WM) networks with differing network construction methods (node/edge definition). However, how the construction methods affect individual differences of WM networks and, particularly, if distinct methods can provide convergent or divergent patterns of individual differences remain largely unknown. Here, we applied 10 frequently used methods to construct whole-brain WM networks in a healthy young adult population (57 subjects), which involves two node definitions (low-resolution and high-resolution) and five edge definitions (binary, FA weighted, fiber-density weighted, length-corrected fiber-density weighted, and connectivity-probability weighted). For these WM networks, individual differences were systematically analyzed in three network aspects: (1) a spatial pattern of WM connections, (2) a spatial pattern of nodal efficiency, and (3) network global and local efficiencies. Intriguingly, we found that some of the network construction methods converged in terms of individual difference patterns, but diverged with other methods. Furthermore, the convergence/divergence between methods differed among network properties that were adopted to assess individual differences. Particularly, high-resolution WM networks with differing edge definitions showed convergent individual differences in the spatial pattern of both WM connections and nodal efficiency. For the network global and local efficiencies, low-resolution and high-resolution WM networks for most edge definitions consistently exhibited a highly convergent pattern in individual differences. Finally, the test-retest analysis revealed a decent temporal reproducibility for the patterns of between-method convergence/divergence. Together, the results of the present study demonstrated a measure-dependent effect of network construction methods on the individual difference of WM network properties. © 2015 Wiley Periodicals, Inc.
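
    To make one of the compared measures concrete, the sketch below computes the global efficiency of a small weighted network by converting connection weights to lengths and averaging inverse shortest-path lengths. The random connectivity matrix and the weight-to-length mapping (length = 1/weight) are illustrative assumptions rather than any of the ten construction methods evaluated in the study.

        # Hedged sketch: global efficiency of a weighted network, one of the measures
        # compared across construction methods. The synthetic connectivity matrix and
        # the weight-to-length conversion are illustrative assumptions.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(1)
        n = 20
        W = rng.random((n, n)); W = np.triu(W, 1); W += W.T        # symmetric weights
        W[W < 0.7] = 0.0                                           # sparsify

        G = nx.from_numpy_array(W)
        for u, v, d in G.edges(data=True):
            d["length"] = 1.0 / d["weight"]                        # stronger = shorter

        # Global efficiency: mean inverse shortest path length over all node pairs.
        spl = dict(nx.all_pairs_dijkstra_path_length(G, weight="length"))
        inv = [1.0 / spl[i][j] for i in G for j in G if i != j and j in spl[i]]
        global_efficiency = sum(inv) / (n * (n - 1))
        print(global_efficiency)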

  18. Correcting for dependent censoring in routine outcome monitoring data by applying the inverse probability censoring weighted estimator.

    PubMed

    Willems, Sjw; Schat, A; van Noorden, M S; Fiocco, M

    2018-02-01

    Censored data make survival analysis more complicated because exact event times are not observed. Statistical methodology developed to account for censored observations assumes that patients' withdrawal from a study is independent of the event of interest. However, in practice, some covariates might be associated with both lifetime and the censoring mechanism, inducing dependent censoring. In this case, standard survival techniques, such as the Kaplan-Meier estimator, give biased results. The inverse probability censoring weighted estimator was developed to correct for bias due to dependent censoring. In this article, we explore the use of inverse probability censoring weighting methodology and describe why it is effective in removing the bias. Since implementing this method is highly time consuming and requires programming and mathematical skills, we propose a user-friendly algorithm in R. Applications to a toy example and to a medical data set illustrate how the algorithm works. A simulation study was carried out to investigate the performance of the inverse probability censoring weighted estimators in situations where dependent censoring is present in the data. In the simulation process, different sample sizes, strengths of the censoring model, and percentages of censored individuals were chosen. Results show that in each scenario inverse probability censoring weighting reduces the bias induced in the traditional Kaplan-Meier approach where dependent censoring is ignored.
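
    A minimal sketch of the IPCW idea follows: estimate the censoring survival function G(t | X) within strata of a covariate that drives censoring (a reverse Kaplan-Meier per stratum), then reweight observed events by 1/G to correct the survival estimate. The data-generating model, the stratification, and the simple Horvitz-Thompson form of the corrected estimator are assumptions for illustration; they are not the algorithm distributed in the authors' R code.

        # Hedged sketch of inverse probability of censoring weighting (IPCW) under
        # covariate-dependent censoring. All modeling choices are illustrative.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 300
        x = rng.integers(0, 2, n)                        # covariate driving censoring
        event_time = rng.exponential(scale=10.0, size=n)
        cens_time = rng.exponential(scale=np.where(x == 1, 4.0, 20.0), size=n)
        T = np.minimum(event_time, cens_time)
        delta = (event_time <= cens_time).astype(float)  # 1 = event observed

        def censoring_survival(T, delta, eval_times):
            """Reverse Kaplan-Meier estimate G(t-) of the censoring survival function."""
            cens_times = np.sort(np.unique(T[delta == 0]))
            out = np.empty(len(eval_times))
            for k, t in enumerate(eval_times):
                surv = 1.0
                for u in cens_times[cens_times < t]:
                    at_risk = np.sum(T >= u)
                    surv *= 1.0 - np.sum((T == u) & (delta == 0)) / at_risk
                out[k] = surv
            return out

        # Estimate G(t | X) within covariate strata, evaluated at each subject's time.
        G = np.empty(n)
        for level in (0, 1):
            m = x == level
            G[m] = censoring_survival(T[m], delta[m], T[m])

        t_grid = np.linspace(0.5, 15.0, 30)
        S_naive = np.array([np.mean(T > t) for t in t_grid])              # censoring ignored
        S_ipcw = np.array([1.0 - np.mean(delta * (T <= t) / G) for t in t_grid])
        S_true = np.exp(-t_grid / 10.0)
        print(np.round(np.c_[S_true, S_ipcw, S_naive][:5], 3))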

  19. The force distribution probability function for simple fluids by density functional theory.

    PubMed

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft-sphere fluids at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low a density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.

  20. Postfragmentation density function for bacterial aggregates in laminar flow

    PubMed Central

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John

    2014-01-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205

  1. Percolation of the site random-cluster model by Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wang, Songsong; Zhang, Wanzhou; Ding, Chengxiang

    2015-08-01

    We propose a site random-cluster model by introducing an additional cluster weight in the partition function of the traditional site percolation. To simulate the model on a square lattice, we combine the color-assignation and the Swendsen-Wang methods to design a highly efficient cluster algorithm with a small critical slowing-down phenomenon. To verify whether or not it is consistent with the bond random-cluster model, we measure several quantities, such as the wrapping probability Re, the percolating cluster density P∞, and the magnetic susceptibility per site χp, as well as two exponents, the thermal exponent yt and the fractal dimension yh of the percolating cluster. We find that for different values of the cluster weight q = 1.5, 2, 2.5, 3, 3.5, and 4, the numerical estimates of the exponents yt and yh are consistent with the theoretical values. The universalities of the site random-cluster model and the bond random-cluster model are completely identical. For larger values of q, we find obvious signatures of a first-order percolation transition in the histograms and the hysteresis loops of the percolating cluster density and the energy per site. Our results are helpful for understanding the percolation of traditional statistical models.

  2. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  3. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhang; Chen, Wei

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.

  4. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE PAGES

    Jiang, Zhang; Chen, Wei

    2017-11-03

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
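
    A minimal sketch of the profile construction described above: an asymmetric interface is generated from a skew-normal cumulative distribution and discretized into thin constant-density slices that could be fed to Parratt's recursion. The densities, shape parameter, and slice thickness are illustrative values, not parameters from the paper.

        # Hedged sketch: asymmetric interfacial density profile from a skew-normal CDF,
        # discretized into thin constant-density slabs with sharp interfaces.
        import numpy as np
        from scipy.stats import skewnorm

        rho_top, rho_bottom = 0.0, 2.7          # layer densities (arbitrary units)
        a, loc, scale = 4.0, 0.0, 5.0           # skewness, interface position, width

        z = np.arange(-30.0, 30.0, 0.5)         # slice boundaries, 0.5-unit-thick slices
        profile = rho_top + (rho_bottom - rho_top) * skewnorm.cdf(z, a, loc=loc, scale=scale)

        # Each (z_k, rho_k) pair is one thin slab; such a stack can be passed to
        # Parratt's recursion for reflectivity or to a small-angle scattering model.
        slabs = list(zip(z, profile))
        print(slabs[:3])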

  5. Estimating the concordance probability in a survival analysis with a discrete number of risk groups.

    PubMed

    Heller, Glenn; Mo, Qianxing

    2016-04-01

    A clinical risk classification system is an important component of a treatment decision algorithm. A measure used to assess the strength of a risk classification system is discrimination, and when the outcome is survival time, the most commonly applied global measure of discrimination is the concordance probability. The concordance probability represents the pairwise probability of lower patient risk given longer survival time. The c-index and the concordance probability estimate have been used to estimate the concordance probability when patient-specific risk scores are continuous. In the current paper, the concordance probability estimate and an inverse probability censoring weighted c-index are modified to account for discrete risk scores. Simulations are generated to assess the finite sample properties of the concordance probability estimate and the weighted c-index. An application of these measures of discriminatory power to a metastatic prostate cancer risk classification system is examined.
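
    For orientation, the sketch below computes a plain concordance index for discrete risk groups, counting tied risk scores as one half over all comparable pairs. It illustrates the quantity being estimated but omits the inverse probability censoring weights and the modified estimators developed in the paper.

        # Hedged sketch: unweighted concordance index for discrete risk groups.
        # Ties in risk count as 0.5; censoring weights from the paper are omitted.
        import numpy as np

        def c_index(time, event, risk):
            num, den = 0.0, 0.0
            n = len(time)
            for i in range(n):
                for j in range(n):
                    # Comparable pair: subject i had the event and a shorter time.
                    if event[i] == 1 and time[i] < time[j]:
                        den += 1.0
                        if risk[i] > risk[j]:
                            num += 1.0      # higher risk, shorter survival: concordant
                        elif risk[i] == risk[j]:
                            num += 0.5      # tied discrete risk groups
            return num / den

        time = np.array([5.0, 8.0, 3.0, 12.0, 7.0, 2.0])
        event = np.array([1, 0, 1, 1, 0, 1])
        risk = np.array([2, 1, 3, 1, 2, 3])   # discrete risk groups (3 = highest risk)
        print(c_index(time, event, risk))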

  6. Probability and Quantum Paradigms: the Interplay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kracklauer, A. F.

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  7. Probability and Quantum Paradigms: the Interplay

    NASA Astrophysics Data System (ADS)

    Kracklauer, A. F.

    2007-12-01

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  8. Effects of breastfeeding on postpartum weight loss among U.S. women

    PubMed Central

    Jarlenski, Marian P.; Bennett, Wendy L.; Bleich, Sara N.; Barry, Colleen L.; Stuart, Elizabeth A.

    2014-01-01

    Objective To evaluate the effects of breastfeeding on maternal weight loss in the 12 months postpartum among U.S. women. Methods Using data from a national cohort of U.S. women conducted in 2005-2007 (N=2,102), we employed propensity scores to match women who breastfed exclusively and non-exclusively for at least three months to comparison women who had not breastfed or breastfed for less than three months. Outcomes included postpartum weight loss at 3, 6, 9, and 12 months postpartum; the probability of returning to pre-pregnancy body mass index (BMI) category; and the probability of returning to pre-pregnancy weight. Results Compared to women who did not breastfeed or breastfed non-exclusively, exclusive breastfeeding for at least 3 months resulted in 3.2 pounds (95% CI: 1.4, 4.7) greater weight loss at 12 months postpartum, a 6.0-percentage-point increase (95% CI: 2.3, 9.7) in the probability of returning to the same or lower BMI category postpartum, and a 6.1-percentage-point increase (95% CI: 1.0, 11.3) in the probability of returning to pre-pregnancy weight or lower postpartum. Non-exclusive breastfeeding did not significantly affect any outcomes. Conclusion Our study provides evidence that exclusive breastfeeding for at least three months has a small effect on postpartum weight loss among U.S. women. PMID:25284261

  9. Effects of breastfeeding on postpartum weight loss among U.S. women.

    PubMed

    Jarlenski, Marian P; Bennett, Wendy L; Bleich, Sara N; Barry, Colleen L; Stuart, Elizabeth A

    2014-12-01

    The aim of this study is to evaluate the effects of breastfeeding on maternal weight loss in the 12 months postpartum among U.S. women. Using data from a national cohort of U.S. women conducted in 2005-2007 (N=2,102), we employed propensity scores to match women who breastfed exclusively and non-exclusively for at least three months to comparison women who had not breastfed or breastfed for less than three months. Outcomes included postpartum weight loss at 3, 6, 9, and 12 months postpartum; the probability of returning to pre-pregnancy body mass index (BMI) category; and the probability of returning to pre-pregnancy weight. Compared to women who did not breastfeed or breastfed non-exclusively, exclusive breastfeeding for at least 3 months resulted in 3.2 pounds (95% CI: 1.4, 4.7) greater weight loss at 12 months postpartum, a 6.0-percentage-point increase (95% CI: 2.3, 9.7) in the probability of returning to the same or lower BMI category postpartum, and a 6.1-percentage-point increase (95% CI: 1.0, 11.3) in the probability of returning to pre-pregnancy weight or lower postpartum. Non-exclusive breastfeeding did not significantly affect any outcomes. Our study provides evidence that exclusive breastfeeding for at least three months has a small effect on postpartum weight loss among U.S. women. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Statistical Decoupling of a Lagrangian Fluid Parcel in Newtonian Cosmology

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Szalay, Alex

    2016-03-01

    The Lagrangian dynamics of a single fluid element within a self-gravitational matter field is intrinsically non-local due to the presence of the tidal force. This complicates the theoretical investigation of the nonlinear evolution of various cosmic objects, e.g., dark matter halos, in the context of Lagrangian fluid dynamics, since fluid parcels with given initial density and shape may evolve differently depending on their environments. In this paper, we provide a statistical solution that could decouple this environmental dependence. After deriving the evolution equation for the probability distribution of the matter field, our method produces a set of closed ordinary differential equations whose solution is uniquely determined by the initial condition of the fluid element. Mathematically, it corresponds to the projected characteristic curve of the transport equation of the density-weighted probability density function (ρPDF). Consequently it is guaranteed that the one-point ρPDF would be preserved by evolving these local, yet nonlinear, curves with the same set of initial data as the real system. Physically, these trajectories describe the mean evolution averaged over all environments by substituting the tidal tensor with its conditional average. For Gaussian distributed dynamical variables, this mean tidal tensor is simply proportional to the velocity shear tensor, and the dynamical system would recover the prediction of the Zel’dovich approximation (ZA) with the further assumption of the linearized continuity equation. For a weakly non-Gaussian field, the averaged tidal tensor could be expanded perturbatively as a function of all relevant dynamical variables whose coefficients are determined by the statistics of the field.

  11. STATISTICAL DECOUPLING OF A LAGRANGIAN FLUID PARCEL IN NEWTONIAN COSMOLOGY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xin; Szalay, Alex, E-mail: xwang@cita.utoronto.ca

    The Lagrangian dynamics of a single fluid element within a self-gravitational matter field is intrinsically non-local due to the presence of the tidal force. This complicates the theoretical investigation of the nonlinear evolution of various cosmic objects, e.g., dark matter halos, in the context of Lagrangian fluid dynamics, since fluid parcels with given initial density and shape may evolve differently depending on their environments. In this paper, we provide a statistical solution that could decouple this environmental dependence. After deriving the evolution equation for the probability distribution of the matter field, our method produces a set of closed ordinary differential equations whose solution is uniquely determined by the initial condition of the fluid element. Mathematically, it corresponds to the projected characteristic curve of the transport equation of the density-weighted probability density function (ρPDF). Consequently it is guaranteed that the one-point ρPDF would be preserved by evolving these local, yet nonlinear, curves with the same set of initial data as the real system. Physically, these trajectories describe the mean evolution averaged over all environments by substituting the tidal tensor with its conditional average. For Gaussian distributed dynamical variables, this mean tidal tensor is simply proportional to the velocity shear tensor, and the dynamical system would recover the prediction of the Zel’dovich approximation (ZA) with the further assumption of the linearized continuity equation. For a weakly non-Gaussian field, the averaged tidal tensor could be expanded perturbatively as a function of all relevant dynamical variables whose coefficients are determined by the statistics of the field.

  12. Over, under, or about right: misperceptions of body weight among food stamp participants.

    PubMed

    Ver Ploeg, Michele L; Chang, Hung-Hao; Lin, Biing-Hwan

    2008-09-01

    The purpose of this research was to investigate the associations between misperception of body weight and sociodemographic factors such as food stamp participation status, income, education, and race/ethnicity. National Health and Nutrition Examination Survey (NHANES) data from 1999-2004 and multivariate logistic regression are used to estimate how sociodemographic factors are associated with (i) the probability that overweight adults misperceive themselves as healthy weight; (ii) the probability that healthy-weight adults misperceive themselves as underweight; and (iii) the probability that healthy-weight adults misperceive themselves as overweight. NHANES data are representative of the US civilian noninstitutionalized population. The analysis included 4,362 men and 4,057 women. BMI derived from measured weight and height was used to classify individuals as healthy weight or overweight. These classifications were compared with self-reported categorical weight status. We find that differences across sociodemographic characteristics in the propensity to underestimate or overestimate weight status were more pronounced for women than for men. Overweight female food stamp participants were more likely to underestimate weight status than income-eligible nonparticipants. Among healthy-weight and overweight women, non-Hispanic black and Mexican-American women, and women with less education were more likely to underestimate actual weight status. We found few differences across sociodemographic characteristics for men. Misperceptions of weight are common among both overweight and healthy-weight individuals and vary across socioeconomic and demographic groups. The nutrition education component of the Food Stamp Program could increase awareness of healthy body weight among participants.

  13. [Inverse probability weighting (IPW) for evaluating and "correcting" selection bias].

    PubMed

    Narduzzi, Silvia; Golini, Martina Nicole; Porta, Daniela; Stafoggia, Massimo; Forastiere, Francesco

    2014-01-01

    Inverse probability weighting (IPW) is a methodology developed to account for missingness and selection bias caused by non-random selection of observations, or by a non-random lack of some information in a subgroup of the population. The aim is to provide an overview of the IPW methodology and an application in a cohort study of the association between exposure to traffic-related air pollution (nitrogen dioxide, NO₂) and children's IQ at 7 years. The methodology corrects the analysis by weighting the observations with the inverse of the probability of being selected. IPW is based on the assumption that individual information that can predict the probability of inclusion (non-missingness) is available for the entire study population, so that, after taking account of it, we can make inferences about the entire target population starting from the non-missing observations alone. The procedure is the following: first, we consider the entire study population and calculate the probability of non-missing information using a logistic regression model, where the response is non-missingness and the covariates are its possible predictors. The weight of each subject is given by the inverse of the predicted probability. The analysis is then performed only on the non-missing observations using a weighted model. IPW is a technique that embeds the selection process in the analysis, but its effectiveness in "correcting" selection bias depends on the availability of enough information, for the entire population, to predict the non-missingness probability. In the example proposed, the IPW application showed that the effect of exposure to NO₂ on children's verbal intelligence quotient is stronger than the effect estimated by the analysis performed without regard to the selection processes.
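
    The procedure described above translates almost directly into code. The sketch below is a minimal version under assumed variable names and synthetic data: a logistic model for the probability of being observed, weights equal to the inverse of the predicted probabilities, and a weighted outcome regression on the complete cases.

        # Hedged sketch of the IPW procedure: logistic model for non-missingness,
        # inverse-probability weights, then a weighted analysis of complete cases.
        # Variable names and the data-generating model are illustrative assumptions.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 1000
        no2 = rng.normal(40, 10, n)                          # exposure
        ses = rng.normal(0, 1, n)                            # predictor of follow-up
        iq = 100 - 0.2 * no2 + 3 * ses + rng.normal(0, 8, n)
        observed = rng.random(n) < 1 / (1 + np.exp(-(0.5 + 1.2 * ses)))  # selection

        # Step 1: probability of being observed, estimated on the full cohort.
        X_sel = sm.add_constant(np.column_stack([ses]))
        p_obs = sm.Logit(observed.astype(float), X_sel).fit(disp=0).predict(X_sel)

        # Step 2: inverse probability weights for the complete cases only.
        w = 1.0 / p_obs[observed]

        # Step 3: weighted outcome analysis on the non-missing observations.
        X_out = sm.add_constant(np.column_stack([no2[observed], ses[observed]]))
        fit = sm.WLS(iq[observed], X_out, weights=w).fit()
        print(fit.params)      # exposure effect corrected for the selection process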

  14. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.

  15. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    ERIC Educational Resources Information Center

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…

  16. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
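
    The classic MaxEnt baseline that the paper generalizes can be sketched in a few lines: on a discrete support, the maximum-entropy distribution subject to a mean constraint has the exponential form p_i ∝ exp(-λ x_i), and the multiplier λ is found by matching the constraint treated as exact. The support and constraint value below are illustrative; the generalized approach would instead place a probability density on that constraint value.

        # Hedged sketch of classic MaxEnt on a discrete support: solve for the
        # Lagrange multiplier that reproduces an empirical mean treated as exact.
        # Support and constraint value are illustrative assumptions.
        import numpy as np
        from scipy.optimize import brentq

        x = np.arange(1, 7)          # support: faces of a die
        target_mean = 4.5            # observed constraint value, treated as exact

        def mean_given_lambda(lam):
            w = np.exp(-lam * x)
            p = w / w.sum()
            return np.dot(p, x)

        # Solve for the multiplier that matches the constrained mean.
        lam = brentq(lambda l: mean_given_lambda(l) - target_mean, -5.0, 5.0)
        p = np.exp(-lam * x); p /= p.sum()
        print(p)                     # the classic MaxEnt point probabilities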

  17. Evaluation of World Population-Weighted Effective Dose due to Cosmic Ray Exposure

    PubMed Central

    Sato, Tatsuhiko

    2016-01-01

    After the release of the Report of the United Nations Scientific Committee on the Effects of Atomic Radiation in 2000 (UNSCEAR 2000), it became commonly accepted that the world population-weighted effective dose due to cosmic-ray exposure is 0.38 mSv, with a range from 0.3 to 2 mSv. However, these values were derived from approximate projections of the altitude and geographic dependences of the cosmic-ray dose rates as well as the world population. This study hence re-evaluated the population-weighted annual effective doses and their probability densities for the entire world as well as for 230 individual nations, using a sophisticated cosmic-ray flux calculation model in tandem with detailed grid population and elevation databases. The resulting world population-weighted annual effective dose was determined to be 0.32 mSv, which is smaller than the UNSCEAR evaluation by 16%, with a range from 0.23 to 0.70 mSv covering 99% of the world population. These values were noted to vary with the solar modulation condition within a range of approximately 15%. All assessed population-weighted annual effective doses as well as their statistical information for each nation are provided in the supplementary files annexed to this report. These data improve our understanding of cosmic-ray radiation exposures to populations globally. PMID:27650664
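
    The population weighting itself is a simple ratio of sums, as the sketch below shows for a handful of made-up grid cells; the actual evaluation replaces these numbers with a cosmic-ray flux model applied to gridded elevation and population databases.

        # Hedged sketch: population-weighted mean annual dose over grid cells.
        # The cell values are made up for illustration.
        import numpy as np

        dose_mSv = np.array([0.27, 0.30, 0.35, 0.55, 0.90])    # per grid cell
        population = np.array([5e6, 2e7, 1e7, 1e6, 2e5])

        weighted_mean = np.sum(dose_mSv * population) / np.sum(population)
        print(round(weighted_mean, 3))   # population-weighted annual effective dose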

  18. Risk Preferences, Probability Weighting, and Strategy Tradeoffs in Wildfire Management.

    PubMed

    Hand, Michael S; Wibbenmeyer, Matthew J; Calkin, David E; Thompson, Matthew P

    2015-10-01

    Wildfires present a complex applied risk management environment, but relatively little attention has been paid to behavioral and cognitive responses to risk among public agency wildfire managers. This study investigates responses to risk, including probability weighting and risk aversion, in a wildfire management context using a survey-based experiment administered to federal wildfire managers. Respondents were presented with a multiattribute lottery-choice experiment where each lottery is defined by three outcome attributes: expenditures for fire suppression, damage to private property, and exposure of firefighters to the risk of aviation-related fatalities. Respondents choose one of two strategies, each of which includes "good" (low cost/low damage) and "bad" (high cost/high damage) outcomes that occur with varying probabilities. The choice task also incorporates an information framing experiment to test whether information about fatality risk to firefighters alters managers' responses to risk. Results suggest that managers exhibit risk aversion and nonlinear probability weighting, which can result in choices that do not minimize expected expenditures, property damage, or firefighter exposure. Information framing tends to result in choices that reduce the risk of aviation fatalities, but exacerbates nonlinear probability weighting. © 2015 Society for Risk Analysis.
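
    Nonlinear probability weighting of the kind reported here is commonly modeled with a one-parameter weighting function. The sketch below uses the Prelec form as a generic example, overweighting small probabilities of the bad outcome; the functional form and parameter value are assumptions, not the study's fitted model.

        # Hedged sketch: Prelec probability weighting function, a standard way of
        # modeling nonlinear responses to probabilities in choice experiments.
        import numpy as np

        def prelec_weight(p, alpha=0.65):
            """w(p) = exp(-(-ln p)^alpha): overweights small p, underweights large p."""
            p = np.clip(p, 1e-12, 1.0)
            return np.exp(-(-np.log(p)) ** alpha)

        p_bad = 0.05                          # probability of the high-cost/high-damage outcome
        print(p_bad, prelec_weight(p_bad))    # a small risk is overweighted in the decision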

  19. Probability weighted moments: Definition and relation to parameters of several distributions expressable in inverse form

    USGS Publications Warehouse

    Greenwood, J. Arthur; Landwehr, J. Maciunas; Matalas, N.C.; Wallis, J.R.

    1979-01-01

    Distributions whose inverse forms are explicitly defined, such as Tukey's lambda, may present problems in deriving their parameters by more conventional means. Probability weighted moments are introduced and shown to be potentially useful in expressing the parameters of these distributions.
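
    Sample probability weighted moments can be computed directly from the ordered observations, as the sketch below shows: b_r estimates β_r = E[X F(X)^r] using the standard unbiased weights (j-1)⋯(j-r)/((n-1)⋯(n-r)). The synthetic sample is illustrative; matching these moments to closed-form expressions for a distribution such as Tukey's lambda then yields its parameters.

        # Hedged sketch: unbiased sample probability weighted moments b_0, b_1, b_2
        # from ordered observations. The synthetic sample is illustrative.
        import numpy as np

        def sample_pwm(x, r):
            x = np.sort(np.asarray(x, dtype=float))
            n = len(x)
            j = np.arange(1, n + 1)
            w = np.ones(n)
            for k in range(1, r + 1):
                w *= (j - k) / (n - k)           # (j-1)...(j-r) / ((n-1)...(n-r))
            return np.mean(w * x)

        rng = np.random.default_rng(4)
        sample = rng.gumbel(loc=10.0, scale=2.0, size=200)
        print([round(sample_pwm(sample, r), 3) for r in range(3)])  # b0, b1, b2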

  20. Quantization and training of object detection networks with low-precision weights and activations

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie

    2018-01-01

    As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of the weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of the weights and activations. As the most computationally expensive convolutions can be replaced by effective fixed-point operations, the proposed method can drastically reduce computation complexity and memory footprint. Evaluated on the tiny you-only-look-once (YOLO) and YOLO architectures, the proposed method achieves accuracy comparable to their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal visual object classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
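
    The core quantization step can be sketched as follows: fit a simple (here single Gaussian) model to a layer's weights, choose a clipping range from the fit, and round to uniformly spaced fixed-point levels. The 3-sigma clipping rule, the bit width, and the toy weights are illustrative assumptions; the paper adapts intervals per layer from piecewise Gaussian fits of both weights and activations.

        # Hedged sketch: uniform symmetric quantization of a layer's weights with a
        # clipping range taken from an estimated Gaussian model of their distribution.
        import numpy as np

        def quantize(weights, bits=4, n_sigma=3.0):
            mu, sigma = weights.mean(), weights.std()
            limit = n_sigma * sigma                      # clipping range from the fit
            levels = 2 ** (bits - 1) - 1
            step = limit / levels                        # quantization step size
            q = np.clip(np.round((weights - mu) / step), -levels, levels)
            return q * step + mu                         # dequantized fixed-point values

        rng = np.random.default_rng(5)
        w = rng.normal(0.0, 0.02, size=10_000)           # stand-in convolution weights
        w4 = quantize(w, bits=4)
        print(np.sqrt(np.mean((w - w4) ** 2)))           # quantization error (RMSE)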

  1. Modeling Fire Occurrence at the City Scale: A Comparison between Geographically Weighted Regression and Global Linear Regression.

    PubMed

    Song, Chao; Kwan, Mei-Po; Zhu, Jiping

    2017-04-08

    An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects, with global linear regression models (LM) for modeling fire risk at the city scale. The results show that road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicates that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities or road construction are clustered only in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus governments can use the results to manage fire safety at the city scale.

  2. Modeling Fire Occurrence at the City Scale: A Comparison between Geographically Weighted Regression and Global Linear Regression

    PubMed Central

    Song, Chao; Kwan, Mei-Po; Zhu, Jiping

    2017-01-01

    An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects, with global linear regression models (LM) for modeling fire risk at the city scale. The results show that road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicates that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities or road construction are clustered only in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus governments can use the results to manage fire safety at the city scale. PMID:28397745

  3. [Implication of inverse-probability weighting method in the evaluation of diagnostic test with verification bias].

    PubMed

    Kang, Leni; Zhang, Shaokai; Zhao, Fanghui; Qiao, Youlin

    2014-03-01

    To evaluate and adjust for verification bias in screening or diagnostic tests, the inverse-probability weighting method was used to adjust the sensitivity and specificity of the diagnostic tests, with an example from cervical cancer screening used to introduce the Compare Tests package in R, in which the method is implemented. Sensitivity and specificity calculated with the traditional method and with maximum likelihood estimation were compared to the results from the inverse-probability weighting method in the randomly sampled example. The true sensitivity and specificity of the HPV self-sampling test were 83.53% (95%CI: 74.23-89.93) and 85.86% (95%CI: 84.23-87.36). In the analysis of data with verification by the gold standard missing at random, the sensitivity and specificity calculated by the traditional method were 90.48% (95%CI: 80.74-95.56) and 71.96% (95%CI: 68.71-75.00), respectively. The adjusted sensitivity and specificity under the inverse-probability weighting method were 82.25% (95%CI: 63.11-92.62) and 85.80% (95%CI: 85.09-86.47), respectively, whereas they were 80.13% (95%CI: 66.81-93.46) and 85.80% (95%CI: 84.20-87.41) under the maximum likelihood estimation method. The inverse-probability weighting method can effectively adjust the sensitivity and specificity of a diagnostic test when verification bias exists, especially when complex sampling is involved.
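
    The weighting logic can be illustrated with synthetic screening data in which verification is more likely for screen-positive subjects. In the sketch below the verification probabilities are known by design; in practice they would be estimated, for example with a logistic regression on the screening result and covariates, and the estimator would follow the package cited above rather than this simplified version.

        # Hedged sketch: inverse-probability-weighted sensitivity and specificity when
        # only a subset of screened subjects receives gold-standard verification.
        # All numbers are synthetic; verification probabilities are known by design here.
        import numpy as np

        rng = np.random.default_rng(6)
        n = 5000
        disease = rng.random(n) < 0.10
        test = np.where(disease, rng.random(n) < 0.84, rng.random(n) < 0.14)  # screen result

        # Verification is more likely for screen-positives (source of verification bias).
        p_verify = np.where(test, 0.90, 0.25)
        verified = rng.random(n) < p_verify
        w = 1.0 / p_verify                                   # inverse probability weights

        v = verified
        sens = np.sum(w[v] * (test[v] & disease[v])) / np.sum(w[v] * disease[v])
        spec = np.sum(w[v] * (~test[v] & ~disease[v])) / np.sum(w[v] * ~disease[v])
        print(round(sens, 3), round(spec, 3))                # bias-corrected estimates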

  4. Preliminary analysis of the span-distributed-load concept for cargo aircraft design

    NASA Technical Reports Server (NTRS)

    Whitehead, A. H., Jr.

    1975-01-01

    A simplified computer analysis of the span-distributed-load airplane (in which payload is placed within the wing structure) has shown that the span-distributed-load concept has high potential for application to future air cargo transport design. Significant increases in payload fraction over current wide-bodied freighters are shown for gross weights in excess of 0.5 Gg (1,000,000 lb). A cruise-matching calculation shows that the trend toward higher aspect ratio improves overall efficiency; that is, less thrust and fuel are required. The optimal aspect ratio probably is not determined by structural limitations. Terminal-area constraints and increasing design-payload density, however, tend to limit aspect ratio.

  5. Switching probability of all-perpendicular spin valve nanopillars

    NASA Astrophysics Data System (ADS)

    Tzoufras, M.

    2018-05-01

    In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.

  6. Postfragmentation density function for bacterial aggregates in laminar flow.

    PubMed

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M

    2011-04-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society

  7. The Influence of Phonotactic Probability and Neighborhood Density on Children's Production of Newly Learned Words

    ERIC Educational Resources Information Center

    Heisler, Lori; Goffman, Lisa

    2016-01-01

    A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…

  8. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  9. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  10. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
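
    The two-stage idea, fitting a density to residuals from normal operation and then running a sequential test against a fault alternative, can be sketched as follows. The Gaussian residual model, the assumed mean-shift fault, and the alarm thresholds are illustrative choices, not the patented procedure.

        # Hedged sketch: fit a density to training residuals, then run a sequential
        # probability ratio test (SPRT) against a shifted "fault" alternative.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        train = rng.normal(0.0, 0.5, 5000)                 # residuals under normal operation
        mu0, sigma = stats.norm.fit(train)                 # numerically fitted density
        mu1 = mu0 + 2.0 * sigma                            # hypothesized fault mean shift

        A, B = np.log(99.0), np.log(1.0 / 99.0)            # decision thresholds
        llr = 0.0
        for r in rng.normal(1.2, 0.5, 200):                # incoming (faulty) residuals
            llr += stats.norm.logpdf(r, mu1, sigma) - stats.norm.logpdf(r, mu0, sigma)
            if llr >= A:
                print("fault alarm"); break
            if llr <= B:
                llr = 0.0                                  # reset after a 'normal' decision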

  11. Simple gain probability functions for large reflector antennas of JPL/NASA

    NASA Technical Reports Server (NTRS)

    Jamnejad, V.

    2003-01-01

    Simple models for the patterns as well as their cumulative gain probability and probability density functions of the Deep Space Network antennas are developed. These are needed for the study and evaluation of interference from unwanted sources such as the emerging terrestrial system, High Density Fixed Service, with the Ka-band receiving antenna systems in Goldstone Station of the Deep Space Network.

  12. The Agility Advantage: A Survival Guide for Complex Enterprises and Endeavors

    DTIC Science & Technology

    2011-09-01

    Only search-result excerpt fragments are available for this record. The recoverable citations are: Carman, K. G., and Kooreman, P., "Flu Shots, Mammogram, and the Perception of Probabilities," 2010; and Campen, Alan D., "Look Closely at Network-Centric Warfare," Signal, January 2004.

  13. Cluster of atherosclerosis in a captive population of black kites (Milvus migrans subsp.) in France and effect of nutrition on the plasma lipid profile.

    PubMed

    Facon, Charles; Beaufrere, Hugues; Gaborit, Christophe; Albaric, Olivier; Plassiart, Georges; Ammersbach, Melanie; Liegeois, Jean-Louis

    2014-03-01

    From January 2010 to March 2013, a captive colony of 83 black kites (Milvus migrans subsp.) in France experienced increased mortality related to atherosclerosis with an incidence of 4.4% per year. On histopathology, all kites had advanced atherosclerotic lesions, with several birds presenting abdominal hemorrhage and aortic rupture. In January 2012, a dietary change was instituted and consisted of introducing fish into the kites' diet. During the following 15 mo, the plasma lipid profile was monitored as well as body weight, food offered, and flight activity. Total and low-density lipoprotein cholesterol initially increased, but in December 2012 and March 2013, an overall decrease from initial values was observed. High-density lipoprotein cholesterol also increased during this period. Despite positive plasma lipid changes induced by dietary modifications, there was no decrease in mortality from atherosclerosis, which was probably associated with the severity of the atherosclerotic lesions at time of dietary management. However, owing to the long and progressive development of atherosclerotic lesions, long-term beneficial effects are probable. This report suggests that black kites are particularly susceptible to atherosclerosis and aortic dissection in captivity. To prevent degenerative diseases associated with captivity in birds of prey, species-specific lifestyle and dietary requirements and susceptibility to these diseases should be considered.

  14. An Efficient Numerical Approach for Nonlinear Fokker-Planck equations

    NASA Astrophysics Data System (ADS)

    Otten, Dustin; Vedula, Prakash

    2009-03-01

    Fokker-Planck equations that are nonlinear with respect to their probability densities, which occur in many nonequilibrium systems relevant to mean-field interaction models, plasmas, and classical fermions and bosons, can be challenging to solve numerically. To address some underlying challenges in obtaining numerical solutions, we propose a quadrature-based moment method for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations. In this approach the distribution function is represented as a collection of Dirac delta functions with corresponding quadrature weights and locations, which are in turn determined from constraints based on the evolution of generalized moments. Properties of the distribution function can be obtained by solving transport equations for the quadrature weights and locations. We will apply this computational approach to a wide range of problems, including the Desai-Zwanzig model (for nonlinear muscular contraction) and multivariate nonlinear Fokker-Planck equations describing classical fermions and bosons, and will also demonstrate good agreement with results obtained from Monte Carlo and other standard numerical methods.

  15. Characterization of impulse noise and analysis of its effect upon correlation receivers

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Moore, J. D.

    1971-01-01

    A noise model is formulated to describe the impulse noise in many digital systems. A simplified model, which assumes that each noise burst contains a randomly weighted version of the same basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the crosscorrelation of the signal-set basis functions and the noise waveform. A procedure is established for extending the results for the simplified noise model to the general model. Unlike the performance results for Gaussian noise, it is shown that for impulse noise the error performance is affected by the choice of signal-set basis functions and that Orthogonal signaling is not equivalent to On-Off signaling with the same average energy.

  16. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    PubMed

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.

  17. Nonlinear data assimilation using synchronization in a particle filter

    NASA Astrophysics Data System (ADS)

    Rodrigues-Pinheiro, Flavia; Van Leeuwen, Peter Jan

    2017-04-01

    Current data assimilation methods still face problems in strongly nonlinear cases. A promising solution is a particle filter, which provides a representation of the model probability density function by a discrete set of particles. However, the basic particle filter does not work in high-dimensional cases. The performance can be improved by exploiting the freedom in choosing the proposal density. A potential choice of proposal density comes from synchronization theory, in which one tries to synchronize the model with the true evolution of a system using one-way coupling via the observations. In practice, an extra term is added to the model equations that damps the growth of instabilities on the synchronization manifold. When only part of the system is observed, synchronization can be achieved via a time embedding, similar to smoothers in data assimilation. In this work, two new ideas are tested. First, ensemble-based time embedding, similar to an ensemble smoother or 4DEnsVar, is used on each particle, avoiding the need for tangent-linear models and adjoint calculations. Tests were performed using the Lorenz96 model for 20-, 100- and 1000-dimensional systems. Results show state-averaged synchronization errors smaller than observation errors even in partly observed systems, suggesting that the scheme is a promising tool to steer model states toward the truth. Next, we combine these efficient particles using an extension of the Implicit Equal-Weights Particle Filter, a particle filter that ensures equal weights for all particles, avoiding filter degeneracy by construction. Promising results will be shown for low- and high-dimensional Lorenz96 models, and the pros and cons of these new ideas will be discussed.

  18. Composition and structure of the Chironomidae (Insecta: Diptera) community associated with bryophytes in a first-order stream in the Atlantic forest, Brazil.

    PubMed

    Rosa, B F J V; Dias-Silva, M V D; Alves, R G

    2013-02-01

    This study describes the structure of the Chironomidae community associated with bryophytes in a first-order stream located in a biological reserve of the Atlantic Forest, during two seasons. Samples of bryophytes adhering to rocks along a 100-m stretch of the stream were removed with a metal blade, and 200-mL pots were filled with the samples. The numerical density (individuals per gram of dry weight), Shannon's diversity index, Pielou's evenness index, the dominance index (DI), and estimated richness were calculated for each collection period (dry and rainy). Linear regression analysis was employed to test for a correlation between rainfall and the density and richness of individuals. The high numerical density and richness of Chironomidae taxa observed are probably related to the peculiar conditions of the bryophyte habitat. The retention of larvae during periods of higher rainfall contributed to the high density and richness of Chironomidae larvae. The rarefaction analysis showed higher richness in the rainy season, related to the greater retention of food particles. The data from this study show that bryophytes provide stable habitats for colonization by, and refuge of, Chironomidae larvae, mainly under conditions of faster water flow and higher precipitation.

  19. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression.

    PubMed

    Meng, Yilin; Roux, Benoît

    2015-08-11

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
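
    A schematic of the regression idea, under the assumption that the per-window relation F(x) ~ -kT ln rho_i(x) - U_i(x) + C_i is stitched together by linear least squares over binned order-parameter values; the actual multivariate formulation in the paper may differ. Histograms and bias potentials below are synthetic placeholders.

```python
import numpy as np

kT = 0.6  # kcal/mol, illustrative
bins = np.linspace(-1.0, 1.0, 41)
centers = 0.5 * (bins[:-1] + bins[1:])
n_bins = centers.size

# Hypothetical inputs: biased histograms rho[i, bin] and harmonic bias potentials U[i, bin].
n_windows = 5
rho = np.random.default_rng(1).uniform(0.01, 1.0, (n_windows, n_bins))
U = np.array([0.5 * 10.0 * (centers - x0) ** 2 for x0 in np.linspace(-0.8, 0.8, n_windows)])

# Unknowns: binned free energies F_b and per-window offsets C_i.
# Each observation contributes one linear equation F_b - C_i = -kT*ln(rho_ib) - U_ib.
rows, cols, vals, rhs = [], [], [], []
eq = 0
for i in range(n_windows):
    for b in range(n_bins):
        rows += [eq, eq]
        cols += [b, n_bins + i]
        vals += [1.0, -1.0]
        rhs.append(-kT * np.log(rho[i, b]) - U[i, b])
        eq += 1

A = np.zeros((eq, n_bins + n_windows))
A[rows, cols] = vals
solution, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
F = solution[:n_bins] - solution[:n_bins].min()  # free energy profile up to a constant
print("free energy range:", F.min(), "to", F.max())
```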

  20. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression

    PubMed Central

    2015-01-01

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost. PMID:26574437

  1. Flight and ground tests of a very low density elastomeric ablative material

    NASA Technical Reports Server (NTRS)

    Olsen, G. C.; Chapman, A. J., III

    1972-01-01

    A very low density ablative material, a silicone-phenolic composite, was flight tested on a recoverable spacecraft launched by a Pacemaker vehicle system; in addition, it was tested in an arc heated wind tunnel at three conditions which encompassed most of the reentry heating conditions of the flight tests. The material was composed, by weight, of 71 percent phenolic spheres, 22.8 percent silicone resin, 2.2 percent catalyst, and 4 percent silica fibers. The tests were conducted to evaluate the ablator performance in both arc tunnel and flight tests and to determine the predictability of the ablator performance by using computed results from an existing one-dimensional numerical analysis. The flight tested ablator experienced only moderate surface recession and retained a smooth surface except for isolated areas where the char was completely removed, probably following reentry and prior to or during recovery. Analytical results show good agreement between arc tunnel and flight test results. The thermophysical properties used in the analysis are tabulated.

  2. Comparison of methods for estimating density of forest songbirds from point counts

    Treesearch

    Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey

    2011-01-01

    New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...

  3. Risk preferences, probability weighting, and strategy tradeoffs in wildfire management

    Treesearch

    Michael S. Hand; Matthew J. Wibbenmeyer; Dave Calkin; Matthew P. Thompson

    2015-01-01

    Wildfires present a complex applied risk management environment, but relatively little attention has been paid to behavioral and cognitive responses to risk among public agency wildfire managers. This study investigates responses to risk, including probability weighting and risk aversion, in a wildfire management context using a survey-based experiment administered to...

  4. Effects of sodium bicarbonate and albumin on the in vitro water-holding capacity and some physiological properties of Trigonella foenum graecum L. galactomannan in rats.

    PubMed

    Dakam, William; Shang, Judith; Agbor, Gabriel; Oben, Julius

    2007-03-01

    This study seeks to improve the beneficial effects of fenugreek (Trigonella foenum graecum L.) galactomannan (GM) in lowering the plasma lipid profile and weight. Three different combinations of diets were prepared with fenugreek GM--(a) fenugreek GM + water (GM); (b) fenugreek GM + sodium bicarbonate (GMB); and (c) fenugreek GM + bicarbonate + albumin (GMBA)--and their in vitro water retention capacity and in vivo lipid-lowering effect were studied. Distilled water and sodium bicarbonate were used as controls. The sodium bicarbonate significantly increased the in vitro water-holding capacity of fenugreek GM (49.1 +/- 8.7 vs. 21.6 +/- 0.9 g of water/g of dry weight, P < .01). Administration by oral intubation of the combination GMBA to male albino Wistar rats (250 mg/kg of body weight) over a 4-week period was the most effective in reducing body weight (-27.0 +/- 0.4%, P < .001). Within this period, the combinations GMBA and GMB brought about the most significant reduction in the levels of plasma total cholesterol (P < .005). The GMBA combination was also the most effective in reducing levels of plasma low-density lipoprotein cholesterol (P < .001) and the atherogenicity indices. GM, GMB, and GMBA brought about significant (P < .01, .001, and .001, respectively) increases in the plasma high-density lipoprotein cholesterol levels, with the highest increase coming with GMBA. A significant increase in plasma triglycerides (P < .05) was brought about by the GMBA combination, probably resulting from the rapid reduction of body weight observed. Food intake was reduced by GM, GMB, and GMBA, while water intake increased in that order. The GMB combination significantly reduced transit time (P < .01) compared to GM. On the other hand, GMB and GMBA improved glycemic control, compared to GM. We conclude that albumin and sodium bicarbonate have the ability to improve some beneficial physiological effects of fenugreek GM. This finding could have applications in the areas of human obesity, weight loss, and the control of blood lipids.

  5. Image-Based Modeling Reveals Dynamic Redistribution of DNA Damage into Nuclear Sub-Domains

    PubMed Central

    Costes, Sylvain V; Ponomarev, Artem; Chen, James L; Nguyen, David; Cucinotta, Francis A; Barcellos-Hoff, Mary Helen

    2007-01-01

    Several proteins involved in the response to DNA double strand breaks (DSB) form microscopically visible nuclear domains, or foci, after exposure to ionizing radiation. Radiation-induced foci (RIF) are believed to be located where DNA damage occurs. To test this assumption, we analyzed the spatial distribution of 53BP1, phosphorylated ATM, and γH2AX RIF in cells irradiated with high linear energy transfer (LET) radiation and low LET. Since energy is randomly deposited along high-LET particle paths, RIF along these paths should also be randomly distributed. The probability to induce DSB can be derived from DNA fragment data measured experimentally by pulsed-field gel electrophoresis. We used this probability in Monte Carlo simulations to predict DSB locations in synthetic nuclei geometrically described by a complete set of human chromosomes, taking into account microscope optics from real experiments. As expected, simulations produced DNA-weighted random (Poisson) distributions. In contrast, the distributions of RIF obtained as early as 5 min after exposure to high LET (1 GeV/amu Fe) were non-random. This deviation from the expected DNA-weighted random pattern can be further characterized by “relative DNA image measurements.” This novel imaging approach shows that RIF were located preferentially at the interface between high and low DNA density regions, and were more frequent than predicted in regions with lower DNA density. The same preferential nuclear location was also measured for RIF induced by 1 Gy of low-LET radiation. This deviation from random behavior was evident only 5 min after irradiation for phosphorylated ATM RIF, while γH2AX and 53BP1 RIF showed pronounced deviations up to 30 min after exposure. These data suggest that DNA damage–induced foci are restricted to certain regions of the nucleus of human epithelial cells. It is possible that DNA lesions are collected in these nuclear sub-domains for more efficient repair. PMID:17676951

  6. Link between Food Energy Density and Body Weight Changes in Obese Adults

    PubMed Central

    Stelmach-Mardas, Marta; Rodacki, Tomasz; Dobrowolska-Iwanek, Justyna; Brzozowska, Anna; Walkowiak, Jarosław; Wojtanowska-Krosniak, Agnieszka; Zagrodzki, Paweł; Bechthold, Angela; Mardas, Marcin; Boeing, Heiner

    2016-01-01

    Regulating the energy density of food could be used as a novel approach for successful body weight reduction in clinical practice. The aim of this study was to conduct a systematic review of the literature on the relationship between food energy density and body weight changes in obese adults to obtain solid evidence supporting this approach. The search process was based on the selection of publications in the English language listed in public databases. A meta-analysis was performed to combine individual study results. Thirteen experimental and observational studies were identified and included in the final analysis. The analyzed populations consisted of 3628 individuals aged 18 to 66 years. The studies varied greatly in terms of study populations, study design and applied dietary approaches. The meta-analysis revealed a significant association between low energy density foods and body weight reduction, i.e., −0.53 kg when low energy density foods were eaten (95% CI: −0.88, −0.19). In conclusion, this study adds evidence supporting the energy density of food as a simple but effective measure to manage weight in the obese with the aim of weight reduction. PMID:27104562

  7. Associations Between Dietary Energy Density in Mothers and Growth of Breastfeeding Infants During the First 4 Months of Life.

    PubMed

    Moradi, Maedeh; Maracy, Mohammad R; Esmaillzadeh, Ahmad; Surkan, Pamela J; Azadbakht, Leila

    2018-05-31

    Despite the overwhelming impact of dietary energy density on the quality of the entire diet, no research has investigated dietary energy density among lactating mothers. Hence, the present study was undertaken to assess the influence of maternal dietary energy density during lactation on infant growth. Three hundred healthy lactating mother-infant pairs were enrolled in the study. Detailed demographic information and dietary intake data were collected from the lactating mothers. Anthropometric features such as infant weight, height, and head circumference at birth and 2 and 4 months and the mother's pregnancy and postpartum weight and height were derived from health center records. Data on physical activity were reported using the International Physical Activity Questionnaire. After adjusting for confounding variables, infant weight, length, weight-for-height, and head circumference at birth, 2 months, and 4 months did not show significant differences among the four dietary energy density categories (all p values > 0.01). Our study showed no association between quartiles of dietary energy density among lactating mothers and infant weight, length, weight-for-height, or head circumference growth by 2 and 4 months of age.

  8. A unified framework for constructing, tuning and assessing photometric redshift density estimates in a selection bias setting

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Izbicki, R.; Lee, A. B.

    2017-07-01

    Photometric redshift estimation is an indispensable tool of precision cosmology. One problem that plagues the use of this tool in the era of large-scale sky surveys is that the bright galaxies that are selected for spectroscopic observation do not have properties that match those of (far more numerous) dimmer galaxies; thus, ill-designed empirical methods that produce accurate and precise redshift estimates for the former generally will not produce good estimates for the latter. In this paper, we provide a principled framework for generating conditional density estimates (i.e. photometric redshift PDFs) that takes into account selection bias and the covariate shift that this bias induces. We base our approach on the assumption that the probability that astronomers label a galaxy (i.e. determine its spectroscopic redshift) depends only on its measured (photometric and perhaps other) properties x and not on its true redshift. With this assumption, we can explicitly write down risk functions that allow us to both tune and compare methods for estimating importance weights (i.e. the ratio of densities of unlabelled and labelled galaxies for different values of x) and conditional densities. We also provide a method for combining multiple conditional density estimates for the same galaxy into a single estimate with better properties. We apply our risk functions to an analysis of ≈10^6 galaxies, mostly observed by the Sloan Digital Sky Survey, and demonstrate through multiple diagnostic tests that our method achieves good conditional density estimates for the unlabelled galaxies.
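
    One common way to estimate such importance weights is the classifier-based density-ratio trick sketched below; it is offered as an illustration of the quantity being tuned, not as the specific estimator or risk functions used in the paper. Features and sample sizes are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical photometric features x for labelled (spectroscopic) and unlabelled galaxies.
x_lab = rng.normal(loc=0.0, scale=1.0, size=(2000, 3))   # brighter, labelled
x_unl = rng.normal(loc=0.5, scale=1.2, size=(8000, 3))   # dimmer, unlabelled

# Train a classifier to separate labelled (y=0) from unlabelled (y=1) galaxies.
X = np.vstack([x_lab, x_unl])
y = np.r_[np.zeros(len(x_lab)), np.ones(len(x_unl))]
clf = LogisticRegression(max_iter=1000).fit(X, y)

# w(x) = f_unl(x) / f_lab(x) is proportional to P(unlabelled|x) / P(labelled|x),
# corrected for the class ratio.
p = clf.predict_proba(x_lab)[:, 1]
w = (p / (1.0 - p)) * (len(x_lab) / len(x_unl))
w /= w.mean()  # normalise so the weighted labelled sample mimics the unlabelled one
print("importance weight range:", w.min(), "to", w.max())
```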

  9. Evolution of increased adult longevity in Drosophila melanogaster populations selected for adaptation to larval crowding.

    PubMed

    Shenoi, V N; Ali, S Z; Prasad, N G

    2016-02-01

    In holometabolous animals such as Drosophila melanogaster, larval crowding can affect a wide range of larval and adult traits. Adults emerging from high larval density cultures have smaller body size and increased mean life span compared to flies emerging from low larval density cultures. Therefore, adaptation to larval crowding could potentially affect adult longevity as a correlated response. We addressed this issue by studying a set of large, outbred populations of D. melanogaster, experimentally evolved for adaptation to larval crowding for 83 generations. We assayed longevity of adult flies from both selected (MCUs) and control populations (MBs) after growing them at different larval densities. We found that MCUs have evolved increased mean longevity compared to MBs at all larval densities. The interaction between selection regime and larval density was not significant, indicating that the density dependence of mean longevity had not evolved in the MCU populations. The increase in longevity in MCUs can be partially attributed to their lower rates of ageing. It is also noteworthy that the reaction norm of dry body weight, a trait probably under direct selection in our populations, has indeed evolved in MCU populations. To the best of our knowledge, this is the first report of the evolution of adult longevity as a correlated response of adaptation to larval crowding. © 2015 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2015 European Society For Evolutionary Biology.

  10. Spatial correlations and probability density function of the phase difference in a developed speckle-field: numerical and natural experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mysina, N Yu; Maksimova, L A; Ryabukho, V P

    Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for the speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments. (laser applications and other topics in quantum electronics)

  11. Factors Influencing Ball-Player Impact Probability in Youth Baseball

    PubMed Central

    Matta, Philip A.; Myers, Joseph B.; Sawicki, Gregory S.

    2015-01-01

    Background: Altering the weight of baseballs for youth play has been studied out of concern for player safety. Research has shown that decreasing the weight of baseballs may limit the severity of both chronic arm and collision injuries. Unfortunately, reducing the weight of the ball also increases its exit velocity, leaving pitchers and nonpitchers with less time to defend themselves. The purpose of this study was to examine impact probability for pitchers and nonpitchers. Hypothesis: Reducing the available time to respond by 10% (expected from reducing ball weight from 142 g to 113 g) would increase impact probability for pitchers and nonpitchers, and players’ mean simple response time would be a primary predictor of impact probability for all participants. Study Design: Nineteen subjects between the ages of 9 and 13 years performed 3 experiments in a controlled laboratory setting: a simple response time test, an avoidance response time test, and a pitching response time test. Methods: Each subject performed these tests in order. The simple reaction time test tested the subjects’ mean simple response time, the avoidance reaction time test tested the subjects’ ability to avoid a simulated batted ball as a fielder, and the pitching reaction time test tested the subjects’ ability to avoid a simulated batted ball as a pitcher. Results: Reducing the weight of a standard baseball from 142 g to 113 g led to a less than 5% increase in impact probability for nonpitchers. However, the results indicate that the impact probability for pitchers could increase by more than 25%. Conclusion: Pitching may greatly increase the amount of time needed to react and defend oneself from a batted ball. Clinical Relevance: Impact injuries to youth baseball players may increase if a 113-g ball is used. PMID:25984261

  12. Tree-average distances on certain phylogenetic networks have their weights uniquely determined.

    PubMed

    Willson, Stephen J

    2012-01-01

    A phylogenetic network N has vertices corresponding to species and arcs corresponding to direct genetic inheritance from the species at the tail to the species at the head. Measurements of DNA are often made on species in the leaf set, and one seeks to infer properties of the network, possibly including the graph itself. In the case of phylogenetic trees, distances between extant species are frequently used to infer the phylogenetic trees by methods such as neighbor-joining. This paper proposes a tree-average distance for networks more general than trees. The notion requires a weight on each arc measuring the genetic change along the arc. For each displayed tree the distance between two leaves is the sum of the weights along the path joining them. At a hybrid vertex, each character is inherited from one of its parents. We will assume that for each hybrid there is a probability that the inheritance of a character is from a specified parent. Assume that the inheritance events at different hybrids are independent. Then for each displayed tree there will be a probability that the inheritance of a given character follows the tree; this probability may be interpreted as the probability of the tree. The tree-average distance between the leaves is defined to be the expected value of their distance in the displayed trees. For a class of rooted networks that includes rooted trees, it is shown that the weights and the probabilities at each hybrid vertex can be calculated given the network and the tree-average distances between the leaves. Hence these weights and probabilities are uniquely determined. The hypotheses on the networks include that hybrid vertices have indegree exactly 2 and that vertices that are not leaves have a tree-child.
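
    A toy illustration of the tree-average distance for a network with a single hybrid vertex: the distance between two leaves is the probability-weighted average of their path lengths in the two displayed trees. Arc weights and the inheritance probability below are hypothetical.

```python
# Hedged toy example: one hybrid vertex inherits a character from parent 1 with
# probability a and from parent 2 with probability 1 - a, so the network displays
# two trees; the tree-average distance is the expectation over those trees.

def tree_average_distance(a, d_tree1, d_tree2):
    """Expected leaf-to-leaf distance over the two displayed trees."""
    return a * d_tree1 + (1.0 - a) * d_tree2

# Path lengths (sums of arc weights) between leaves X and Y in each displayed tree.
d_via_parent1 = 0.7 + 0.2 + 0.4   # path used when the hybrid keeps parent 1's arc
d_via_parent2 = 0.7 + 0.5 + 0.4   # path used when the hybrid keeps parent 2's arc

print(tree_average_distance(a=0.3, d_tree1=d_via_parent1, d_tree2=d_via_parent2))
```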

  13. Sprayable low density ablator and application process

    NASA Technical Reports Server (NTRS)

    Sharpe, M. H.; Hill, W. E.; Simpson, W. G.; Carter, J. M.; Brown, E. L.; King, H. M.; Schuerer, P. H.; Webb, D. D. (Inventor)

    1978-01-01

    A sprayable, low density ablative composition is described consisting essentially of: (1) 100 parts by weight of a mixture of 25-65% by weight of phenolic microballoons, 0-20% by weight of glass microballoons, 4-10% by weight of glass fibers, 25-45% by weight of an epoxy-modified polyurethane resin, 2-4% by weight of a bentonite dispersing aid, and 1-2% by weight of an alcohol activator for the bentonite; (2) 1-10 parts by weight of an aromatic amine curing agent; and (3) 200-400 parts by weight of a solvent.

  14. Design and Weighting Methods for a Nationally Representative Sample of HIV-infected Adults Receiving Medical Care in the United States-Medical Monitoring Project

    PubMed Central

    Iachan, Ronaldo; Johnson, Christopher H.; Harding, Richard L.; Kyle, Tonja; Saavedra, Pedro; Frazier, Emma L.; Beer, Linda; Mattson, Christine L.; Skarbinski, Jacek

    2016-01-01

    Background: Health surveys of the general US population are inadequate for monitoring human immunodeficiency virus (HIV) infection because the relatively low prevalence of the disease (<0.5%) leads to small subpopulation sample sizes. Objective: To collect a nationally and locally representative probability sample of HIV-infected adults receiving medical care to monitor clinical and behavioral outcomes, supplementing the data in the National HIV Surveillance System. This paper describes the sample design and weighting methods for the Medical Monitoring Project (MMP) and provides estimates of the size and characteristics of this population. Methods: To develop a method for obtaining valid, representative estimates of the in-care population, we implemented a cross-sectional, three-stage design that sampled 23 jurisdictions, then 691 facilities, then 9,344 HIV patients receiving medical care, using probability-proportional-to-size methods. The data weighting process followed standard methods, accounting for the probabilities of selection at each stage and adjusting for nonresponse and multiplicity. Nonresponse adjustments accounted for differing response at both facility and patient levels. Multiplicity adjustments accounted for visits to more than one HIV care facility. Results: MMP used a multistage stratified probability sampling design that was approximately self-weighting in each of the 23 project areas and nationally. The probability sample represents the estimated 421,186 HIV-infected adults receiving medical care during January through April 2009. Methods were efficient (i.e., induced small, unequal weighting effects and small standard errors for a range of weighted estimates). Conclusion: The information collected through MMP allows monitoring trends in clinical and behavioral outcomes and informs resource allocation for treatment and prevention activities. PMID:27651851

  15. Impact of Adult Weight, Density, and Age on Reproduction of Tenebrio molitor (Coleoptera: Tenebrionidae)

    USDA-ARS?s Scientific Manuscript database

    The impact of adult weight, age, and density on reproduction of Tenebrio molitor L. (Coleoptera: Tenebrionidae) was studied. The impact of adult weight on reproduction was determined in two ways: 1) counting the daily progeny of individual adult pairs of known weight and analyzing the data with line...

  16. Estimating tree bole and log weights from green densities measured with the Bergstrom Xylodensimeter.

    Treesearch

    Dale R. Waddell; Michael B. Lambert; W.Y. Pong

    1984-01-01

    The performance of the Bergstrom xylodensimeter, designed to measure the green density of wood, was investigated and compared with a technique that derived green densities from wood disk samples. In addition, log and bole weights of old-growth Douglas-fir and western hemlock were calculated by various formulas and compared with lifted weights measured with a load cell...

  17. Quasar probabilities and redshifts from WISE mid-IR through GALEX UV photometry

    NASA Astrophysics Data System (ADS)

    DiPompeo, M. A.; Bovy, J.; Myers, A. D.; Lang, D.

    2015-09-01

    Extreme deconvolution (XD) of broad-band photometric data can both separate stars from quasars and generate probability density functions for quasar redshifts, while incorporating flux uncertainties and missing data. Mid-infrared photometric colours are now widely used to identify hot dust intrinsic to quasars, and the release of all-sky WISE data has led to a dramatic increase in the number of IR-selected quasars. Using forced photometry on public WISE data at the locations of Sloan Digital Sky Survey (SDSS) point sources, we incorporate this all-sky data into the training of the XDQSOz models originally developed to select quasars from optical photometry. The combination of WISE and SDSS information is far more powerful than SDSS alone, particularly at z > 2. The use of SDSS+WISE photometry is comparable to the use of SDSS+ultraviolet+near-IR data. We release a new public catalogue of 5,537,436 (total; 3,874,639 weighted by probability) potential quasars with probability P_QSO > 0.2. The catalogue includes redshift probabilities for all objects. We also release an updated version of the publicly available set of codes to calculate quasar and redshift probabilities for various combinations of data. Finally, we demonstrate that this method of selecting quasars using WISE data is both more complete and efficient than simple WISE colour-cuts, especially at high redshift. Our fits verify that above z ˜ 3 WISE colours become bluer than the standard cuts applied to select quasars. Currently, the analysis is limited to quasars with optical counterparts, and thus cannot be used to find highly obscured quasars that WISE colour-cuts identify in significant numbers.

  18. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.

  19. Energy density of lake whitefish Coregonus clupeaformis in Lakes Huron and Michigan

    USGS Publications Warehouse

    Pothoven, S.A.; Nalepa, T.F.; Madenjian, C.P.; Rediske, R.R.; Schneeberger, P.J.; He, J.X.

    2006-01-01

    We collected lake whitefish Coregonus clupeaformis off Alpena and Tawas City, Michigan, USA in Lake Huron and off Muskegon, Michigan USA in Lake Michigan during 2002–2004. We determined energy density and percent dry weight for lake whitefish from both lakes and lipid content for Lake Michigan fish. Energy density increased with increasing fish weight up to 800 g, and then remained relatively constant with further increases in fish weight. Energy density, adjusted for weight, was lower in Lake Huron than in Lake Michigan for both small (≤800 g) and large fish (>800 g). Energy density did not differ seasonally for small or large lake whitefish or between adult male and female fish. Energy density was strongly correlated with percent dry weight and percent lipid content. Based on data from commercially caught lake whitefish, body condition was lower in Lake Huron than Lake Michigan during 1981–2003, indicating that the dissimilarity in body condition between the lakes could be long standing. Energy density and lipid content in 2002–2004 in Lake Michigan were lower than data for comparable sized fish collected in 1969–1971. Differences in energy density between lakes were attributed to variation in diet and prey energy content as well as factors that affect feeding rates such as lake whitefish density and prey abundance.

  20. Ratio-of-Mediator-Probability Weighting for Causal Mediation Analysis in the Presence of Treatment-by-Mediator Interaction

    ERIC Educational Resources Information Center

    Hong, Guanglei; Deutsch, Jonah; Hill, Heather D.

    2015-01-01

    Conventional methods for mediation analysis generate biased results when the mediator-outcome relationship depends on the treatment condition. This article shows how the ratio-of-mediator-probability weighting (RMPW) method can be used to decompose total effects into natural direct and indirect effects in the presence of treatment-by-mediator…

  1. Ratio-of-Mediator-Probability Weighting for Causal Mediation Analysis in the Presence of Treatment-by-Mediator Interaction

    ERIC Educational Resources Information Center

    Hong, Guanglei; Deutsch, Jonah; Hill, Heather D.

    2015-01-01

    Conventional methods for mediation analysis generate biased results when the mediator--outcome relationship depends on the treatment condition. This article shows how the ratio-of-mediator-probability weighting (RMPW) method can be used to decompose total effects into natural direct and indirect effects in the presence of treatment-by-mediator…

  2. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    ERIC Educational Resources Information Center

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.

  3. Coincidence probability as a measure of the average phase-space density at freeze-out

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Zalewski, K.

    2006-02-01

    It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.

  4. Multi-model ensemble hydrologic prediction using Bayesian model averaging

    NASA Astrophysics Data System (ADS)

    Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh

    2007-05-01

    A multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of a Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighing individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble. The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.
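
    The core of the BMA combination step can be sketched as a weighted mixture of per-model predictive densities; in the study the weights and spreads are estimated on a calibration period (e.g., by maximum likelihood/EM), which is not reproduced here. Numbers below are illustrative.

```python
import numpy as np

def bma_pdf(y, preds, weights, sigmas):
    """BMA predictive density: a weight-mixture of Gaussians centred on each model's prediction."""
    comps = [w * np.exp(-0.5 * ((y - f) / s) ** 2) / (s * np.sqrt(2 * np.pi))
             for f, w, s in zip(preds, weights, sigmas)]
    return np.sum(comps, axis=0)

preds   = np.array([120.0, 135.0, 128.0])  # streamflow forecasts from 3 models (hypothetical)
weights = np.array([0.5, 0.2, 0.3])        # BMA weights (sum to 1)
sigmas  = np.array([10.0, 15.0, 12.0])     # per-model predictive spreads

bma_mean = np.dot(weights, preds)          # expected BMA prediction
y_grid = np.linspace(80, 180, 201)
density = bma_pdf(y_grid, preds, weights, sigmas)
print("BMA mean forecast:", bma_mean)
```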

  5. Exploring the full natural variability of eruption sizes within probabilistic hazard assessment of tephra dispersal

    NASA Astrophysics Data System (ADS)

    Selva, Jacopo; Sandri, Laura; Costa, Antonio; Tonini, Roberto; Folch, Arnau; Macedonio, Giovanni

    2014-05-01

    The intrinsic uncertainty and variability associated with the size of the next eruption strongly affect short- to long-term tephra hazard assessment. Often, emergency plans are established accounting for the effects of one or a few representative scenarios (meant as a specific combination of eruptive size and vent position), selected with subjective criteria. On the other hand, probabilistic hazard assessments (PHA) consistently explore the natural variability of such scenarios. PHA for tephra dispersal needs the definition of eruptive scenarios (usually by grouping possible eruption sizes and vent positions in classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA results from combining simulations considering different volcanological and meteorological conditions through a weight given by their specific probability of occurrence. However, volcanological parameters, such as erupted mass, eruption column height and duration, bulk granulometry, and fraction of aggregates, typically encompass a wide range of values. Because of such variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. Here we propose a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with specific Probability Density Functions, and meteorological and volcanological inputs are chosen by using a stratified sampling method. This procedure avoids the bias introduced by selecting single representative scenarios and thus neglecting most of the intrinsic eruptive variability. When considering within-size-class variability, attention must be paid to appropriately weight events falling within the same size class. While a uniform weight for all the events belonging to a size class is the most straightforward idea, it implies a strong dependence on the thresholds dividing classes: under this choice, the largest event of a size class has a much larger weight than the smallest event of the subsequent size class. In order to overcome this problem, in this study we propose an innovative solution that smoothly links the weight variability within each size class to the variability among the size classes through a common power law and, simultaneously, respects the probability of different size classes conditional on the occurrence of an eruption. Embedding this procedure into the Bayesian Event Tree scheme enables tephra fall PHA, quantified through hazard curves and maps representing readable results applicable in planning risk mitigation actions, together with the quantification of its epistemic uncertainties. As examples, we analyze long-term tephra fall PHA at Vesuvius and Campi Flegrei. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used for exploring different meteorological conditions. The results obtained clearly show that PHA accounting for the whole natural variability significantly differs from that based on representative scenarios, as is common practice in volcanic hazard assessment.

  6. THE CHANDRA COSMOS-LEGACY SURVEY: THE z > 3 SAMPLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchesi, S.; Civano, F.; Urry, C. M.

    2016-08-20

    We present the largest high-redshift (3 < z < 6.85) sample of X-ray-selected active galactic nuclei (AGNs) on a contiguous field, using sources detected in the Chandra COSMOS-Legacy survey. The sample contains 174 sources, 87 with spectroscopic redshift and the other 87 with photometric redshift (z_phot). In this work, we treat z_phot as a probability-weighted sum of contributions, adding to our sample the contribution of sources with z_phot < 3 but a z_phot probability distribution > 0 at z > 3. We compute the number counts in the observed 0.5–2 keV band, finding a decline in the number of sources at z > 3 and constraining phenomenological models of the X-ray background. We compute the AGN space density at z > 3 in two different luminosity bins. At higher luminosities (log L(2–10 keV) > 44.1 erg s^−1), the space density declines exponentially, dropping by a factor of ∼20 from z ∼ 3 to z ∼ 6. The observed decline is ∼80% steeper at lower luminosities (43.55 erg s^−1 < log L(2–10 keV) < 44.1 erg s^−1) from z ∼ 3 to z ∼ 4.5. We study the space density evolution dividing our sample into optically classified Type 1 and Type 2 AGNs. At log L(2–10 keV) > 44.1 erg s^−1, unobscured and obscured objects may have different evolution with redshift, with the obscured component being three times higher at z ∼ 5. Finally, we compare our space density with predictions of quasar activation merger models, whose calibration is based on optically luminous AGNs. These models significantly overpredict the number of expected AGNs at log L(2–10 keV) > 44.1 erg s^−1 with respect to our data.

  7. Novel density-based and hierarchical density-based clustering algorithms for uncertain data.

    PubMed

    Zhang, Xianchao; Liu, Han; Zhang, Xiaotong

    2017-09-01

    Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
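
    The basic quantity underlying density-based clustering of uncertain data is the probability that two uncertain objects lie within eps of one another. The Monte Carlo estimate below illustrates the sampling-style approach that the paper replaces with a more accurate computation; the object distributions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_within_eps(mean_a, cov_a, mean_b, cov_b, eps, n_samples=10000):
    """Monte Carlo estimate of P(dist(A, B) <= eps) for two uncertain objects."""
    a = rng.multivariate_normal(mean_a, cov_a, n_samples)
    b = rng.multivariate_normal(mean_b, cov_b, n_samples)
    return np.mean(np.linalg.norm(a - b, axis=1) <= eps)

p = prob_within_eps([0.0, 0.0], np.eye(2) * 0.1,
                    [0.5, 0.2], np.eye(2) * 0.2, eps=0.8)
print("P(dist <= eps) ~", p)
```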

  8. A Weighted Configuration Model and Inhomogeneous Epidemics

    NASA Astrophysics Data System (ADS)

    Britton, Tom; Deijfen, Maria; Liljeros, Fredrik

    2011-12-01

    A random graph model with prescribed degree distribution and degree dependent edge weights is introduced. Each vertex is independently equipped with a random number of half-edges and each half-edge is assigned an integer valued weight according to a distribution that is allowed to depend on the degree of its vertex. Half-edges with the same weight are then paired randomly to create edges. An expression for the threshold for the appearance of a giant component in the resulting graph is derived using results on multi-type branching processes. The same technique also gives an expression for the basic reproduction number for an epidemic on the graph where the probability that a certain edge is used for transmission is a function of the edge weight (reflecting how closely 'connected' the corresponding vertices are). It is demonstrated that, if vertices with large degree tend to have large (small) weights on their edges and if the transmission probability increases with the edge weight, then it is easier (harder) for the epidemic to take off compared to a randomized epidemic with the same degree and weight distribution. A recipe for calculating the probability of a large outbreak in the epidemic and the size of such an outbreak is also given. Finally, the model is fitted to three empirical weighted networks of importance for the spread of contagious diseases and it is shown that R_0 can be substantially over- or underestimated if the correlation between degree and weight is not taken into account.
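
    A minimal generator for the model described above, assuming an illustrative degree distribution and a degree-dependent half-edge weight rule: half-edges are bucketed by weight and paired uniformly at random within each bucket, as the abstract specifies.

```python
import random
from collections import defaultdict

random.seed(0)
n = 1000
degrees = [random.choices([1, 2, 3, 5, 10], weights=[30, 30, 20, 15, 5])[0] for _ in range(n)]

def half_edge_weight(deg):
    # Illustrative degree-dependent weight distribution (higher degree -> typically lighter edges).
    return random.randint(1, max(1, 6 - min(deg, 5)))

# Assign an integer weight to every half-edge and bucket the stubs by weight.
buckets = defaultdict(list)
for v in range(n):
    for _ in range(degrees[v]):
        buckets[half_edge_weight(degrees[v])].append(v)

# Pair half-edges with the same weight uniformly at random.
edges = []
for w, stubs in buckets.items():
    random.shuffle(stubs)
    if len(stubs) % 2:
        stubs.pop()                    # discard one stub if the weight class has odd size
    for i in range(0, len(stubs), 2):
        edges.append((stubs[i], stubs[i + 1], w))

print(len(edges), "edges; mean weight", sum(e[2] for e in edges) / len(edges))
```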

  9. Quantum Jeffreys prior for displaced squeezed thermal states

    NASA Astrophysics Data System (ADS)

    Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin

    1999-09-01

    It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.

  10. The significance of placental ratios in pregnancies complicated by small for gestational age, preeclampsia, and gestational diabetes mellitus.

    PubMed

    Kim, Hee Sun; Cho, Soo Hyun; Kwon, Han Sung; Sohn, In Sook; Hwang, Han Sung

    2014-09-01

    This study aimed to evaluate the placental weight, volume, and density, and investigate the significance of placental ratios in pregnancies complicated by small for gestational age (SGA), preeclampsia (PE), and gestational diabetes mellitus (GDM). Two hundred and fifty-four pregnant women were enrolled from August 2005 through July 2013. Participants were divided into four groups: control (n=82), SGA (n=37), PE (n=102), and GDM (n=33). The PE group was classified as PE without intrauterine growth restriction (n=65) and PE with intrauterine growth restriction (n=37). Birth weight, placental weight, placental volume, placental density, and placental ratios including birth weight/placental weight ratio (BPW) and birth weight/placental volume ratio (BPV) were compared between groups. Birth weight, placental weight, and placental volume were lower in the SGA group than in the control group. However, the BPW and BPV did not differ between the two groups. Birth weight, placental weight, placental volume, BPW, and BPV were all significantly lower in the PE group than in the control group. Compared with the control group, birth weight, BPW, and BPV were higher in the GDM group, whereas placental weight and volume did not differ in the two groups. Placental density was not significantly different among the four groups. Placental ratios based on placental weight, placental volume, placental density, and birth weight are helpful in understanding the pathophysiology of complicated pregnancies. Moreover, they can be used as predictors of pregnancy complications.

  11. Identification of Stochastically Perturbed Autonomous Systems from Temporal Sequences of Probability Density Functions

    NASA Astrophysics Data System (ADS)

    Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing

    2018-03-01

    The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.

  12. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
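
    A compact sketch of the two-step recipe described above: spectrally filter white Gaussian noise to impose a power spectrum, then apply a memoryless CDF mapping to obtain the desired non-Gaussian amplitude distribution. The spectrum and target marginal are illustrative choices, and the mapping only approximately preserves the imposed correlations, which is the subtlety the paper addresses.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 256

# Step 1: colored Gaussian field with a Gaussian-shaped power spectrum (illustrative).
white = rng.standard_normal((N, N))
fx = np.fft.fftfreq(N)
FX, FY = np.meshgrid(fx, fx)
filt = np.exp(-(FX**2 + FY**2) / (2 * 0.02**2))          # amplitude filter ~ sqrt of target spectrum
colored = np.real(np.fft.ifft2(np.fft.fft2(white) * filt))
colored = (colored - colored.mean()) / colored.std()

# Step 2: memoryless transform to the desired marginal (here a gamma distribution).
u = stats.norm.cdf(colored)                               # uniform marginals, correlations roughly preserved
field = stats.gamma.ppf(u, a=2.0, scale=1.0)              # non-Gaussian amplitude statistics
print("field mean/var:", field.mean(), field.var())
```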

  13. Dietary energy density and body weight in adults and children: a systematic review.

    PubMed

    Pérez-Escamilla, Rafael; Obbagy, Julie E; Altman, Jean M; Essery, Eve V; McGrane, Mary M; Wong, Yat Ping; Spahn, Joanne M; Williams, Christine L

    2012-05-01

    Energy density is a relatively new concept that has been identified as an important factor in body weight control in adults and in children and adolescents. The Dietary Guidelines for Americans 2010 encourages consumption of an eating pattern low in energy density to manage body weight. This article describes the systematic evidence-based review conducted by the 2010 Dietary Guidelines Advisory Committee (DGAC), with support from the US Department of Agriculture's Nutrition Evidence Library, which resulted in this recommendation. An update to the committee's review was prepared for this article. PubMed was searched for English-language publications from January 1980 to May 2011. The literature review included 17 studies (seven randomized controlled trials, one nonrandomized controlled trial, and nine cohort studies) in adults and six cohort studies in children and adolescents. Based on this evidence, the 2010 Dietary Guidelines Advisory Committee concluded that strong and consistent evidence in adults indicates that dietary patterns relatively low in energy density improve weight loss and weight maintenance. In addition, the committee concluded that there was moderately strong evidence from methodologically rigorous longitudinal cohort studies in children and adolescents to suggest that there is a positive association between dietary energy density and increased adiposity. This review supports a relationship between energy density and body weight in adults and in children and adolescents such that consuming diets lower in energy density may be an effective strategy for managing body weight. Copyright © 2012 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  14. Gas sorption and barrier properties of polymeric membranes from molecular dynamics and Monte Carlo simulations.

    PubMed

    Cozmuta, Ioana; Blanco, Mario; Goddard, William A

    2007-03-29

    It is important for many industrial processes to design new materials with improved selective permeability properties. Besides diffusion, the molecule's solubility contributes largely to the overall permeation process. This study presents a method to calculate solubility coefficients of gases such as O2, H2O (vapor), N2, and CO2 in polymeric matrices from simulation methods (Molecular Dynamics and Monte Carlo) using first principle predictions. The generation and equilibration (annealing) of five polymer models (polypropylene, polyvinyl alcohol, polyvinyl dichloride, polyvinyl chloride-trifluoroethylene, and polyethylene terephthalate) are extensively described. For each polymer, the average density and Hansen solubilities over a set of ten samples compare well with experimental data. For polyethylene terephthalate, the average properties between a small (n = 10) and a large (n = 100) set are compared. Boltzmann averages and probability density distributions of binding and strain energies indicate that the smaller set is biased in sampling configurations with higher energies. However, the sample with the lowest cohesive energy density from the smaller set is representative of the average of the larger set. Density-wise, low molecular weight polymers tend to have on average lower densities. Infinite molecular weight samples do however provide a very good representation of the experimental density. Solubility constants calculated with two ensembles (grand canonical and Henry's constant) are equivalent within 20%. For each polymer sample, the solubility constant is then calculated using the faster (10x) Henry's constant ensemble (HCE) from 150 ps of NPT dynamics of the polymer matrix. The influence of various factors (bad contact fraction, number of iterations) on the accuracy of Henry's constant is discussed. To validate the calculations against experimental results, the solubilities of nitrogen and carbon dioxide in polypropylene are examined over a range of temperatures between 250 and 650 K. The magnitudes of the calculated solubilities agree well with experimental results, and the trends with temperature are predicted correctly. The HCE method is used to predict the solubility constants at 298 K of water vapor and oxygen. The water vapor solubilities follow more closely the experimental trend of permeabilities, both ranging over 4 orders of magnitude. For oxygen, the calculated values do not follow entirely the experimental trend of permeabilities, most probably because at this temperature some of the polymers are in the glassy regime and thus are diffusion dominated. Our study also concludes that large confidence limits are associated with the calculated Henry's constants. By investigating several factors (terminal ends of the polymer chains, void distribution, etc.), we conclude that the large confidence limits are intimately related to the polymer's conformational changes caused by thermal fluctuations and have to be regarded--at least at microscale--as a characteristic of each polymer and the nature of its interaction with the solute. Reducing the mobility of the polymer matrix as well as controlling the distribution of the free (occupiable) volume would act as mechanisms toward lowering both the gas solubility and the diffusion coefficients.

  15. A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain

    DTIC Science & Technology

    2015-05-18

    ...approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our... The project has two parts: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a...
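
    For reference, a bare-bones Gaussian kernel density estimate of the kind referred to above; the bandwidth and sample data below are illustrative, not taken from the report.

```python
import numpy as np

def kde(samples, grid, bandwidth):
    """Gaussian kernel density estimate evaluated on a grid."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * diffs**2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, 500)          # e.g. measured irradiance fluctuations (synthetic here)
grid = np.linspace(-4, 4, 200)
pdf_hat = kde(samples, grid, bandwidth=0.3)
print("estimated pdf integrates to ~", np.trapz(pdf_hat, grid))
```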

  16. Economic Choices Reveal Probability Distortion in Macaque Monkeys

    PubMed Central

    Lak, Armin; Bossaerts, Peter; Schultz, Wolfram

    2015-01-01

    Economic choices are largely determined by two principal elements, reward value (utility) and probability. Although nonlinear utility functions have been acknowledged for centuries, nonlinear probability weighting (probability distortion) was only recently recognized as a ubiquitous aspect of real-world choice behavior. Even when outcome probabilities are known and acknowledged, human decision makers often overweight low probability outcomes and underweight high probability outcomes. Whereas recent studies measured utility functions and their corresponding neural correlates in monkeys, it is not known whether monkeys distort probability in a manner similar to humans. Therefore, we investigated economic choices in macaque monkeys for evidence of probability distortion. We trained two monkeys to predict reward from probabilistic gambles with constant outcome values (0.5 ml or nothing). The probability of winning was conveyed using explicit visual cues (sector stimuli). Choices between the gambles revealed that the monkeys used the explicit probability information to make meaningful decisions. Using these cues, we measured probability distortion from choices between the gambles and safe rewards. Parametric modeling of the choices revealed classic probability weighting functions with inverted-S shape. Therefore, the animals overweighted low probability rewards and underweighted high probability rewards. Empirical investigation of the behavior verified that the choices were best explained by a combination of nonlinear value and nonlinear probability distortion. Together, these results suggest that probability distortion may reflect evolutionarily preserved neuronal processing. PMID:25698750
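
    The inverted-S weighting reported above can be illustrated with a standard one-parameter form such as the Tversky-Kahneman function below; this is an example parametrisation, not necessarily the one fitted in the study.

```python
import numpy as np

def weight(p, gamma=0.6):
    """Tversky-Kahneman weighting: overweights small p and underweights large p for gamma < 1."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.05, 0.25, 0.5, 0.75, 0.95):
    print(f"p = {p:.2f} -> w(p) = {weight(p):.3f}")
```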

  17. Economic choices reveal probability distortion in macaque monkeys.

    PubMed

    Stauffer, William R; Lak, Armin; Bossaerts, Peter; Schultz, Wolfram

    2015-02-18

    Economic choices are largely determined by two principal elements, reward value (utility) and probability. Although nonlinear utility functions have been acknowledged for centuries, nonlinear probability weighting (probability distortion) was only recently recognized as a ubiquitous aspect of real-world choice behavior. Even when outcome probabilities are known and acknowledged, human decision makers often overweight low probability outcomes and underweight high probability outcomes. Whereas recent studies measured utility functions and their corresponding neural correlates in monkeys, it is not known whether monkeys distort probability in a manner similar to humans. Therefore, we investigated economic choices in macaque monkeys for evidence of probability distortion. We trained two monkeys to predict reward from probabilistic gambles with constant outcome values (0.5 ml or nothing). The probability of winning was conveyed using explicit visual cues (sector stimuli). Choices between the gambles revealed that the monkeys used the explicit probability information to make meaningful decisions. Using these cues, we measured probability distortion from choices between the gambles and safe rewards. Parametric modeling of the choices revealed classic probability weighting functions with inverted-S shape. Therefore, the animals overweighted low probability rewards and underweighted high probability rewards. Empirical investigation of the behavior verified that the choices were best explained by a combination of nonlinear value and nonlinear probability distortion. Together, these results suggest that probability distortion may reflect evolutionarily preserved neuronal processing. Copyright © 2015 Stauffer et al.

  18. Cortical Dynamics in Presence of Assemblies of Densely Connected Weight-Hub Neurons

    PubMed Central

    Setareh, Hesam; Deger, Moritz; Petersen, Carl C. H.; Gerstner, Wulfram

    2017-01-01

    Experimental measurements of pairwise connection probability of pyramidal neurons together with the distribution of synaptic weights have been used to construct randomly connected model networks. However, several experimental studies suggest that both wiring and synaptic weight structure between neurons show statistics that differ from random networks. Here we study a network containing a subset of neurons which we call weight-hub neurons, that are characterized by strong inward synapses. We propose a connectivity structure for excitatory neurons that contain assemblies of densely connected weight-hub neurons, while the pairwise connection probability and synaptic weight distribution remain consistent with experimental data. Simulations of such a network with generalized integrate-and-fire neurons display regular and irregular slow oscillations akin to experimentally observed up/down state transitions in the activity of cortical neurons with a broad distribution of pairwise spike correlations. Moreover, stimulation of a model network in the presence or absence of assembly structure exhibits responses similar to light-evoked responses of cortical layers in optogenetically modified animals. We conclude that a high connection probability into and within assemblies of excitatory weight-hub neurons, as it likely is present in some but not all cortical layers, changes the dynamics of a layer of cortical microcircuitry significantly. PMID:28690508

  19. Follicle-stimulating hormone and bioavailable estradiol are less important than weight and race in determining bone density in younger postmenopausal women

    PubMed Central

    Preisser, J. S.; Hammett-Stabler, C. A.; Renner, J. B.; Rubin, J.

    2011-01-01

    Summary The association between follicle-stimulating hormone (FSH) and bone density was tested in 111 postmenopausal women aged 50–64 years. In the multivariable analysis, weight and race were important determinants of bone mineral density. FSH, bioavailable estradiol, and other hormonal variables did not show statistically significant associations with bone density at any site. Introduction FSH has been associated with bone density loss in animal models and longitudinal studies of women. Most of these analyses have not considered the effect of weight or race. Methods We tested the association between FSH and bone density in younger postmenopausal women, adjusting for patient-related factors. In 111 postmenopausal women aged 50–64 years, areal bone mineral density (BMD) was measured at the lumbar spine, femoral neck, total hip, and distal radius using dual-energy X-ray absorptiometry, and volumetric BMD was measured at the distal radius using peripheral quantitative computed tomography (pQCT). Height, weight, osteoporosis risk factors, and serum hormonal factors were assessed. Results FSH inversely correlated with weight, bioavailable estradiol, areal BMD at the lumbar spine and hip, and volumetric BMD at the ultradistal radius. In the multivariable analysis, no hormonal variable showed a statistically significant association with areal BMD at any site. Weight was independently associated with BMD at all central sites (p<0.001), but not with BMD or pQCT measures at the distal radius. Race was independently associated with areal BMD at all sites (p≤0.008) and with cortical area at the 33% distal radius (p=0.004). Conclusions Correlations between FSH and bioavailable estradiol and BMD did not persist after adjustment for weight and race in younger postmenopausal women. Weight and race were more important determinants of bone density and should be included in analyses of hormonal influences on bone. PMID:21125395

  20. In healthy elderly postmenopausal women variations in BMD and BMC at various skeletal sites are associated with differences in weight and lean body mass rather than by variations in habitual physical activity, strength or VO2max.

    PubMed

    Schöffl, I; Kemmler, W; Kladny, B; Vonstengel, S; Kalender, W A; Engelke, K

    2008-01-01

    The objective of this study was an integrated cross-sectional investigation of whether differences in bone mineral density in elderly postmenopausal women are associated with differences in habitual physical activity and unspecific exercise levels. Two hundred and ninety-nine elderly women (69 ± 3 years), without diseases or medication affecting bone metabolism, were investigated. The influence of weight, body composition and physical activity on BMD was measured at multiple sites using different techniques (DXA, QCT, and QUS). Physical activity and exercise level were assessed by questionnaire, maximum strength of the legs and aerobic capacity. Variations in physical activity or habitual exercise had no effect on bone. The only significant univariate relation between strength/VO2max and BMD/BMC that remained after adjusting for confounding variables was between arm BMD (DXA) and hand-grip strength. The most important variable for explaining BMD was weight and, for cortical BMC of the femur (QCT), lean body mass. Weight and lean body mass emerge as predominant predictors of BMD in normal elderly women, whereas the isolated effect of habitual physical activity, unspecific exercise participation, and muscle strength on bone parameters is negligible. Thus, an increase in the amount of habitual physical activity will probably have no beneficial impact on bone.

  1. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Treesearch

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
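
    The abstract is truncated, but the standard construction behind length- and area-biased sampling is the size-biased density shown below; the exponent a and the notation are the usual textbook choices, not details quoted from the paper.

```latex
f_a^{*}(x) \;=\; \frac{x^{a}\, f(x)}{\mu_a'}, \qquad
\mu_a' \;=\; \int_0^{\infty} x^{a} f(x)\, dx \;=\; \mathrm{E}[X^{a}],
\qquad a = 1 \text{ (length-biased)}, \quad a = 2 \text{ (area-biased)}.
```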

  2. Southern pine beetle infestation probability mapping using weights of evidence analysis

    Treesearch

    Jason B. Grogan; David L. Kulhavy; James C. Kroll

    2010-01-01

    Weights of Evidence (WofE) spatial analysis was used to predict probability of southern pine beetle (Dendroctonus frontalis) (SPB) infestation in Angelina, Nacogdoches, San Augustine and Shelby Co., TX. Thematic data derived from Landsat imagery (1974–2002 Landsat 1–7) were used. Data layers included: forest covertype, forest age, forest patch size...
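
    For reference, the weights in a weights-of-evidence analysis are usually defined from the conditional probabilities of observing an evidence layer B given presence (D) or absence of the event; the formulas below are the textbook definitions and are not quoted from this (truncated) abstract.

```latex
W^{+} \;=\; \ln \frac{P(B \mid D)}{P(B \mid \bar{D})}, \qquad
W^{-} \;=\; \ln \frac{P(\bar{B} \mid D)}{P(\bar{B} \mid \bar{D})}, \qquad
C \;=\; W^{+} - W^{-}.
```

    Under the usual conditional-independence assumption, the posterior log-odds of infestation in a map cell equal the prior log-odds plus the sum of W+ or W- over the evidence layers, according to whether each layer is present or absent in that cell.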

  3. A global logrank test for adaptive treatment strategies based on observational studies.

    PubMed

    Li, Zhiguo; Valenstein, Marcia; Pfeiffer, Paul; Ganoczy, Dara

    2014-02-28

    In studying adaptive treatment strategies, a natural question that is of paramount interest is whether there is any significant difference among all possible treatment strategies. When the outcome variable of interest is time-to-event, we propose an inverse probability weighted logrank test for testing the equivalence of a fixed set of pre-specified adaptive treatment strategies based on data from an observational study. The weights take into account both the possible selection bias in an observational study and the fact that the same subject may be consistent with more than one treatment strategy. The asymptotic distribution of the weighted logrank statistic under the null hypothesis is obtained. We show that, in an observational study where the treatment selection probabilities need to be estimated, the estimation of these probabilities does not have an effect on the asymptotic distribution of the weighted logrank statistic, as long as the estimation of the parameters in the models for these probabilities is n-consistent. Finite sample performance of the test is assessed via a simulation study. We also show in the simulation that the test can be pretty robust to misspecification of the models for the probabilities of treatment selection. The method is applied to analyze data on antidepressant adherence time from an observational database maintained at the Department of Veterans Affairs' Serious Mental Illness Treatment Research and Evaluation Center. Copyright © 2013 John Wiley & Sons, Ltd.
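
    A minimal sketch of the weighting step only, assuming a logistic model for the treatment selection probabilities; the simulated data, covariates, and single-stage strategy below are hypothetical and ignore the time-to-event and multi-stage aspects of the actual test.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))                   # baseline covariates
p_treat = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))
A = rng.binomial(1, p_treat)                  # observed initial treatment

# Step 1: estimate the treatment selection probabilities from the observed data.
ps_model = LogisticRegression().fit(X, A)
pi_hat = ps_model.predict_proba(X)[:, 1]      # P(A = 1 | X)

# Step 2: inverse probability weights for the strategy "start on treatment 1".
# Subjects whose observed treatment is consistent with the strategy receive
# weight 1 / P(observed treatment | X); the others contribute weight 0.
w_strategy1 = np.where(A == 1, 1.0 / pi_hat, 0.0)
print("mean weight among consistent subjects:", w_strategy1[A == 1].mean())
```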

  4. QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility

    NASA Astrophysics Data System (ADS)

    Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.

    2013-08-01

    One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision-making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps, i.e. the spatial probability of a future vent opening given the past eruptive activity of a volcano. This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source Geographic Information System Quantum GIS, that is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the user to select an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input datasets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is shown here through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
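
    A rough sketch of the two core operations described above: a Gaussian kernel density estimate of vent-opening probability and a weighted summation of PDFs from different datasets. The coordinates, bandwidths, and weights are invented for illustration and are not QVAST's defaults.

```python
import numpy as np

def gaussian_kde_2d(vents, grid_xy, bandwidth):
    """Isotropic 2-D Gaussian kernel density estimate.
    vents: (n, 2) past vent coordinates; grid_xy: (m, 2) evaluation points."""
    d2 = ((grid_xy[:, None, :] - vents[None, :, :]) ** 2).sum(axis=2)
    k = np.exp(-0.5 * d2 / bandwidth**2) / (2 * np.pi * bandwidth**2)
    return k.mean(axis=1)                    # average of kernels -> PDF on the grid

# hypothetical datasets: vents and fissures, each with its own bandwidth
rng = np.random.default_rng(1)
vents = rng.uniform(0, 10, size=(30, 2))
fissures = rng.uniform(0, 10, size=(12, 2))
grid = np.stack(np.meshgrid(np.linspace(0, 10, 50),
                            np.linspace(0, 10, 50)), axis=-1).reshape(-1, 2)

pdf_vents = gaussian_kde_2d(vents, grid, bandwidth=1.5)
pdf_fiss = gaussian_kde_2d(fissures, grid, bandwidth=2.0)

# total susceptibility: weighted sum of the individual PDFs (weights sum to 1)
susceptibility = 0.7 * pdf_vents + 0.3 * pdf_fiss
print(susceptibility.sum() * (10 / 49) ** 2)  # close to 1, minus mass outside the grid
```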

  5. Novel bacterial consortia isolated from plastic garbage processing areas demonstrated enhanced degradation for low density polyethylene.

    PubMed

    Skariyachan, Sinosh; Manjunatha, Vishal; Sultana, Subiya; Jois, Chandana; Bai, Vidya; Vasist, Kiran S

    2016-09-01

    This study aimed to formulate novel microbial consortia isolated from plastic garbage processing areas and thereby devise an eco-friendly approach for enhanced degradation of low-density polyethylene (LDPE). The LDPE degrading bacteria were screened and microbiologically characterized. The best isolates were formulated as bacterial consortia, and degradation efficiency was compared with the consortia formulated using known isolates obtained from the Microbial Culture Collection Centre (MTCC). The degradation products were analyzed by FTIR, GC-FID, tensile strength, and SEM. The bacterial consortia were characterized by 16S ribosomal DNA (rDNA) sequencing. The formulated bacterial consortia demonstrated 81 ± 4 and 38 ± 3 % of weight reduction for LDPE strips and LDPE pellets, respectively, over a period of 120 days. However, the consortia formulated by MTCC strains demonstrated 49 ± 4 and 20 ± 2 % of weight reduction for LDPE strips and pellets, respectively, for the same period. Furthermore, the three isolates in its individual application exhibited 70 ± 4, 68 ± 4, and 64 ± 4 % weight reduction for LDPE strips and 21 ± 2, 28 ± 2, 24 ± 2 % weight reduction for LDPE pellets over a period of 120 days (p < 0.05). The end product analysis showed structural changes and formation of bacterial film on degraded LDPE strips. The 16S rDNA characterization of bacterial consortia revealed that these organisms were novel strains and designated as Enterobacter sp. bengaluru-btdsce01, Enterobacter sp. bengaluru-btdsce02, and Pantoea sp. bengaluru-btdsce03. The current study thus suggests that industrial scale-up of these microbial consortia probably provides better insights for waste management of LDPE and similar types of plastic garbage.

  6. Detection and classification of interstitial lung diseases and emphysema using a joint morphological-fuzzy approach

    NASA Astrophysics Data System (ADS)

    Chang Chien, Kuang-Che; Fetita, Catalin; Brillet, Pierre-Yves; Prêteux, Françoise; Chang, Ruey-Feng

    2009-02-01

    Multi-detector computed tomography (MDCT) has high accuracy and specificity in volumetrically capturing serial images of the lung. It increases the capability of computerized classification for lung tissue in medical research. This paper proposes a three-dimensional (3D) automated approach based on mathematical morphology and fuzzy logic for quantifying and classifying interstitial lung diseases (ILDs) and emphysema. The proposed methodology is composed of several stages: (1) an image multi-resolution decomposition scheme based on a 3D morphological filter is used to detect and analyze the different density patterns of the lung texture. Then, (2) for each pattern in the multi-resolution decomposition, six features are computed, for which fuzzy membership functions define a probability of association with a pathology class. Finally, (3) for each pathology class, the probabilities are combined according to the weight assigned to each membership function and two threshold values are used to decide the final class of the pattern. The proposed approach was tested on 10 MDCT cases and the classification accuracy was: emphysema: 95%, fibrosis/honeycombing: 84% and ground glass: 97%.
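
    The abstract does not give the combination rule in detail; the sketch below only illustrates stage (3) as a weighted sum of fuzzy memberships followed by two decision thresholds, with all numbers invented for illustration.

```python
import numpy as np

# hypothetical membership values of one texture pattern for the class "emphysema",
# one value per feature, and the weight assigned to each membership function
memberships = np.array([0.9, 0.7, 0.8, 0.4, 0.6, 0.75])
weights     = np.array([0.3, 0.2, 0.2, 0.1, 0.1, 0.1])   # sum to 1

class_score = float(np.dot(weights, memberships))         # weighted combination

# two thresholds decide the final label (values are illustrative only)
T_ACCEPT, T_REJECT = 0.7, 0.4
if class_score >= T_ACCEPT:
    label = "emphysema"
elif class_score < T_REJECT:
    label = "not emphysema"
else:
    label = "uncertain"
print(round(class_score, 3), label)
```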

  7. Effect of bow-type initial imperfection on reliability of minimum-weight, stiffened structural panels

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Krishnamurthy, Thiagaraja; Sykes, Nancy P.; Elishakoff, Isaac

    1993-01-01

    Computations were performed to determine the effect of an overall bow-type imperfection on the reliability of structural panels under combined compression and shear loadings. A panel's reliability is the probability that it will perform the intended function - in this case, carry a given load without buckling or exceeding in-plane strain allowables. For a panel loaded in compression, a small initial bow can cause large bending stresses that reduce both the buckling load and the load at which strain allowables are exceeded; hence, the bow reduces the reliability of the panel. In this report, analytical studies on two stiffened panels quantified that effect. The bow is in the shape of a half-sine wave along the length of the panel. The size e of the bow at panel midlength is taken to be the single random variable. Several probability density distributions for e are examined to determine the sensitivity of the reliability to details of the bow statistics. In addition, the effects of quality control are explored with truncated distributions.

  8. Concepts and Bounded Rationality: An Application of Niestegge's Approach to Conditional Quantum Probabilities

    NASA Astrophysics Data System (ADS)

    Blutner, Reinhard

    2009-03-01

    Recently, Gerd Niestegge developed a new approach to quantum mechanics via conditional probabilities, which develops the well-known proposal to consider the Lüders-von Neumann measurement as a non-classical extension of probability conditionalization. I will apply his powerful and rigorous approach to the treatment of concepts using a geometrical model of meaning. In this model, instances are treated as vectors of a Hilbert space H. In the present approach there are at least two possibilities to form categories. The first possibility sees categories as a mixture of their instances (described by a density matrix). In the simplest case we get the classical probability theory including the Bayesian formula. The second possibility sees categories formed by a distinctive prototype which is the superposition of the (weighted) instances. The construction of prototypes can be seen as transferring a mixed quantum state into a pure quantum state, freezing the probabilistic characteristics of the superposed instances into the structure of the formed prototype. Closely related to the idea of forming concepts by prototypes is the existence of interference effects. Such interference effects are typically found in macroscopic quantum systems and I will discuss them in connection with several puzzles of bounded rationality. The present approach nicely generalizes earlier proposals made by authors such as Diederik Aerts, Andrei Khrennikov, Ricardo Franco, and Jerome Busemeyer. Concluding, I will suggest that an active dialogue between cognitive approaches to logic and semantics and the modern approach of quantum information science is mandatory.

  9. Probability density and exceedance rate functions of locally Gaussian turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1989-01-01

    A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.

  10. Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?

    USGS Publications Warehouse

    Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.

    2005-01-01

    In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease more strongly increases variability in host abundance and, thus, the probability of quasi-extinction, than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. © 2005 by the Ecological Society of America.
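
    For concreteness, a stochastic Ricker model with lognormal environmental noise is sketched below together with a Monte Carlo estimate of quasi-extinction probability. This is the generic self-limitation benchmark referred to above, not the authors' host-parasite model, and all parameter values are illustrative.

```python
import numpy as np

def ricker_quasi_extinction(n0=50, r=0.8, K=100, sigma=0.3,
                            threshold=10, t_max=50, n_sims=10_000, seed=2):
    """Probability that a stochastic Ricker population falls below `threshold`
    within `t_max` years (a simple density-dependent PVA sketch)."""
    rng = np.random.default_rng(seed)
    n = np.full(n_sims, float(n0))
    hit = np.zeros(n_sims, dtype=bool)
    for _ in range(t_max):
        eps = rng.normal(0.0, sigma, n_sims)        # environmental stochasticity
        n = n * np.exp(r * (1.0 - n / K) + eps)     # Ricker density dependence
        hit |= n < threshold
    return hit.mean()

print("P(quasi-extinction) =", ricker_quasi_extinction())
```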

  11. A method of estimating log weights.

    Treesearch

    Charles N. Mann; Hilton H. Lysons

    1972-01-01

    This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...
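
    The arithmetic is simply log volume times the local density index; the numbers below are invented to show the form of the calculation, not values from the paper.

```python
# hypothetical example: one log whose volume comes from a scaling table (assumed)
log_volume_ft3 = 55.0        # cubic feet
density_index = 62.0         # pounds per cubic foot, from weighed truckloads (assumed)

estimated_weight_lb = log_volume_ft3 * density_index
print(f"estimated log weight: {estimated_weight_lb:,.0f} lb")
```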

  12. Products of Chemistry: Alkanes: Abundant, Pervasive, Important, and Essential.

    ERIC Educational Resources Information Center

    Seymour, Raymond B.

    1989-01-01

    Discusses the history and commercialization of alkanes. Examines the nomenclature and uses of alkanes. Studies polymerization and several types of polyethylenes: low-density, high-density, low-molecular-weight, cross-linked, linear low-density, and ultrahigh-molecular-weight. Includes a glossary of hydrocarbon terms. (MVL)

  13. NASA Glenn Research Center Program in High Power Density Motors for Aeropropulsion

    NASA Technical Reports Server (NTRS)

    Brown, Gerald V.; Kascak, Albert F.; Ebihara, Ben; Johnson, Dexter; Choi, Benjamin; Siebert, Mark; Buccieri, Carl

    2005-01-01

    Electric drive of transport-sized aircraft propulsors, with electric power generated by fuel cells or turbo-generators, will require electric motors with much higher power density than conventional room-temperature machines. Cryogenic cooling of the motor windings by the liquid hydrogen fuel offers a possible solution, enabling motors with higher power density than turbine engines. Some context on weights of various systems, which is required to assess the problem, is presented. This context includes a survey of turbine engine weights over a considerable size range, a correlation of gear box weights and some examples of conventional and advanced electric motor weights. The NASA Glenn Research Center program for high power density motors is outlined and some technical results to date are presented. These results include current densities of 5,000 A per square centimeter achieved in cryogenic coils, finite element predictions compared to measurements of torque production in a switched reluctance motor, and initial tests of a cryogenic switched reluctance motor.

  14. Effects of Vertex Activity and Self-organized Criticality Behavior on a Weighted Evolving Network

    NASA Astrophysics Data System (ADS)

    Zhang, Gui-Qing; Yang, Qiu-Ying; Chen, Tian-Lun

    2008-08-01

    Effects of vertex activity have been analyzed on a weighted evolving network. The network is characterized by the probability distribution of vertex strength, each edge weight and evolution of the strength of vertices with different vertex activities. The model exhibits self-organized criticality behavior. The probability distribution of avalanche size for different network sizes is also shown. In addition, there is a power law relation between the size and the duration of an avalanche and the average of avalanche size has been studied for different vertex activities.

  15. Two-step estimation in ratio-of-mediator-probability weighted causal mediation analysis.

    PubMed

    Bein, Edward; Deutsch, Jonah; Hong, Guanglei; Porter, Kristin E; Qin, Xu; Yang, Cheng

    2018-04-15

    This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score-based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio-of-mediator-probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score-based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2-step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio-of-mediator-probability weighting analysis a solution to the 2-step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance-covariance matrix for the indirect effect and direct effect 2-step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score-based weighting. Copyright © 2018 John Wiley & Sons, Ltd.

  16. Improving effectiveness of systematic conservation planning with density data.

    PubMed

    Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant

    2015-08-01

    Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts. © 2015, Society for Conservation Biology.

  17. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  18. Continuous-time random-walk model for financial distributions

    NASA Astrophysics Data System (ADS)

    Masoliver, Jaume; Montero, Miquel; Weiss, George H.

    2003-02-01

    We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability densities for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar-Deutsche mark futures exchange, finding good agreement between theory and the observed data.
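
    A toy simulation of such a continuous-time random walk, with exponential pausing times and Laplace-distributed jump magnitudes chosen purely for illustration; the paper estimates these two densities from the exchange data rather than assuming them.

```python
import numpy as np

rng = np.random.default_rng(3)

# the two auxiliary densities of the CTRW (illustrative choices, not the fitted ones)
mean_wait = 30.0          # seconds between successive jumps (exponential pausing time)
jump_scale = 5e-4         # scale of Laplace-distributed log-price jumps

t_end = 6.5 * 3600        # one trading day
t, x = 0.0, 0.0
times, log_price = [0.0], [0.0]
while True:
    t += rng.exponential(mean_wait)     # draw the next pausing time
    if t > t_end:
        break
    x += rng.laplace(0.0, jump_scale)   # draw the next jump magnitude
    times.append(t)
    log_price.append(x)

print(len(times) - 1, "jumps; final log-return:", log_price[-1])
```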

  19. The Independent Effects of Phonotactic Probability and Neighbourhood Density on Lexical Acquisition by Preschool Children

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Lee, Su-Yeon

    2011-01-01

    The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighbourhood density, the number of phonologically similar words, in lexical acquisition. Two-word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability…

  20. Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers

    ERIC Educational Resources Information Center

    MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara

    2013-01-01

    Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…

  1. Monte Carlo method for computing density of states and quench probability of potential energy and enthalpy landscapes.

    PubMed

    Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth

    2007-05-21

    The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
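
    The two ingredients named in the abstract (importance sampling plus a multiplicative update factor) are also the core of the well-known Wang-Landau flat-histogram scheme, so the toy sketch below uses that scheme on a system of N independent two-level units, whose exact density of states is the binomial coefficient C(N, E). It illustrates the idea only and is not a reproduction of the authors' self-consistent method.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 10                                   # toy system: N two-level units, E = number excited
ln_g = np.zeros(N + 1)                   # running estimate of ln(density of states)
hist = np.zeros(N + 1)
state = rng.integers(0, 2, size=N)
E = int(state.sum())
ln_f = 1.0                               # ln of the multiplicative update factor

while ln_f > 1e-6:
    for _ in range(20_000):
        i = rng.integers(N)              # propose flipping one randomly chosen unit
        E_new = E + (1 - 2 * int(state[i]))
        # accept with prob min(1, g(E)/g(E_new)) -> drives a flat histogram in E
        if np.log(rng.random()) < ln_g[E] - ln_g[E_new]:
            state[i] ^= 1
            E = E_new
        ln_g[E] += ln_f                  # multiplicative update of the visited level
        hist[E] += 1
    if hist.min() > 0.8 * hist.mean():   # histogram "flat enough": shrink the factor
        ln_f /= 2.0
        hist[:] = 0

ln_g -= ln_g[0]                          # fix the normalisation g(E=0) = 1
print(np.round(np.exp(ln_g)))            # should be close to C(10, E): 1 10 45 120 ...
```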

  2. Understanding environmental DNA detection probabilities: A case study using a stream-dwelling char Salvelinus fontinalis

    USGS Publications Warehouse

    Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.

    2016-01-01

    Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that using the protocols in this study eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.

  3. Influence of distributed delays on the dynamics of a generalized immune system cancerous cells interactions model

    NASA Astrophysics Data System (ADS)

    Piotrowska, M. J.; Bodnar, M.

    2018-01-01

    We present a generalisation of the mathematical models describing the interactions between the immune system and tumour cells which takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system reaction to the presence of tumour cells but only postulate its general properties. We analyse basic mathematical properties of the considered model such as existence and uniqueness of the solutions. Next, we discuss the existence of the stationary solutions and analytically investigate their stability depending on the form of the considered probability densities, namely Erlang, triangular and uniform probability densities, either separated from zero or not. Particular instability results are obtained for a general type of probability density. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to the experimental data for the mice B-cell lymphoma, showing mean square errors at a comparable level. For the estimated sets of parameters we discuss the possibility of stabilisation of the tumour dormant steady state. Instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.
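
    The linear chain trick mentioned at the end is a standard identity: a distributed delay with an Erlang kernel can be rewritten exactly as a cascade of ordinary differential equations, which is what allows a standard ODE solver to be used. For an Erlang density with shape p and rate a, the auxiliary variables

```latex
y_j(t) \;=\; \int_0^{\infty} \frac{a^{j}\, u^{j-1} e^{-a u}}{(j-1)!}\; x(t-u)\, du,
\qquad j = 1,\dots,p,
\\[6pt]
\frac{dy_1}{dt} \;=\; a\,\bigl(x(t) - y_1(t)\bigr), \qquad
\frac{dy_j}{dt} \;=\; a\,\bigl(y_{j-1}(t) - y_j(t)\bigr), \quad j = 2,\dots,p,
```

    satisfy the chain of ODEs above, with the distributed-delay term of the model equal to y_p(t). The triangular and uniform kernels do not admit this exact finite-chain reduction, which is presumably why the authors derive dedicated numerical procedures for them.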

  4. MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.

    PubMed

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-21

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
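
    A minimal sketch of the first stage only, assuming Gaussian intensity models learned from the user-selected acceptance and rejection classes; the paper does not specify the form of the learned model, and the anisotropic diffusion and Sobolev snake stages are omitted. All data below are simulated.

```python
import numpy as np
from scipy.stats import norm

# hypothetical intensities of user-selected acceptance (tumor) and rejection
# (background) voxels on a single slice, used to learn 1-D Gaussian models
rng = np.random.default_rng(5)
tumor_samples = rng.normal(160, 15, 200)
background_samples = rng.normal(90, 25, 800)

mu_t, sd_t = tumor_samples.mean(), tumor_samples.std()
mu_b, sd_b = background_samples.mean(), background_samples.std()
prior_t = len(tumor_samples) / (len(tumor_samples) + len(background_samples))

def tumor_probability(volume):
    """Map an intensity volume into 'probability space' with Bayes' rule."""
    lik_t = norm.pdf(volume, mu_t, sd_t) * prior_t
    lik_b = norm.pdf(volume, mu_b, sd_b) * (1 - prior_t)
    return lik_t / (lik_t + lik_b)

volume = rng.normal(110, 40, size=(4, 64, 64))     # stand-in MRI volume
prob = tumor_probability(volume)
print(prob.shape, float(prob.min()), float(prob.max()))
```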

  5. MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes

    NASA Astrophysics Data System (ADS)

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-01

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  6. Competition between harvester ants and rodents in the cold desert

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landeen, D.S.; Jorgensen, C.D.; Smith, H.D.

    1979-09-30

    Local distribution patterns of three rodent species (Perognathus parvus, Peromyscus maniculatus, Reithrodontomys megalotis) were studied in areas of high and low densities of harvester ants (Pogonomyrmex owyheei) in Raft River Valley, Idaho. Numbers of rodents were greatest in areas of high ant-density during May, but partially reduced in August; whereas, the trend was reversed in areas of low ant-density. Seed abundance was probably not the factor limiting changes in rodent populations, because seed densities of annual plants were always greater in areas of high ant-density. Differences in seasonal population distributions of rodents between areas of high and low ant-densities were probably due to interactions of seed availability, rodent energetics, and predation.

  7. Is Weight Training Safe during Pregnancy?

    ERIC Educational Resources Information Center

    Work, Janis A.

    1989-01-01

    Examines the opinions of several experts on the safety of weight training during pregnancy, noting that no definitive research on weight training alone has been done. Experts agree that low-intensity weight training probably poses no harm for mother or fetus; exercise programs should be individualized. (SM)

  8. Synthetic CT for MRI-based liver stereotactic body radiotherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Bredfeldt, Jeremy S.; Liu, Lianli; Feng, Mary; Cao, Yue; Balter, James M.

    2017-04-01

    A technique for generating MRI-derived synthetic CT volumes (MRCTs) is demonstrated in support of adaptive liver stereotactic body radiation therapy (SBRT). Under IRB approval, 16 subjects with hepatocellular carcinoma were scanned using a single MR pulse sequence (T1 Dixon). Air-containing voxels were identified by intensity thresholding on T1-weighted, water and fat images. The envelope of the anterior vertebral bodies was segmented from the fat image and fuzzy-C-means (FCM) was used to classify each non-air voxel as mid-density, lower-density, bone, or marrow in the abdomen, with only bone and marrow classified within the vertebral body envelope. MRCT volumes were created by integrating the product of the FCM class probability with its assigned class density for each voxel. MRCTs were deformably aligned with corresponding planning CTs and 2-ARC-SBRT-VMAT plans were optimized on MRCTs. Fluence was copied onto the CT density grids, dose recalculated, and compared. The liver, vertebral bodies, kidneys, spleen and cord had median Hounsfield unit differences of less than 60. Median target dose metrics were all within 0.1 Gy with maximum differences less than 0.5 Gy. OAR dose differences were similarly small (median: 0.03 Gy, std:0.26 Gy). Results demonstrate that MRCTs derived from a single abdominal imaging sequence are promising for use in SBRT dose calculation.
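
    The voxel-wise synthesis step reduces to a probability-weighted sum of the assigned class densities; the class probabilities and Hounsfield values below are invented for illustration and are not the values used in the study.

```python
import numpy as np

# hypothetical fuzzy-c-means class probabilities for three voxels
# (columns: mid-density, lower-density, bone, marrow) and assumed class densities in HU
class_prob = np.array([[0.80, 0.15, 0.03, 0.02],
                       [0.10, 0.05, 0.70, 0.15],
                       [0.25, 0.60, 0.05, 0.10]])
class_hu = np.array([40.0, -80.0, 700.0, -20.0])   # illustrative values only

# synthetic CT value = probability-weighted sum of the class densities
mrct_hu = class_prob @ class_hu
print(mrct_hu)   # one synthetic-CT HU value per voxel
```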

  9. Genetic evaluation of weaning weight and probability of lambing at 1 year of age in Targhee lambs

    USDA-ARS?s Scientific Manuscript database

    The objective of this study was to investigate genetic control of 120-day weaning weight and the probability of lambing at 1 year of age in Targhee ewe lambs. Records of 5,967 ewe lambs born from 1989 to 2012 and first exposed to rams for breeding at approximately 7 months of age were analyzed. Reco...

  10. Cosmological measure with volume averaging and the vacuum energy problem

    NASA Astrophysics Data System (ADS)

    Astashenok, Artyom V.; del Popolo, Antonino

    2012-04-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative volume-averaging measure instead of volume weighting can explain why the cosmological constant is non-zero.

  11. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE PAGES

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...

    2017-08-25

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  12. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  13. Effects of snack consumption for 8 weeks on energy intake and body weight.

    PubMed

    Viskaal-van Dongen, M; Kok, F J; de Graaf, C

    2010-02-01

    Consumption of snacks might contribute to the obesity epidemic. It is not clear how the moment of consumption and energy density of snacks can influence the compensatory response to consumption of snacks in the long term. To investigate the effects of snack consumption for 8 weeks on changes in body weight, emphasizing on moment of consumption and energy density. In total, 16 men and 66 women (mean age 21.9 years (s.d. 0.3 year), mean body mass index 20.7 kg m(-2) (s.d. 0.2 kg m(-2))) were randomly assigned to one of four parallel groups in a 2 x 2 design: snacks consumed with or between meals and snacks having a low (<4 kJ g(-1)) or high (>12 kJ g(-1)) energy density. For 8 weeks, subjects consumed mandatory snacks that provided 25% of energy requirements on each day. Body weight, body composition, physical activity level (PAL) and energy intake were measured in week 1 and week 8. There were no differences in changes in body weight between the four groups. Moment of consumption (P=0.7), energy density (P=0.8) and interaction (P=0.09) did not influence body weight. Similarly, there were no differences in changes in body composition, PAL and energy intake between the four groups. Body weight after 8 weeks of snack consumption was not affected by moment of consumption and energy density of snacks. This finding suggests that consuming snacks that are high or low in energy density does not necessarily contribute to weight gain. Healthy, nonobese young adults may be able to maintain a normal body weight through an accurate compensation for the consumption of snacks.

  14. Redundancy and reduction: Speakers manage syntactic information density

    PubMed Central

    Florian Jaeger, T.

    2010-01-01

    A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141
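
    The information carried by a word (its surprisal) is defined from its contextual probability, which is the sense in which production is said to be probability-sensitive:

```latex
I(u_i) \;=\; -\log_2 P\bigl(u_i \mid u_1, \dots, u_{i-1}\bigr).
```

    Uniform Information Density is then the preference for utterance variants (for example, including or omitting the complementizer "that") that keep I(u_i) roughly constant across the signal rather than producing local peaks.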

  15. The difference between two random mixed quantum states: exact and asymptotic spectral analysis

    NASA Astrophysics Data System (ADS)

    Mejía, José; Zapata, Camilo; Botero, Alonso

    2017-01-01

    We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
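
    A small numerical sketch of the ensemble in question: each mixed state is obtained by partially tracing a Haar-random bipartite pure state, and the spectrum of the difference is computed directly. The dimensions are arbitrary, and the asymptotic formulas themselves are not reproduced here.

```python
import numpy as np

def random_reduced_density_matrix(n, m, rng):
    """Partial trace over the m-dimensional environment of a random
    bipartite pure state on C^n (x) C^m (induced measure)."""
    psi = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
    psi /= np.linalg.norm(psi)
    return psi @ psi.conj().T               # n x n reduced density matrix

rng = np.random.default_rng(6)
n, m = 8, 8
rho = random_reduced_density_matrix(n, m, rng)
sigma = random_reduced_density_matrix(n, m, rng)

eigs = np.linalg.eigvalsh(rho - sigma)      # spectrum of the difference matrix
trace_distance = 0.5 * np.abs(eigs).sum()
print(np.round(eigs, 3), "trace distance:", round(trace_distance, 3))
```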

  16. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
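
    A sketch of the fit-and-select workflow under stated assumptions: candidate densities are fitted by maximum likelihood and compared by AIC. The paper provides R code, whereas this sketch uses Python/scipy, the candidate set is arbitrary, and the depth data are simulated.

```python
import numpy as np
from scipy import stats

# hypothetical depth-use observations (m)
rng = np.random.default_rng(7)
depths = rng.gamma(shape=3.0, scale=0.25, size=200)

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "weibull": stats.weibull_min}
results = {}
for name, dist in candidates.items():
    params = dist.fit(depths, floc=0)            # MLE with location fixed at 0
    ll = np.sum(dist.logpdf(depths, *params))
    k = len(params) - 1                          # loc was not estimated
    results[name] = 2 * k - 2 * ll               # AIC

best = min(results, key=results.get)
print({k: round(v, 1) for k, v in results.items()}, "-> selected:", best)
```

    One common convention is to rescale the selected density so that its maximum equals 1, which turns the fitted PDF into the habitat suitability curve.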

  17. Neighbor-Dependent Ramachandran Probability Distributions of Amino Acids Developed from a Hierarchical Dirichlet Process Model

    PubMed Central

    Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.

    2010-01-01

    Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867

  18. On the number of infinite geodesics and ground states in disordered systems

    NASA Astrophysics Data System (ADS)

    Wehr, Jan

    1997-04-01

    We study first-passage percolation models and their higher dimensional analogs—models of surfaces with random weights. We prove that under very general conditions the number of lines or, in the second case, hypersurfaces which locally minimize the sum of the random weights is with probability one equal to 0 or with probability one equal to +∞. As corollaries we show that in any dimension d≥2 the number of ground states of an Ising ferromagnet with random coupling constants equals (with probability one) 2 or +∞. Proofs employ simple large-deviation estimates and ergodic arguments.

  19. Effects of genotype and population density on growth performance, carcass characteristics, and cost-benefits of broiler chickens in north central Nigeria.

    PubMed

    Yakubu, Abdulmojeed; Ayoade, John A; Dahiru, Yakubu M

    2010-04-01

    The influence of genotype and stocking densities on growth performance, carcass qualities, and cost-benefits of broilers were examined in a 28-day trial. Two hundred and seven 4-week-old birds each of Anak Titan and Arbor Acre hybrid broiler types were randomly assigned to three stocking density treatments of 8.3, 11.1, and 14.3 birds/m(2) in a 2 x 3 factorial arrangement. Final body weight, average weekly body weight and average weekly feed intake were affected (P < 0.05) by strain, with higher means recorded for Arbor Acres. However, average weekly body weight gain and feed conversion ratio were similar (P > 0.05) in both genetic groups. The effect of placement density on some growth parameters did not follow a linear trend. Arbor Acres had significantly (P < 0.05) higher relative (%) fasted body, carcass, back, neck, and wing weights compared to Anak Titans. Housing density effect (P < 0.05) was observed for relative (%) fasted body, shank, and wing weights of birds. However, the relative weights of visceral organs of birds were not significantly (P > 0.05) influenced by genotype and housing density. The economic analysis revealed that higher gross margin was recorded for Arbor Acres compared to Anak Titans (euro 2.76 versus euro 2.19; P < 0.05, respectively). Conversely, stocking rate did not exert any influence (P > 0.05) on profit margin. Genotype x stocking density interaction effect was significant for some of the carcass indices investigated. It is concluded that under sub-humid conditions of a tropical environment, the use of Arbor Acre genetic type as well as a placement density of 14.3 birds/m(2) appeared to be more profitable.

  20. Weighted Fuzzy Risk Priority Number Evaluation of Turbine and Compressor Blades Considering Failure Mode Correlations

    NASA Astrophysics Data System (ADS)

    Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong

    2014-06-01

    Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools for evaluating the reliability of systems. Although the single-failure-mode case can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to a lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain a fuzzy weighted geometric mean (FWGM) RPN for each single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority numbers (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the error of reliability analysis. Furthermore, for the purpose of quantitative analysis, probability importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definitions of probability weight and FRPN, resulting in a more accurate estimation than that of the traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method. The result provides some important insights on fatigue reliability analysis and risk priority assessment of structural systems under failure correlations.
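
    Stripped of the fuzzy and Copula layers, the core aggregation is a weighted geometric mean of the risk factors; the scores and importance weights below are invented solely to show the operation, not taken from the paper.

```python
import numpy as np

# illustrative crisp risk factor scores for one failure mode (scale 1-10)
severity, occurrence, detection = 8.0, 5.0, 3.0
w = np.array([0.5, 0.3, 0.2])             # assumed importance weights, sum to 1

scores = np.array([severity, occurrence, detection])
wgm_rpn = float(np.prod(scores ** w))     # weighted geometric mean RPN
print(round(wgm_rpn, 2))
```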

  1. Longitudinal changes in gestational weight gain and the association with intrauterine fetal growth.

    PubMed

    Hinkle, Stefanie N; Johns, Alicia M; Albert, Paul S; Kim, Sungduk; Grantz, Katherine L

    2015-07-01

    Total pregnancy weight gain has been associated with infant birthweight; however, most prior studies lacked repeat ultrasound measurements. Understanding of the longitudinal changes in maternal weight gain and intrauterine changes in fetal anthropometrics is limited. Prospective data from 1314 Scandinavian singleton pregnancies at high-risk for delivering small-for-gestational-age (SGA) were analyzed. Women had ≥1 (median 12) antenatal weight measurements. Ultrasounds were targeted at 17, 25, 33, and 37 weeks of gestation. Analyses involved a multi-step process. First, trajectories were estimated across gestation for maternal weight gain and fetal biometrics [abdominal circumference (AC, mm), biparietal diameter (BPD, mm), femur length (FL, mm), and estimated fetal weight (EFW, g)] using linear mixed models. Second, the association between maternal weight changes (per 5 kg) and corresponding fetal growth from 0 to 17, 17 to 28, and 28 to 37 weeks was estimated for each fetal parameter adjusting for prepregnancy body mass index, height, parity, chronic diseases, age, smoking, fetal sex, and weight gain up to the respective period as applicable. Third, the probability of fetal SGA, EFW <10th percentile, at the 3rd ultrasound was estimated across the spectrum of maternal weight gain rate by SGA status at the 2nd ultrasound. From 0 to 17 weeks, changes in maternal weight were most strongly associated with changes in BPD [β=0.51 per 5 kg (95%CI 0.26, 0.76)] and FL [β=0.46 per 5 kg (95%CI 0.26, 0.65)]. From 17 to 28 weeks, AC [β=2.92 per 5 kg (95%CI 1.62, 4.22)] and EFW [β=58.7 per 5 kg (95%CI 29.5, 88.0)] were more strongly associated with changes in maternal weight. Increased maternal weight gain was significantly associated with a reduced probability of intrauterine SGA; for a normal weight woman with SGA at the 2nd ultrasound, the probability of fetal SGA with a weight gain rate of 0.29 kg/w (10th percentile) was 59%, compared to 38% with a rate of 0.67 kg/w (90th percentile). Among women at high-risk for SGA, maternal weight gain was associated with fetal growth throughout pregnancy, but had a differential relationship with specific biometrics across gestation. For women with fetal SGA identified mid-pregnancy, increased antenatal weight gain was associated with a decreased probability of fetal SGA approximately 7 weeks later. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Angraini, Lily Maysari; Suparmi, Variani, Viska Inda

    2010-12-01

    SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented in terms of lowering and raising operators. The lowering and raising operators can be obtained from the relationship between the original Hamiltonian and the (super)potential. In this paper, SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the modified Poschl-Teller potential. The wave function and the probability density are simulated using the Delphi 7.0 programming language. Finally, the expectation values of quantum mechanical operators can be calculated analytically in integral form or from the probability density graphs produced by the program.
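
    As a rough illustration of the quantity being plotted, the sketch below evaluates the normalized probability density of the ground state of a modified Poschl-Teller well, assuming the standard form V(x) = -(hbar^2 a^2/2m) l(l+1) sech^2(a x), whose ground state is proportional to sech^l(a x). Normalization and an expectation value are obtained by numerical quadrature rather than by the SUSY operator algebra used in the paper, and the parameter values are hypothetical.

```python
import numpy as np

# Minimal sketch: ground-state probability density of the modified
# Poschl-Teller potential V(x) = -(hbar^2 a^2 / 2m) l(l+1) sech^2(a x),
# with (unnormalized) ground state psi_0(x) = sech(a x)^l.
# Normalization and <x^2> are computed by quadrature (hypothetical parameters).

a, l = 1.0, 2.0                        # width parameter and depth index
x = np.linspace(-10, 10, 4001)
psi = 1.0 / np.cosh(a * x) ** l        # unnormalized ground-state wave function
rho = psi ** 2
rho /= np.trapz(rho, x)                # normalize the probability density
x2_expect = np.trapz(x ** 2 * rho, x)  # expectation value <x^2>
print("integral of |psi|^2:", np.trapz(rho, x), " <x^2> =", x2_expect)
```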

  3. [Effect of 50 Hz 1.8 mT sinusoidal electromagnetic fields on bone mineral density in growing rats].

    PubMed

    Gao, Yu-Hai; Zhou, Yan-Feng; Li, Shao-Feng; Li, Wen-Yuan; Xi, Hui-Rong; Yang, Fang-Fang; Chen, Ke-Ming

    2017-12-25

    To study the effects of 50 Hz, 1.8 mT sinusoidal electromagnetic fields (SEMFs) on bone mineral density (BMD) in SD rats. Thirty 1-month-old SD rats weighing (110±10) g were randomly divided into a control group and an electromagnetic field group, with 15 rats in each group. The normal control group (50 Hz, 0 mT) and the sinusoidal electromagnetic field group (50 Hz, 1.8 mT) were treated for 1.5 h/d; body weight was recorded once a week and food intake was observed. Rats were anesthetized by intraperitoneal injection, and dual-energy X-ray absorptiometry was used to measure whole-body bone density as well as the bone density of the femur and vertebral body. Osteocalcin (OC) and tartrate-resistant acid phosphatase 5b (TRACP 5b) were detected by ELISA; the liver, kidney and uterus were weighed to calculate organ indices, and pathological changes were examined by HE staining. Compared with the control group, there was no significant change in weekly body weight or daily food intake, and no obvious change in whole-body bone density at 2 and 4 weeks; however, whole-body bone density and the bone density of the excised femur and vertebra were increased at 6 weeks. Expression of OC was increased, and TRACP 5b expression was decreased. No histological changes were observed in the liver, kidney or uterus, and the organ indices were unchanged. 50 Hz, 1.8 mT sinusoidal electromagnetic fields can promote bone formation, decrease factors related to bone resorption, and increase the peak bone density of young rats, providing a basis for clinical research on electromagnetic fields for the prevention of osteoporosis.

  4. Multi-Target Tracking Using an Improved Gaussian Mixture CPHD Filter.

    PubMed

    Si, Weijian; Wang, Liwei; Qu, Zhiyu

    2016-11-23

    The cardinalized probability hypothesis density (CPHD) filter is an alternative approximation to the full multi-target Bayesian filter for tracking multiple targets. However, although the joint propagation of the posterior intensity and cardinality distribution in its recursion allows more reliable estimates of the target number than the PHD filter, the CPHD filter suffers from the spooky effect where there exists arbitrary PHD mass shifting in the presence of missed detections. To address this issue in the Gaussian mixture (GM) implementation of the CPHD filter, this paper presents an improved GM-CPHD filter, which incorporates a weight redistribution scheme into the filtering process to modify the updated weights of the Gaussian components when missed detections occur. In addition, an efficient gating strategy that can adaptively adjust the gate sizes according to the number of missed detections of each Gaussian component is also presented to further improve the computational efficiency of the proposed filter. Simulation results demonstrate that the proposed method offers favorable performance in terms of both estimation accuracy and robustness to clutter and detection uncertainty over the existing methods.

  5. A randomized controlled trial for obesity and binge eating disorder: Low-energy-density dietary counseling and cognitive behavioral therapy

    PubMed Central

    Masheb, Robin M.; Grilo, Carlos M.; Rolls, Barbara J.

    2011-01-01

    The present study examined a dietary approach – lowering energy density – for producing weight loss in obese patients with binge eating disorder (BED) who also received cognitive-behavioral therapy (CBT) to address binge eating. Fifty consecutive participants were randomly assigned to either a six-month individual treatment of CBT plus a low-energy-density diet (CBT+ED) or CBT plus general nutrition counseling not related to weight loss (CBT+GN). Assessments occurred at six and twelve months. Eighty-six percent of participants completed treatment, and of these, 30% achieved at least a 5% weight loss, with rates of binge remission ranging from 55% to 75%. The two treatments did not differ significantly in weight loss or binge remission outcomes. Significant improvements were found for key dietary and metabolic outcomes, with CBT+ED producing significantly better dietary outcomes on energy density and on fruit and vegetable consumption than CBT+GN. Reductions in energy density and weight loss were significantly associated, providing evidence for the specificity of the treatment effect. These favorable outcomes, together with the finding that CBT+ED was significantly better than CBT+GN at reducing energy density and increasing fruit and vegetable consumption, suggest that low-energy-density dietary counseling has promise as an effective method for enhancing CBT for obese individuals with BED. PMID:22005587

  6. Suboptimal Decision Criteria Are Predicted by Subjectively Weighted Probabilities and Rewards

    PubMed Central

    Ackermann, John F.; Landy, Michael S.

    2014-01-01

    Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as ‘conservatism.’ We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject’s subjective utility (SU(c)) as a function of the criterion c, and the corresponding weighted optimal criteria (wcopt). Subjects’ criteria were not close to optimal relative to wcopt. The slope of SU (c) and of expected gain EG(c) at the neutral criterion corresponding to β = 1 were both predictive of subjects’ criteria. The slope of SU(c) was a better predictor of observers’ decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values. PMID:25366822
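
    For readers unfamiliar with Prospect Theory weighting functions, the sketch below evaluates the common one-parameter Tversky-Kahneman form, which overweights small probabilities and underweights large ones when gamma < 1. It is only an illustration of the kind of distorted probabilities referred to above, not the weighting functions actually fitted to these subjects, and the gamma values are hypothetical.

```python
import numpy as np

def tk_weight(p, gamma):
    """Tversky-Kahneman (1992) probability weighting function.

    For gamma < 1 it overweights small probabilities and underweights
    large ones, as assumed in Prospect Theory.
    """
    p = np.asarray(p, dtype=float)
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1.0 / gamma)

p = np.array([0.01, 0.1, 0.25, 0.5, 0.75, 0.9, 0.99])
for gamma in (0.6, 1.0):              # gamma = 1 recovers objective probabilities
    print(gamma, np.round(tk_weight(p, gamma), 3))
```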

  7. Suboptimal decision criteria are predicted by subjectively weighted probabilities and rewards.

    PubMed

    Ackermann, John F; Landy, Michael S

    2015-02-01

    Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as 'conservatism.' We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject's subjective utility (SU(c)) as a function of the criterion c, and the corresponding weighted optimal criteria (wc opt ). Subjects' criteria were not close to optimal relative to wc opt . The slope of SU(c) and of expected gain EG(c) at the neutral criterion corresponding to β = 1 were both predictive of the subjects' criteria. The slope of SU(c) was a better predictor of observers' decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values.

  8. Utility of inverse probability weighting in molecular pathological epidemiology.

    PubMed

    Liu, Li; Nevo, Daniel; Nishihara, Reiko; Cao, Yin; Song, Mingyang; Twombly, Tyler S; Chan, Andrew T; Giovannucci, Edward L; VanderWeele, Tyler J; Wang, Molin; Ogino, Shuji

    2018-04-01

    As one of the methodologies of causal inference, the inverse probability weighting (IPW) method has been utilized to address confounding and to account for missing data when subjects with missing data cannot be included in a primary analysis. The transdisciplinary field of molecular pathological epidemiology (MPE) integrates molecular pathological and epidemiological methods, and takes advantage of improved understanding of pathogenesis to generate stronger biological evidence of causality and to optimize strategies for precision medicine and prevention. Disease subtyping based on biomarker analysis of biospecimens is essential in MPE research. However, there are nearly always cases that lack subtype information due to the unavailability or insufficiency of biospecimens. To address this missing subtype data issue, we incorporated inverse probability weights into Cox proportional cause-specific hazards regression. The weight was the inverse of the probability of biomarker data availability, estimated from a model for biomarker data availability status. The strategy was illustrated in two example studies; each assessed alcohol intake or family history of colorectal cancer in relation to the risk of developing colorectal carcinoma subtypes classified by tumor microsatellite instability (MSI) status, using a prospective cohort study, the Nurses' Health Study. Logistic regression was used to estimate the probability of MSI data availability for each cancer case, with covariates of clinical features and family history of colorectal cancer. This application of IPW can reduce selection bias caused by nonrandom variation in biospecimen data availability. The integration of causal inference methods into the MPE approach is likely to have substantial potential to advance the field of epidemiology.
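
    The sketch below illustrates the general IPW recipe described above, assuming the pandas, scikit-learn and lifelines packages; the input file, column names and covariates are hypothetical stand-ins, not the Nurses' Health Study data or the authors' exact model.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

# Minimal sketch of inverse probability weighting for missing subtype data,
# assuming a DataFrame with hypothetical columns:
#   msi_available (1 if the biomarker was assayed), time, event,
#   exposure (e.g. alcohol intake), age, stage, family_history.
df = pd.read_csv("cohort.csv")  # hypothetical input file

# 1) Model the probability that subtype (MSI) data are available.
X = df[["age", "stage", "family_history"]]
avail_model = LogisticRegression(max_iter=1000).fit(X, df["msi_available"])
p_avail = avail_model.predict_proba(X)[:, 1]

# 2) Weight each case with available data by 1 / P(available | covariates).
df["ipw"] = 1.0 / p_avail
analyzed = df[df["msi_available"] == 1]

# 3) Fit a weighted Cox model for the cause-specific hazard of the subtype.
cph = CoxPHFitter()
cph.fit(analyzed[["time", "event", "exposure", "age", "ipw"]],
        duration_col="time", event_col="event", weights_col="ipw",
        robust=True)  # robust variance is advisable with weights
cph.print_summary()
```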

  9. From instinct to intellect: the challenge of maintaining healthy weight in the modern world.

    PubMed

    Peters, J C; Wyatt, H R; Donahoo, W T; Hill, J O

    2002-05-01

    The global obesity epidemic is being driven in large part by a mismatch between our environment and our metabolism. Human physiology developed to function within an environment where high levels of physical activity were needed in daily life and food was inconsistently available. For most of mankind's history, physical activity has 'pulled' appetite so that the primary challenge to the physiological system for body weight control was to obtain sufficient energy intake to prevent negative energy balance and body energy loss. The current environment is characterized by a situation whereby minimal physical activity is required for daily life and food is abundant, inexpensive, high in energy density and widely available. Within this environment, food intake 'pushes' the system, and the challenge to the control system becomes to increase physical activity sufficiently to prevent positive energy balance. There does not appear to be a strong drive to increase physical activity in response to excess energy intake and there appears to be only a weak adaptive increase in resting energy expenditure in response to excess energy intake. In the modern world, the prevailing environment constitutes a constant background pressure that promotes weight gain. We propose that the modern environment has taken body weight control from an instinctual (unconscious) process to one that requires substantial cognitive effort. In the current environment, people who are not devoting substantial conscious effort to managing body weight are probably gaining weight. It is unlikely that we would be able to build the political will to undo our modern lifestyle, to change the environment back to one in which body weight control again becomes instinctual. In order to combat the growing epidemic we should focus our efforts on providing the knowledge, cognitive skills and incentives for controlling body weight and at the same time begin creating a supportive environment to allow better management of body weight.

  10. Responsiveness-informed multiple imputation and inverse probability-weighting in cohort studies with missing data that are non-monotone or not missing at random.

    PubMed

    Doidge, James C

    2018-02-01

    Population-based cohort studies are invaluable to health research because of the breadth of data collection over time, and the representativeness of their samples. However, they are especially prone to missing data, which can compromise the validity of analyses when data are not missing at random. Having many waves of data collection presents opportunity for participants' responsiveness to be observed over time, which may be informative about missing data mechanisms and thus useful as an auxiliary variable. Modern approaches to handling missing data such as multiple imputation and maximum likelihood can be difficult to implement with the large numbers of auxiliary variables and large amounts of non-monotone missing data that occur in cohort studies. Inverse probability-weighting can be easier to implement but conventional wisdom has stated that it cannot be applied to non-monotone missing data. This paper describes two methods of applying inverse probability-weighting to non-monotone missing data, and explores the potential value of including measures of responsiveness in either inverse probability-weighting or multiple imputation. Simulation studies are used to compare methods and demonstrate that responsiveness in longitudinal studies can be used to mitigate bias induced by missing data, even when data are not missing at random.

  11. Comparison of dynamic treatment regimes via inverse probability weighting.

    PubMed

    Hernán, Miguel A; Lanoy, Emilie; Costagliola, Dominique; Robins, James M

    2006-03-01

    Appropriate analysis of observational data is our best chance to obtain answers to many questions that involve dynamic treatment regimes. This paper describes a simple method to compare dynamic treatment regimes by artificially censoring subjects and then using inverse probability weighting (IPW) to adjust for any selection bias introduced by the artificial censoring. The basic strategy can be summarized in four steps: 1) define two regimes of interest, 2) artificially censor individuals when they stop following one of the regimes of interest, 3) estimate inverse probability weights to adjust for the potential selection bias introduced by censoring in the previous step, 4) compare the survival of the uncensored individuals under each regime of interest by fitting an inverse probability weighted Cox proportional hazards model with the dichotomous regime indicator and the baseline confounders as covariates. In the absence of model misspecification, the method is valid provided data are available on all time-varying and baseline joint predictors of survival and regime discontinuation. We present an application of the method to compare the AIDS-free survival under two dynamic treatment regimes in a large prospective study of HIV-infected patients. The paper concludes by discussing the relative advantages and disadvantages of censoring/IPW versus g-estimation of nested structural models to compare dynamic regimes.

  12. Effects of stocking density, light and perches on broiler growth.

    PubMed

    Velo, Ramón; Ceular, Angel

    2017-02-01

    The aim of this study was to determine the effect of stocking density, light intensity and light color on broiler growth. The experiment consisted of four 35-day phases, during each of which 320 chickens were surveyed. The research was performed at stocking densities of four and six birds/m². Illuminances of 15 and 30 lx were obtained through commercial lamps with 4000 K and 6000 K color temperatures. Lighting was used 17 h a day, between 06.00 and 23.00 hours (17 L:7 D). The results showed a decrease in body, carcass, breast and thigh weights (P < 0.05) with the increase in stocking density. Body weight decreased by 10.5% and carcass weight decreased by 9.4% at the six birds/m² stocking density. Contrastingly, no differences were found for the tested light colors. Increasing illuminance from 15 to 30 lx caused a 1.9% decrease in body weight. The analysis of the effect of perches revealed that using perches significantly increased body weight (2.5%) and breast weight (11.8%). The interactions between light intensity or color and stocking density, and between light intensity and light color, were analyzed. © 2016 Japanese Society of Animal Science.

  13. Theoretical and material studies of thin-film electroluminescent devices

    NASA Technical Reports Server (NTRS)

    Summers, C. J.

    1989-01-01

    Thin-film electroluminescent (TFEL) devices are studied for a possible means of achieving a high resolution, light weight, compact video display panel for computer terminals or television screens. The performance of TFEL devices depends upon the probability of an electron impact exciting a luminescent center which in turn depends upon the density of centers present in the semiconductor layer, the possibility of an electron achieving the impact excitation threshold energy, and the collision cross section itself. Efficiency of such a device is presently very poor. It can best be improved by increasing the number of hot electrons capable of impact exciting a center. Hot electron distributions and a method for increasing the efficiency and brightness of TFEL devices (with the additional advantage of low voltage direct current operation) are investigated.

  14. The propagation of Lamb waves in multilayered plates: phase-velocity measurement

    NASA Astrophysics Data System (ADS)

    Grondel, Sébastien; Assaad, Jamal; Delebarre, Christophe; Blanquet, Pierrick; Moulin, Emmanuel

    1999-05-01

    Owing to the dispersive nature and complexity of the Lamb waves generated in a composite plate, the measurement of the phase velocities by using classical methods is complicated. This paper describes a measurement method based upon the spectrum-analysis technique, which allows one to overcome these problems. The technique consists of using the fast Fourier transform to compute the spatial power-density spectrum. Additionally, weighted functions are used to increase the probability of detecting the various propagation modes. Experimental Lamb-wave dispersion curves of multilayered plates are successfully compared with the analytical ones. This technique is expected to be a useful way to design composite parts integrating ultrasonic transducers in the field of health monitoring. Indeed, Lamb waves and particularly their velocities are very sensitive to defects.
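
    A minimal sketch of the underlying spectrum-analysis step is shown below: a two-dimensional FFT of signals recorded at equally spaced positions yields the spatial power-density spectrum in the frequency-wavenumber plane, from which the phase velocity of each mode follows as c_p = 2*pi*f/k along the spectral ridges. The windowing choice, the synthetic single-mode signal and the parameter values are illustrative assumptions, not the weighting functions or measurements of the paper.

```python
import numpy as np

def fk_spectrum(signals, dx, dt):
    """Spatial-temporal power-density spectrum (f-k spectrum).

    signals: 2-D array of shape (n_positions, n_samples), the response
             recorded at equally spaced points along the plate.
    dx, dt:  spatial and temporal sampling steps.
    Returns frequency axis, wavenumber axis and the power spectrum; the phase
    velocity of each mode follows from c_p = 2*pi*f / k at the ridges.
    """
    n_x, n_t = signals.shape
    # Window in both dimensions to reduce leakage and improve the chance
    # of resolving weak propagation modes.
    window = np.outer(np.hanning(n_x), np.hanning(n_t))
    spec = np.fft.fftshift(np.abs(np.fft.fft2(signals * window)) ** 2)
    freqs = np.fft.fftshift(np.fft.fftfreq(n_t, d=dt))
    wavenumbers = np.fft.fftshift(np.fft.fftfreq(n_x, d=dx)) * 2 * np.pi
    return freqs, wavenumbers, spec

# Synthetic single-mode example (hypothetical parameters)
dx, dt = 1e-3, 1e-7
x = np.arange(64)[:, None] * dx
t = np.arange(1024)[None, :] * dt
f0, c_p = 300e3, 2700.0                        # 300 kHz tone, 2700 m/s phase velocity
u = np.sin(2 * np.pi * f0 * (t - x / c_p))
f, k, S = fk_spectrum(u, dx, dt)
i, j = np.unravel_index(np.argmax(S), S.shape)  # peak in (wavenumber, frequency)
print("recovered phase velocity ~", abs(2 * np.pi * f[j] / k[i]), "m/s")
```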

  15. Method and composition for molding low density desiccant syntactic foam articles

    DOEpatents

    Lula, James W.; Schicker, James R.

    1984-01-01

    A method and a composition are provided for molding low density desiccant syntactic foam articles. A low density molded desiccant article may be made as a syntactic foam by blending a thermosetting resin, microspheres and molecular sieve desiccant powder, molding and curing. Such articles have densities of 0.2-0.9 g/cc, moisture capacities of 1-12% by weight, and can serve as light weight structural supports.

  16. Very Large Eddy Simulations of a Jet-A Spray Reacting Flow in a Single Element LDI Injector With and Without Invoking an Eulerian Scalar DWFDF Method

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2013-01-01

    This paper presents the very large eddy simulations (VLES) of a Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar DWFDF method, in which DWFDF is defined as the density weighted time filtered fine grained probability density function. The flow field is calculated by using the time filtered compressible Navier-Stokes equations (TFNS) with nonlinear subscale turbulence models, and when the Eulerian scalar DWFDF method is invoked, the energy and species mass fractions are calculated by solving the equation of DWFDF. A nonlinear subscale model for closing the convection term of the Eulerian scalar DWFDF equation is used and will be briefly described in this paper. Detailed comparisons between the results and available experimental data are carried out. Some positive findings of invoking the Eulerian scalar DWFDF method in both improving the simulation quality and maintaining economic computing cost are observed.

  17. A mass-density model can account for the size-weight illusion

    PubMed Central

    Wolf, Christian; Bergmann Tiest, Wouter M.; Drewing, Knut

    2018-01-01

    When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: One estimate derived from the object’s mass, and the other from the object’s density, with estimates’ weights based on their relative reliabilities. While information about mass can directly be perceived, information about density will in some cases first have to be derived from mass and volume. However, according to our model at the crucial perceptual level, heaviness judgments will be biased by the objects’ density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object’s density: Objects of the same density were perceived as more similar and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception. PMID:29447183

  18. A mass-density model can account for the size-weight illusion.

    PubMed

    Wolf, Christian; Bergmann Tiest, Wouter M; Drewing, Knut

    2018-01-01

    When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: One estimate derived from the object's mass, and the other from the object's density, with estimates' weights based on their relative reliabilities. While information about mass can directly be perceived, information about density will in some cases first have to be derived from mass and volume. However, according to our model at the crucial perceptual level, heaviness judgments will be biased by the objects' density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object's density: Objects of the same density were perceived as more similar and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception.
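
    A minimal sketch of the reliability-weighted averaging idea is given below; it uses the standard maximum-likelihood cue-combination weights (inverse variances) and, unlike the paper's full model, ignores the correlation between the two noise sources. All numerical values are hypothetical.

```python
import numpy as np

def combined_heaviness(h_mass, var_mass, h_density, var_density):
    """Reliability-weighted average of two heaviness estimates.

    Weights are the relative reliabilities (inverse variances), as in a
    standard maximum-likelihood cue-combination model; the correlated-noise
    term assumed in the paper is omitted here.
    """
    w_mass = (1.0 / var_mass) / (1.0 / var_mass + 1.0 / var_density)
    return w_mass * h_mass + (1.0 - w_mass) * h_density

# Hypothetical example: better volume information makes the density estimate
# more reliable, biasing the heaviness judgment further towards density.
for var_density in (4.0, 1.0, 0.25):
    print(var_density,
          combined_heaviness(h_mass=1.0, var_mass=1.0,
                             h_density=1.6, var_density=var_density))
```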

  19. Car accidents induced by a bottleneck

    NASA Astrophysics Data System (ADS)

    Marzoug, Rachid; Echab, Hicham; Ez-Zahraouy, Hamid

    2017-12-01

    Based on the Nagel-Schreckenberg (NS) model, we study the probability of car accidents occurring (Pac) at the entrance of the merging part of two roads (i.e. a junction). The simulation results show that the existence of non-cooperative drivers plays a chief role, increasing the risk of collisions at intermediate and high densities. Moreover, the impact of the speed limit in the bottleneck (Vb) on the probability Pac is also studied. This impact depends strongly on the density: increasing Vb enhances Pac at low densities, whereas it increases road safety at high densities. The phase diagram of the system is also constructed.
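
    A minimal single-lane sketch of this kind of study is given below: a standard Nagel-Schreckenberg ring simulation in which "dangerous situations" are counted with one criterion commonly used in this literature (the leader stops suddenly within reach of the follower). It does not include the two-road junction, the non-cooperative drivers or the speed-limited bottleneck of the paper, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ns_accident_prob(density, L=500, v_max=5, p_brake=0.3, steps=2000, warmup=500):
    """Single-lane Nagel-Schreckenberg ring with a simple accident criterion.

    Counts dangerous situations per car per step: the gap to the leader is at
    most v_max, the leader is moving now but stops at the next step (a sudden
    stop an inattentive follower could not avoid).
    """
    n_cars = int(density * L)
    pos = np.sort(rng.choice(L, n_cars, replace=False))
    vel = np.zeros(n_cars, dtype=int)
    dangerous = 0
    for step in range(steps):
        gaps = (np.roll(pos, -1) - pos - 1) % L       # empty cells to the leader
        vel_old = vel.copy()
        vel = np.minimum(vel + 1, v_max)              # acceleration
        vel = np.minimum(vel, gaps)                   # deceleration (no collision)
        vel = np.where(rng.random(n_cars) < p_brake,
                       np.maximum(vel - 1, 0), vel)   # random braking
        if step >= warmup:
            leader_prev = np.roll(vel_old, -1)
            leader_next = np.roll(vel, -1)
            dangerous += np.sum((gaps <= v_max) & (leader_prev > 0) & (leader_next == 0))
        pos = (pos + vel) % L                         # movement
    return dangerous / (n_cars * (steps - warmup))

for rho in (0.1, 0.3, 0.5):
    print(rho, ns_accident_prob(rho))
```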

  20. Modeling the Effect of Density-Dependent Chemical Interference Upon Seed Germination

    PubMed Central

    Sinkkonen, Aki

    2005-01-01

    A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:19330163

  1. Modeling the Effect of Density-Dependent Chemical Interference upon Seed Germination

    PubMed Central

    Sinkkonen, Aki

    2006-01-01

    A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:18648596

  2. Probability density of tunneled carrier states near heterojunctions calculated numerically by the scattering method.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wampler, William R.; Myers, Samuel M.; Modine, Normand A.

    2017-09-01

    The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, where computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.
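
    The scattering (transfer-matrix) idea can be illustrated with the simpler quantity below, the transmission probability through an arbitrary piecewise-constant potential profile; the paper's energy-dependent probability density of tunneled states is a related but more detailed output. Units, the material parameter and the barrier profile are illustrative assumptions.

```python
import numpy as np

HBAR2_OVER_2M = 3.81  # eV * Angstrom^2 for an electron (illustrative units)

def transmission(E, V, dx):
    """Transmission probability through a piecewise-constant potential V(x).

    V is sampled on a uniform grid with step dx (Angstrom); the energy E is
    assumed to lie above the potential in the first and last cells so the
    asymptotic states are propagating plane waves.  A 2x2 transfer matrix is
    composed across the profile; complex wavevectors handle the tunneling
    (evanescent) regions automatically.
    """
    k = np.sqrt((E - np.asarray(V, dtype=complex)) / HBAR2_OVER_2M)
    M = np.eye(2, dtype=complex)
    for j in range(len(k) - 1):
        r = k[j] / k[j + 1]
        interface = 0.5 * np.array([[1 + r, 1 - r],
                                    [1 - r, 1 + r]])        # psi, psi' continuity
        propagate = np.diag([np.exp(1j * k[j + 1] * dx),
                             np.exp(-1j * k[j + 1] * dx)])  # free propagation
        M = propagate @ interface @ M
    t = np.linalg.det(M) / M[1, 1]                          # transmitted amplitude
    return float((k[-1].real / k[0].real) * abs(t) ** 2)

# Hypothetical 0.3 eV, 20 Angstrom barrier between flat contact regions
V = np.concatenate([np.zeros(20), 0.3 * np.ones(20), np.zeros(20)])
for E in (0.05, 0.15, 0.25):
    print(E, "eV ->", round(transmission(E, V, dx=1.0), 4))
```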

  3. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  4. Spatial capture--recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture--recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture-recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture-recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
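
    A minimal sketch of the "ecological distance" ingredient is given below: least-cost path distances are computed with Dijkstra's algorithm over a rasterized resistance surface, which is the quantity that replaces Euclidean distance in the encounter probability model. The SCR likelihood itself is not shown, and the landscape, resistance values and source location are hypothetical.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def least_cost_distances(resistance, sources):
    """Least-cost (ecological) distances from source cells over a resistance grid.

    resistance: 2-D array of per-cell movement costs (1 behaves like
                Euclidean distance on a uniform grid).
    sources:    list of (row, col) cells, e.g. traps or activity centers.
    Edges connect 4-neighbouring cells with cost equal to the mean of the two
    cell resistances; Dijkstra's algorithm gives least-cost path distances.
    """
    nr, nc = resistance.shape
    idx = lambda r, c: r * nc + c
    graph = lil_matrix((nr * nc, nr * nc))
    for r in range(nr):
        for c in range(nc):
            for dr, dc in ((0, 1), (1, 0)):       # right and down neighbours
                r2, c2 = r + dr, c + dc
                if r2 < nr and c2 < nc:
                    w = 0.5 * (resistance[r, c] + resistance[r2, c2])
                    graph[idx(r, c), idx(r2, c2)] = w
    dist = dijkstra(graph.tocsr(), directed=False,
                    indices=[idx(r, c) for r, c in sources])
    return dist.reshape(len(sources), nr, nc)

# Hypothetical 20x20 landscape with a high-resistance barrier down the middle
res = np.ones((20, 20))
res[:, 9:11] = 10.0
d = least_cost_distances(res, sources=[(10, 2)])
print("grid (Euclidean-like) cost to (10, 17):", 15.0,
      " ecological distance:", d[0, 10, 17])
```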

  5. Neural response to reward anticipation under risk is nonlinear in probabilities.

    PubMed

    Hsu, Ming; Krajbich, Ian; Zhao, Chen; Camerer, Colin F

    2009-02-18

    A widely observed phenomenon in decision making under risk is the apparent overweighting of unlikely events and the underweighting of nearly certain events. This violates standard assumptions in expected utility theory, which requires that expected utility be linear (objective) in probabilities. Models such as prospect theory have relaxed this assumption and introduced the notion of a "probability weighting function," which captures the key properties found in experimental data. This study reports functional magnetic resonance imaging (fMRI) data showing that the neural response to expected reward is nonlinear in probabilities. Specifically, we found that activity in the striatum during valuation of monetary gambles is nonlinear in probabilities in the pattern predicted by prospect theory, suggesting that probability distortion is reflected at the level of the reward encoding process. The degree of nonlinearity reflected in individual subjects' decisions is also correlated with striatal activity across subjects. Our results shed light on the neural mechanisms of reward processing, and have implications for future neuroscientific studies of decision making involving extreme tails of the distribution, where probability weighting provides an explanation for commonly observed behavioral anomalies.

  6. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations are also computed from the two-cell joint PDF; they also compare very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.

  7. Plant Density Effect on Grain Number and Weight of Two Winter Wheat Cultivars at Different Spikelet and Grain Positions

    PubMed Central

    Ni, Yingli; Zheng, Mengjing; Yang, Dongqing; Jin, Min; Chen, Jin; Wang, Zhenlin; Yin, Yanping

    2016-01-01

    In winter wheat, grain development is asynchronous. The grain number and grain weight vary significantly at different spikelet and grain positions among wheat cultivars grown at different plant densities. In this study, two winter wheat (Triticum aestivum L.) cultivars, ‘Wennong6’ and ‘Jimai20’, were grown under four different plant densities for two seasons, in order to study the effect of plant density on the grain number and grain weight at different spikelet and grain positions. The results showed that the effects of spikelet and grain positions on grain weight varied with the grain number of spikelets. In both cultivars, the single-grain weight of the basal and middle two-grain spikelets was higher at the 2nd grain position than that at the 1st grain position, while the opposite occurred in the top two-grain spikelets. In the three-grain spikelets, the distribution of the single-grain weight was different between cultivars. In the four-grain spikelets of Wennong6, the single-grain weight was the highest at the 2nd grain position, followed by the 1st, 3rd, and 4th grain positions. Regardless of the spikelet and grain positions, the single-grain weight was the highest at the 1st and 2nd grain positions and the lowest at the 3rd and 4th grain positions. Overall, plant density affected the yield by controlling the seed-setting characteristics of the tiller spike. Therefore, wheat yield can be increased by decreasing the sterile basal and top spikelets and enhancing the grain weight at the 3rd and 4th grain positions, while maintaining it at the 1st and 2nd grain positions on the spikelet. PMID:27171343

  8. Word Recognition and Nonword Repetition in Children with Language Disorders: The Effects of Neighborhood Density, Lexical Frequency, and Phonotactic Probability

    ERIC Educational Resources Information Center

    Rispens, Judith; Baker, Anne; Duinmeijer, Iris

    2015-01-01

    Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…

  9. Ecology of a Maryland population of black rat snakes (Elaphe o. obsoleta)

    USGS Publications Warehouse

    Stickel, L.F.; Stickel, W.H.; Schmid, F.C.

    1980-01-01

    Behavior, growth and age of black rat snakes under natural conditions were investigated by mark-recapture methods at the Patuxent Wildlife Research Center for 22 years (1942-1963), with limited observations for 13 more years (1964-1976). Over the 35-year period, 330 snakes were recorded a total of 704 times. Individual home ranges remained stable for many years; male ranges averaged at least 600 m in diam and female ranges at least 500 m, each including a diversity of habitats, evidenced also in records of foods. Population density was low, probably less than 0.5 snake/ha. Peak activity of both sexes was in May and June, with a secondary peak in September. Large trees in the midst of open areas appeared to serve a significant functional role in the behavioral life pattern of the snake population. Male combat was observed three times in the field. Male snakes grew more rapidly than females, attained larger sizes and lived longer. Some individuals of both sexes probably lived 20 years or more. Weight-length relationships changed as the snakes grew and developed heavier bodies in proportion to length. Growth apparently continued throughout life. Some individuals, however, both male and female, stopped growing for periods of 1 or 2 years and then resumed, a condition probably related to poor health, suggested by skin ailments.

  10. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  11. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
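
    As a point of reference for the maximum entropy side of this comparison, the sketch below estimates a maximum entropy density by matching the first few sample moments, finding the Lagrange multipliers through the convex dual objective. It is a bounded-support, quadrature-based illustration, not the field-theory estimator or the software released with the paper, and the sample data and moment order are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_density(samples, n_moments=4, grid=None):
    """Maximum entropy density matching the first n_moments sample moments.

    The estimate has the exponential-family form
        p(x) = exp(-sum_k lambda_k * x**k) / Z,   k = 1..n_moments,
    and the multipliers minimize the convex dual
        log Z(lambda) + sum_k lambda_k * mu_k,
    where mu_k are the sample moments.  Quadrature uses a fixed grid.
    """
    if grid is None:
        grid = np.linspace(samples.min() - 1, samples.max() + 1, 2001)
    powers = np.vstack([grid ** k for k in range(1, n_moments + 1)])
    mu = np.array([np.mean(samples ** k) for k in range(1, n_moments + 1)])

    def dual_and_grad(lam):
        log_unnorm = -lam @ powers
        w = np.exp(log_unnorm - log_unnorm.max())           # stabilized weights
        Z = np.trapz(w, grid)
        log_Z = np.log(Z) + log_unnorm.max()
        expected = np.trapz(powers * w, grid, axis=1) / Z   # E_p[x**k]
        return log_Z + lam @ mu, mu - expected

    lam = minimize(dual_and_grad, x0=np.zeros(n_moments), jac=True, method="BFGS").x
    p = np.exp(-lam @ powers)
    return grid, p / np.trapz(p, grid)

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, size=5000)
x, p = maxent_density(samples, n_moments=4)
print("estimated density at 0:", p[np.argmin(np.abs(x))])  # ~0.40 for a standard normal
```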

  12. Money, kisses, and electric shocks: on the affective psychology of risk.

    PubMed

    Rottenstreich, Y; Hsee, C K

    2001-05-01

    Prospect theory's S-shaped weighting function is often said to reflect the psychophysics of chance. We propose an affective rather than psychophysical deconstruction of the weighting function resting on two assumptions. First, preferences depend on the affective reactions associated with potential outcomes of a risky choice. Second, even with monetary values controlled, some outcomes are relatively affect-rich and others relatively affect-poor. Although the psychophysical and affective approaches are complementary, the affective approach has one novel implication: Weighting functions will be more S-shaped for lotteries involving affect-rich than affect-poor outcomes. That is, people will be more sensitive to departures from impossibility and certainty but less sensitive to intermediate probability variations for affect-rich outcomes. We corroborated this prediction by observing probability-outcome interactions: An affect-poor prize was preferred over an affect-rich prize under certainty, but the direction of preference reversed under low probability. We suggest that the assumption of probability-outcome independence, adopted by both expected-utility and prospect theory, may hold across outcomes of different monetary values, but not different affective values.

  13. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper provides a discussion of optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. This flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e. the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet the requirements of minimum required PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing the flaw sizes in the point estimate demonstration flaw set.
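
    The binomial arithmetic behind the 29-flaw convention can be sketched as follows, assuming a demonstration that passes only if all flaws are detected; the flaw-size-range optimization proposed in the paper is not reproduced here.

```python
import numpy as np
from scipy.stats import beta

n = 29                         # flaws in the demonstration set

# Probability of passing the demonstration (all n flaws detected) as a
# function of the true probability of detection at that flaw size.
for pod in (0.90, 0.95, 0.98):
    print(f"true POD {pod:.2f} -> probability of passing {pod**n:.3f}")

# One-sided 95% lower confidence bound on POD after n/n detections
# (Clopper-Pearson); for n = 29 this is just above 0.90, which is why a
# 29-flaw, zero-miss demonstration supports a 90/95 claim.
lower_bound = beta.ppf(0.05, n, 1)        # equals 0.05**(1/n)
print("95% lower bound on POD after 29/29:", round(lower_bound, 4))
```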

  14. Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words

    PubMed Central

    Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.

    2012-01-01

    Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774

  15. Fractional Brownian motion with a reflecting wall

    NASA Astrophysics Data System (ADS)

    Wada, Alexander H. O.; Vojta, Thomas

    2018-02-01

    Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior ∼ t^α, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α > 1, the particles accumulate at the barrier leading to a divergence of the probability density. For subdiffusion α < 1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
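
    A minimal Monte Carlo sketch in the spirit of these simulations is given below: fractional Gaussian noise is generated exactly via a Cholesky factorization of its covariance, the walk is reflected at the wall step by step, and the occupation near the wall is compared for subdiffusive and superdiffusive Hurst exponents. Path lengths, sample sizes and the near-wall cutoff are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def fgn_cholesky(n, hurst):
    """Cholesky factor for fractional Gaussian noise with Hurst exponent H.

    Exact but O(n^3); adequate for the short paths of this sketch.
    """
    k = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    cov = 0.5 * ((k + 1) ** (2 * hurst) + np.abs(k - 1) ** (2 * hurst)
                 - 2 * k ** (2 * hurst))
    return np.linalg.cholesky(cov + 1e-12 * np.eye(n))

def reflected_fbm_paths(n_paths, n_steps, hurst):
    """Fractional Brownian motion paths with a reflecting wall at x = 0."""
    L = fgn_cholesky(n_steps, hurst)
    increments = rng.standard_normal((n_paths, n_steps)) @ L.T
    paths = np.empty_like(increments)
    x = np.zeros(n_paths)
    for t in range(n_steps):
        x = np.abs(x + increments[:, t])     # reflect any step that crosses the wall
        paths[:, t] = x
    return paths

# Compare the final-time occupation near the wall for sub- and superdiffusion
for H in (0.3, 0.7):
    end = reflected_fbm_paths(n_paths=2000, n_steps=256, hurst=H)[:, -1]
    near_wall = np.mean(end < 0.05 * end.max())
    print(f"H = {H}: fraction of walkers in the 5% of space nearest the wall: {near_wall:.3f}")
```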

  16. Feedback produces divergence from prospect theory in descriptive choice.

    PubMed

    Jessup, Ryan K; Bishara, Anthony J; Busemeyer, Jerome R

    2008-10-01

    A recent study demonstrated that individuals making experience-based choices underweight small probabilities, in contrast to the overweighting observed in a typical descriptive paradigm. We tested whether trial-by-trial feedback in a repeated descriptive paradigm would engender choices more correspondent with experiential or descriptive paradigms. The results of a repeated gambling task indicated that individuals receiving feedback underweighted small probabilities, relative to their no-feedback counterparts. These results implicate feedback as a critical component during the decision-making process, even in the presence of fully specified descriptive information. A model comparison at the individual-subject level suggested that feedback drove individuals' decision weights toward objective probability weighting.

  17. Numerical study of the influence of surface reaction probabilities on reactive species in an rf atmospheric pressure plasma containing humidity

    NASA Astrophysics Data System (ADS)

    Schröter, Sandra; Gibson, Andrew R.; Kushner, Mark J.; Gans, Timo; O'Connell, Deborah

    2018-01-01

    The quantification and control of reactive species (RS) in atmospheric pressure plasmas (APPs) is of great interest for their technological applications, in particular in biomedicine. Of key importance in simulating the densities of these species are fundamental data on their production and destruction. In particular, data concerning particle-surface reaction probabilities in APPs are scarce, with most of these probabilities measured in low-pressure systems. In this work, the role of surface reaction probabilities, γ, of reactive neutral species (H, O and OH) on neutral particle densities in a He-H2O radio-frequency micro APP jet (COST-μ APPJ) are investigated using a global model. It is found that the choice of γ, particularly for low-mass species having large diffusivities, such as H, can change computed species densities significantly. The importance of γ even at elevated pressures offers potential for tailoring the RS composition of atmospheric pressure microplasmas by choosing different wall materials or plasma geometries.

  18. Effects of heterogeneous traffic with speed limit zone on the car accidents

    NASA Astrophysics Data System (ADS)

    Marzoug, R.; Lakouari, N.; Bentaleb, K.; Ez-Zahraouy, H.; Benyoussef, A.

    2016-06-01

    Using the extended Nagel-Schreckenberg (NS) model, we numerically study the impact of traffic heterogeneity with a speed limit zone (SLZ) on the probability of occurrence of car accidents (Pac). An SLZ in heterogeneous traffic has an important effect, particularly in the mixed-velocity case. In the deterministic case, the SLZ leads to the appearance of car accidents even at low densities; in this region Pac increases with an increasing fraction of fast vehicles (Ff). In the nondeterministic case, the SLZ decreases the effect of the braking probability Pb at low densities. Furthermore, the impact of multi-SLZ on the probability Pac is also studied. In contrast with the homogeneous case [X. Li, H. Kuang, Y. Fan and G. Zhang, Int. J. Mod. Phys. C 25 (2014) 1450036], it is found that at low densities the probability Pac without SLZ (n = 0) is lower than Pac with multi-SLZ (n > 0). However, the existence of multi-SLZ in the road decreases the risk of collision in the congestion phase.

  19. Maximum likelihood density modification by pattern recognition of structural motifs

    DOEpatents

    Terwilliger, Thomas C.

    2004-04-13

    The electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log-likelihood of a set of structure factors {F_h} using a local log-likelihood function LL(x) = log[p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x, and p(ρ(x)|H) is the probability distribution for electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.

  20. Method for removing atomic-model bias in macromolecular crystallography

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM

    2006-08-01

    Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.

  1. An empirical probability model of detecting species at low densities.

    PubMed

    Delaney, David G; Leung, Brian

    2010-06-01

    False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
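
    A logistic detection curve of the kind described can be sketched as below; the coefficients are hypothetical stand-ins for values that would be fitted to the field data.

```python
import numpy as np

# Minimal sketch (hypothetical coefficients): a logistic detection curve relating
# sampling effort and target density to the probability of detection, used to
# report the probability of a false negative (1 - P(detect)).

def p_detect(effort_min, density_per_m2, b0=-3.0, b_effort=0.08, b_density=2.5):
    eta = b0 + b_effort * effort_min + b_density * density_per_m2
    return 1.0 / (1.0 + np.exp(-eta))

for effort in (10, 30, 60):
    for density in (0.05, 0.5):
        p = p_detect(effort, density)
        print(f"effort={effort:>3} min, density={density:.2f}/m^2 -> "
              f"P(detect)={p:.2f}, P(false negative)={1 - p:.2f}")
```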

  2. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, J.; Gardner, B.; Lucherini, M.

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

  3. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, Juan; Gardner, Beth; Lucherini, Mauro

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.
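
    In spatial capture-recapture models of this type, encounter probability is commonly modeled as a half-normal function of the distance between a trap and an individual's activity centre. The sketch below uses a baseline detection of 0.07, as reported above, but an assumed spatial scale parameter and invented trap coordinates.

```python
import numpy as np

# Sketch of the standard SCR half-normal encounter model: the probability that an
# individual with activity centre s is detected at a trap declines with distance,
# scaled by the baseline detection p0 and the spatial scale sigma (sigma assumed).

def scr_detection_prob(s, traps, p0=0.07, sigma=1.5):
    d = np.linalg.norm(traps - s, axis=1)            # distances (km) to each trap
    return p0 * np.exp(-d**2 / (2.0 * sigma**2))

traps = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])  # camera stations
s = np.array([0.8, 1.1])                                            # activity centre
print(scr_detection_prob(s, traps))
```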

  4. Gray wolf density and its association with weights and hematology of pups from 1970 to 1988

    USGS Publications Warehouse

    DelGiudice, Glenn D.; Mech, L. David; Seal, U.S.

    1991-01-01

    We examined weights and hematologic profiles of gray wolf (Canis lupus) pups and the associated wolf density in the east-central Superior National Forest of northeastern Minnesota (USA) during 1970 to 1988. We collected weight and hematologic data from 117 pups (57 females, 60 males) during 1 September to 22 November each year. The wolf density (wolves/800 km2) trend was divided into three phases: high (72 ± 7) during 1970–1975; medium (44 ± 2) during 1976–1983; and low (27 ± 2) during 1984–1988. Wolf numbers declined (P = 0.0001) by 39% from the 1970–1975 phase to the 1976–1983 phase and by 63% from the 1970–1975 phase to the 1984–1988 phase. Weight was similar between male and female pups and did not vary as wolf density changed. Mean hemoglobin (P = 0.04), red (P = 0.0001) and white blood cells (P = 0.002), mean corpuscular volume, mean corpuscular hemoglobin concentration and mean corpuscular hemoglobin (P = 0.0001) did differ among the multi-annual phases of changing wolf density. Weight and hematologic data also were compared to values from captive wolf pups. The high, but declining wolf density was associated with macrocytic, normochromic anemia in wolf pups, whereas the lowest density coincided with a hypochromic anemia. Although hematologic values show promise for assessing wolf pup condition and wolf population status, they must be used cautiously until data are available from other populations.

  5. Anorexia Nervosa and Bone

    PubMed Central

    Misra, Madhusmita; Klibanski, Anne

    2014-01-01

    Anorexia nervosa (AN) is a condition of severe low weight that is associated with low bone mass, impaired bone structure and reduced bone strength, all of which contribute to increased fracture risk. Adolescents with AN have decreased rates of bone accrual compared with normal-weight controls, raising additional concerns of suboptimal peak bone mass and future bone health in this age group. Changes in lean mass and compartmental fat depots, along with hormonal alterations secondary to nutritional factors, contribute to impaired bone metabolism in AN. The best strategy to improve bone density is to regain weight and menstrual function. Oral estrogen-progesterone combinations are not effective in increasing bone density in adults or adolescents with AN, and transdermal testosterone replacement is not effective in increasing bone density in adult women with AN. However, physiologic estrogen replacement as transdermal estradiol with cyclic progesterone does increase bone accrual rates in adolescents with AN to approximate those of normal-weight controls, leading to maintenance of bone density Z-scores. A recent study has shown that risedronate increases bone density at the spine and hip in adult women with AN. However, bisphosphonates should be used with great caution in women of reproductive age given their long half-life and potential for teratogenicity, and should be considered only in patients with low bone density and clinically significant fractures when non-pharmacological therapies for weight gain are ineffective. Further studies are necessary to determine the best therapeutic strategies for low bone density in AN. PMID:24898127

  6. Approved Methods and Algorithms for DoD Risk-Based Explosives Siting

    DTIC Science & Technology

    2007-02-02

    ... glass. Pgha: probability of a person being in the glass hazard area; Phit: probability of hit; Phit(f): probability of hit for fatality; Phit(maj): probability of hit for major injury; Phit(mini): probability of hit for minor injury; Pi: debris probability densities at the ES; PMaj(pair): individual ... combined high-angle and combined low-angle tables. A unique probability of hit is calculated for the three consequences of fatality, Phit(f), major injury ...

  7. Integrating Entropy-Based Naïve Bayes and GIS for Spatial Evaluation of Flood Hazard.

    PubMed

    Liu, Rui; Chen, Yun; Wu, Jianping; Gao, Lei; Barrett, Damian; Xu, Tingbao; Li, Xiaojuan; Li, Linyi; Huang, Chang; Yu, Jia

    2017-04-01

    Regional flood risk caused by intensive rainfall under extreme climate conditions has increasingly attracted global attention. Mapping and evaluation of flood hazard are vital parts of flood risk assessment. This study develops an integrated framework for estimating the spatial likelihood of flood hazard by coupling weighted naïve Bayes (WNB), a geographic information system, and remote sensing. The north part of the Fitzroy River Basin in Queensland, Australia, was selected as a case study site. The environmental indices, including extreme rainfall, evapotranspiration, net-water index, soil water retention, elevation, slope, drainage proximity, and density, were generated from spatial data representing climate, soil, vegetation, hydrology, and topography. These indices were weighted using the statistics-based entropy method. The weighted indices were input into the WNB-based model to delineate a regional flood risk map that indicates the likelihood of flood occurrence. The resultant map was validated by the maximum inundation extent extracted from moderate resolution imaging spectroradiometer (MODIS) imagery. The evaluation results, including mapping and evaluation of the distribution of flood hazard, are helpful in guiding flood inundation disaster responses for the region. The novel approach presented consists of weighted grid data, image-based sampling and validation, cell-by-cell probability inference and spatial mapping. It is superior to an existing spatial naïve Bayes (NB) method for regional flood hazard assessment. It can also be extended to other likelihood-related environmental hazard studies. © 2016 Society for Risk Analysis.
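
    The entropy weighting step can be sketched as follows on synthetic index values; indices whose values are more unevenly distributed across cells receive larger weights before entering the weighted naïve Bayes model.

```python
import numpy as np

# Sketch of the statistics-based entropy weighting of environmental indices
# (synthetic data; a real application would use the gridded index layers).

def entropy_weights(X):
    """X: (n_cells, n_indices) array of non-negative index values."""
    P = X / X.sum(axis=0, keepdims=True)            # column-wise proportions
    k = 1.0 / np.log(X.shape[0])
    e = -k * np.sum(np.where(P > 0, P * np.log(P), 0.0), axis=0)  # entropy per index
    d = 1.0 - e                                     # degree of diversification
    return d / d.sum()                              # normalized weights

rng = np.random.default_rng(1)
X = rng.random((500, 8))                            # e.g. 8 environmental indices
print(entropy_weights(X).round(3))
```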

  8. Fully automatic detection of deep white matter T1 hypointense lesions in multiple sclerosis

    NASA Astrophysics Data System (ADS)

    Spies, Lothar; Tewes, Anja; Suppa, Per; Opfer, Roland; Buchert, Ralph; Winkler, Gerhard; Raji, Alaleh

    2013-12-01

    A novel method is presented for fully automatic detection of candidate white matter (WM) T1 hypointense lesions in three-dimensional high-resolution T1-weighted magnetic resonance (MR) images. By definition, T1 hypointense lesions have intensity similar to that of gray matter (GM) and thus appear darker than surrounding normal WM in T1-weighted images. The novel method uses a standard classification algorithm to partition T1-weighted images into GM, WM and cerebrospinal fluid (CSF). As a consequence, T1 hypointense lesions are assigned an increased GM probability by the standard classification algorithm. The GM component image of a patient is then tested voxel-by-voxel against GM component images of a normative database of healthy individuals. Clusters (≥0.1 ml) of significantly increased GM density within a predefined mask of deep WM are defined as lesions. The performance of the algorithm was assessed on the voxel level by a simulation study. A maximum dice similarity coefficient of 60% was found for a typical T1 lesion pattern with contrasts ranging from WM to cortical GM, indicating substantial agreement between ground truth and automatic detection. Retrospective application to 10 patients with multiple sclerosis demonstrated that 93 out of 96 T1 hypointense lesions were detected. On average 3.6 false positive T1 hypointense lesions per patient were found. The novel method shows promise for supporting the detection of hypointense lesions in T1-weighted images, which warrants further evaluation in larger patient samples.

  9. Electrofishing capture probability of smallmouth bass in streams

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.

    2007-01-01

    Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth bass abundance accurately. © Copyright by the American Fisheries Society 2007.

  10. A tool for the estimation of the distribution of landslide area in R

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.

    2012-04-01

    We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result because of convergence problems. The two tested models (Double Pareto and Inverse Gamma) produced very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
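
    A minimal sketch of the non-parametric (KDE) and parametric (Inverse Gamma MLE) estimators on synthetic landslide areas is given below; a real application would read areas from an inventory rather than simulate them.

```python
import numpy as np
from scipy import stats

# Illustrative sketch of two of the estimators mentioned above, applied to
# synthetic landslide areas (m^2) drawn from an Inverse Gamma distribution.
rng = np.random.default_rng(42)
areas = stats.invgamma(a=1.4, scale=800.0).rvs(size=1000, random_state=rng)

# Kernel Density Estimation in log-space (areas span orders of magnitude)
kde = stats.gaussian_kde(np.log10(areas))

# Maximum Likelihood Estimation of an Inverse Gamma model
a_hat, loc_hat, scale_hat = stats.invgamma.fit(areas, floc=0.0)

x = np.logspace(1, 5, 200)
pdf_kde = kde(np.log10(x)) / (x * np.log(10))   # change of variables back to area
pdf_mle = stats.invgamma.pdf(x, a_hat, loc=loc_hat, scale=scale_hat)
print("fitted Inverse Gamma shape parameter:", round(a_hat, 2))
```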

  11. A multi-source probabilistic hazard assessment of tephra dispersal in the Neapolitan area

    NASA Astrophysics Data System (ADS)

    Sandri, Laura; Costa, Antonio; Selva, Jacopo; Folch, Arnau; Macedonio, Giovanni; Tonini, Roberto

    2015-04-01

    In this study we present the results obtained from a long-term Probabilistic Hazard Assessment (PHA) of tephra dispersal in the Neapolitan area. Usual PHA for tephra dispersal needs the definition of eruptive scenarios (usually by grouping eruption sizes and possible vent positions in a limited number of classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA then results from combining simulations considering different volcanological and meteorological conditions through weights associated with their specific probability of occurrence. However, volcanological parameters (i.e., erupted mass, eruption column height, eruption duration, bulk granulometry, fraction of aggregates) typically encompass a wide range of values. Because of such natural variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. In the present study, we use a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with specific Probability Density Functions, and meteorological and volcanological input values are chosen by using a stratified sampling method. This procedure allows for quantifying hazard without relying on the definition of scenarios, thus avoiding potential biases introduced by selecting single representative scenarios. Embedding this procedure into the Bayesian Event Tree scheme enables quantification of the tephra fall PHA and of its epistemic uncertainties. We have applied this scheme to analyze long-term tephra fall PHA from Vesuvius and Campi Flegrei, in a multi-source paradigm. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used for exploring different meteorological conditions. The results obtained show that the PHA accounting for the whole natural variability is consistent with previous probability maps elaborated for Vesuvius and Campi Flegrei on the basis of single representative scenarios, but shows significant differences. In particular, the area characterized by a 300 kg/m2-load exceedance probability larger than 5%, accounting for the whole range of variability (that is, from small violent strombolian to plinian eruptions), is similar to that displayed in the maps based on the medium magnitude reference eruption, but it is of a smaller extent. This is due to the relatively higher weight of the small magnitude eruptions considered in this study, but neglected in the reference scenario maps. On the other hand, in our new maps the area characterized by a 300 kg/m2-load exceedance probability larger than 1% is much larger than that of the medium magnitude reference eruption, due to the contribution of plinian eruptions at lower probabilities, again neglected in the reference scenario maps.

  12. Academic achievement of twins and singletons in early adulthood: Taiwanese cohort study.

    PubMed

    Tsou, Meng-Ting; Tsou, Meng-Wen; Wu, Ming-Ping; Liu, Jin-Tan

    2008-07-21

    To examine the long term effects of low birth weight on academic achievements in twins and singletons and to determine whether the academic achievement of twins in early adulthood is inferior to that of singletons. Cohort study. Taiwanese nationwide register of academic outcome. A cohort of 218 972 singletons and 1687 twins born in Taiwan, 1983-5. College attendance and test scores in the college joint entrance examinations. After adjustment for birth weight, gestational age, birth order, and sex and the sociodemographic characteristics of the parents, twins were found to have significantly lower mean test scores than singletons in Chinese, mathematics, and natural science, as well as a 2.2% lower probability of attending college. Low birthweight twins had an 8.5% lower probability of college attendance than normal weight twins, while low birthweight singletons had only a 3.2% lower probability. The negative effects of low birth weight on the test scores in English and mathematics were substantially greater for twins than for singletons. The twin pair analysis showed that the association between birth weight and academic achievement scores, which existed for opposite sex twin pairs, was not discernible for same sex twin pairs, indicating that birth weight might partly reflect other underlying genetic variations. These data support the proposition that twins perform less well academically than singletons. Low birth weight has a negative association with subsequent academic achievement in early adulthood, with the effect being stronger for twins than for singletons. The association between birth weight and academic performance might be partly attributable to genetic factors.

  13. QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility

    NASA Astrophysics Data System (ADS)

    Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.

    2013-11-01

    One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps (i.e., the spatial probability of a future vent opening given the past eruptive activity of a volcano). This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source geographic information system Quantum GIS, which is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the selection of an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input data sets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is here shown through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
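
    The core susceptibility computation can be sketched as a weighted sum of Gaussian-kernel PDFs, one per structural dataset; the vent coordinates, bandwidths and weights below are invented for illustration and are not QVAST defaults.

```python
import numpy as np

# Sketch of a Gaussian-kernel susceptibility map built from two (synthetic)
# structural datasets and combined by a weighted summation.

def gaussian_kernel_pdf(grid_xy, events_xy, bandwidth):
    d2 = ((grid_xy[:, None, :] - events_xy[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2.0 * bandwidth**2)) / (2.0 * np.pi * bandwidth**2)
    pdf = k.mean(axis=1)
    return pdf / pdf.sum()                       # normalize over the grid cells

gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])

vents = np.array([[2.0, 3.0], [2.5, 3.5], [7.0, 6.0]])       # dataset 1 (e.g. vents)
fissures = np.array([[2.2, 3.1], [6.5, 6.2], [6.8, 5.8]])    # dataset 2 (e.g. fissures)

susceptibility = 0.6 * gaussian_kernel_pdf(grid, vents, 1.0) \
               + 0.4 * gaussian_kernel_pdf(grid, fissures, 1.5)
print("total probability over grid:", susceptibility.sum())
```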

  14. Economic injury level of the psyllid, Agonoscena pistaciae, on Pistachio, Pistacia vera cv. Ohadi.

    PubMed

    Reza Hassani, Mohammad; Nouri-Ganbalani, Gadir; Izadi, Hamzeh; Shojai, Mahmoud; Basirat, Mehdi

    2009-01-01

    The pistachio psylla, Agonoscena pistaciae Burckhardt and Lauterer (Hemiptera: Psyllidae) is a major pest of pistachio trees, Pistacia vera L. (Sapindalis: Anacardiaceae) throughout pistachio-producing regions in Iran. Different density levels of A. pistaciae nymphs were maintained on pistachio trees by different insecticide dosages to evaluate the relationship between nymph density and yield loss (weight of 1000 nuts). Psylla nymph densities were monitored weekly by counting nymphs on pistachio terminal leaflets. There was a significant reduction in weight of 1000 nuts as seasonal averages of nymphs increased. Regression analysis was used to determine the relationship between nymph density and weight of 1000 nuts. The economic injury levels varied as a function of market values, management costs, insecticide efficiency and yield loss rate and ranged from 7.7 to 30.7 nymphal days per terminal leaflet, based on weight of 1000 nuts.

  15. Economic Injury Level of the Psyllid, Agonoscena pistaciae, on Pistachio, Pistacia vera cv. Ohadi

    PubMed Central

    Reza Hassani, Mohammad; Nouri-Ganbalani, Gadir; Izadi, Hamzeh; Basirat, Mehdi

    2009-01-01

    The pistachio psylla, Agonoscena pistaciae Burckhardt and Lauterer (Hemiptera: Psyllidae) is a major pest of pistachio trees, Pistacia vera L. (Sapindalis: Anacardiaceae) throughout pistachio-producing regions in Iran. Different density levels of A. pistaciae nymphs were maintained on pistachio trees by different insecticide dosages to evaluate the relationship between nymph density and yield loss (weight of 1000 nuts). Psylla nymph densities were monitored weekly by counting nymphs on pistachio terminal leaflets. There was a significant reduction in weight of 1000 nuts as seasonal averages of nymphs increased. Regression analysis was used to determine the relationship between nymph density and weight of 1000 nuts. The economic injury levels varied as a function of market values, management costs, insecticide efficiency and yield loss rate and ranged from 7.7 to 30.7 nymphal days per terminal leaflet, based on weight of 1000 nuts. PMID:19619034

  16. Integrating resource selection information with spatial capture--recapture

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.

    2013-01-01

    4. Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.

  17. Effect of Phonotactic Probability and Neighborhood Density on Word-Learning Configuration by Preschoolers with Typical Development and Specific Language Impairment

    ERIC Educational Resources Information Center

    Gray, Shelley; Pittman, Andrea; Weinhold, Juliet

    2014-01-01

    Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…

  18. The Effect of Phonotactic Probability and Neighbourhood Density on Pseudoword Learning in 6- and 7-Year-Old Children

    ERIC Educational Resources Information Center

    van der Kleij, Sanne W.; Rispens, Judith E.; Scheper, Annette R.

    2016-01-01

    The aim of this study was to examine the influence of phonotactic probability (PP) and neighbourhood density (ND) on pseudoword learning in 17 Dutch-speaking typically developing children (mean age 7;2). They were familiarized with 16 one-syllable pseudowords varying in PP (high vs low) and ND (high vs low) via a storytelling procedure. The…

  19. Properties of the probability density function of the non-central chi-squared distribution

    NASA Astrophysics Data System (ADS)

    András, Szilárd; Baricz, Árpád

    2008-10-01

    In this paper we consider the probability density function (pdf) of a non-central χ2 distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum, and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
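
    Numerically, the pdf can be evaluated as a Poisson-weighted mixture of central chi-squared pdfs (truncated here to finitely many terms) and checked against a reference implementation; this series is the standard mixture representation, not the finite-sum formula derived in the paper.

```python
import numpy as np
from scipy import stats

# Non-central chi-squared pdf as a Poisson(lambda/2)-weighted mixture of central
# chi-squared pdfs, truncated to a finite number of terms, vs scipy's reference.

def ncx2_pdf_series(x, k, lam, terms=200):
    i = np.arange(terms)
    weights = stats.poisson.pmf(i, lam / 2.0)
    return np.sum(weights[:, None] * stats.chi2.pdf(x[None, :], k + 2 * i[:, None]), axis=0)

x = np.linspace(0.1, 30, 5)
k, lam = 4, 6.0
print(ncx2_pdf_series(x, k, lam))
print(stats.ncx2.pdf(x, k, lam))      # should agree to high precision
```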

  20. The effect of low-density broiler breeder diets on performance and immune status of their offspring.

    PubMed

    Enting, H; Boersma, W J A; Cornelissen, J B W J; van Winden, S C L; Verstegen, M W A; van der Aar, P J

    2007-02-01

    Effects of low-density broiler breeder diets on offspring performance and mortality were studied using 2,100 female and 210 male Cobb 500 breeders. Breeder treatments involved 4 experimental groups and a control group with normal density diets (ND, 2,600 kcal of AME/kg during rearing and 2,800 kcal of AME/kg during laying). In treatment 2, nutrient densities were decreased by 12% (LD12) and 11% (LD11) during the rearing and laying periods, respectively, whereas in treatment 3, nutrient densities were decreased by 23% (LD23) and 21% (LD21) during the rearing and laying periods, respectively. The nutrient density in these treatments was decreased through inclusion of palm kernel meal, wheat bran, wheat gluten feed, and sunflower seed meal in the diets. Treatment 4 included diets with the same nutrient densities as in treatment 2 but included oats and sugar beet pulp (LD12(OP) and LD11(OP)). In treatment 5, the same low-density diet was given to the breeders as in treatment 2 during the rearing period, but it was followed by a normal density diet during the laying period (LD12-ND). Treatments were applied from 4 to 60 wk of age. On low-density diets, offspring showed an increased 1-d-old weight. As compared with offspring of breeders that received ND, the d 38 live weight of chickens from 29-wk-old breeders fed LD11 was improved. Mortality was reduced in offspring from 60-wk-old parent stock given low-density diets. The IgM titers in 35-d-old offspring from eggs with a lower-than-average weight were reduced when 29-wk-old broiler breeders were fed low-density diets. In offspring from eggs with a higher-than-average weight from 60-wk-old parent stock given LD11 or LD21 diets, IgM titers were higher compared with ND. It was concluded that low-density broiler breeder diets can improve offspring growth rates, reduce mortality, and reduce or increase immune responses, depending on breeder age and egg weight.

  1. Assessing hypotheses about nesting site occupancy dynamics

    USGS Publications Warehouse

    Bled, Florent; Royle, J. Andrew; Cam, Emmanuelle

    2011-01-01

    Hypotheses about habitat selection developed in the evolutionary ecology framework assume that individuals, under some conditions, select breeding habitat based on expected fitness in different habitats. The relationship between habitat quality and fitness may be reflected by breeding success of individuals, which may in turn be used to assess habitat quality. Habitat quality may also be assessed via local density: if high-quality sites are preferentially used, high density may reflect high-quality habitat. Here we assessed whether site occupancy dynamics vary with site surrogates for habitat quality. We modeled nest site use probability in a seabird subcolony (the Black-legged Kittiwake, Rissa tridactyla) over a 20-year period. We estimated site persistence (an occupied site remains occupied from time t to t + 1) and colonization through two subprocesses: first colonization (site creation at the timescale of the study) and recolonization (a site is colonized again after being deserted). Our model explicitly incorporated site-specific and neighboring breeding success and conspecific density in the neighborhood. Our results provided evidence that reproductively "successful" sites have a higher persistence probability than "unsuccessful" ones. Analyses of site fidelity in marked birds and of survival probability showed that high site persistence predominantly reflects site fidelity, not immediate colonization by new owners after emigration or death of previous owners. There is a negative quadratic relationship between local density and persistence probability. First colonization probability decreases with density, whereas recolonization probability is constant. This highlights the importance of distinguishing initial colonization and recolonization to understand site occupancy. All dynamics varied positively with neighboring breeding success. We found evidence of a positive interaction between site-specific and neighboring breeding success. We addressed local population dynamics using a site occupancy approach integrating hypotheses developed in behavioral ecology to account for individual decisions. This allows development of models of population and metapopulation dynamics that explicitly incorporate ecological and evolutionary processes.

  2. PYFLOW 2.0. A new open-source software for quantifying the impact and depositional properties of dilute pyroclastic density currents

    NASA Astrophysics Data System (ADS)

    Dioguardi, Fabio; Dellino, Pierfrancesco

    2017-04-01

    Dilute pyroclastic density currents (DPDC) are ground-hugging turbulent gas-particle flows that move down volcano slopes under the combined action of density contrast and gravity. DPDCs are dangerous for human lives and infrastructure both because they exert a dynamic pressure in their direction of motion and because they transport volcanic ash particles, which remain in the atmosphere during the waning stage and after the passage of a DPDC. Deposits formed by the passage of a DPDC show peculiar characteristics that can be linked to flow field variables with sedimentological models. Here we present PYFLOW_2.0, a significantly improved version of the code of Dioguardi and Dellino (2014) that was already extensively used for the hazard assessment of DPDCs at Campi Flegrei and Vesuvius (Italy). In the latest new version the code structure, the computation times and the data input method have been updated and improved. A set of shape-dependent drag laws has been implemented so as to better estimate the aerodynamic drag of particles transported and deposited by the flow. A depositional model for calculating the deposition time and rate of the ash and lapilli layer formed by the pyroclastic flow has also been included. This model links deposit properties (e.g. componentry, grain size) to flow characteristics (e.g. flow average density and shear velocity), the latter either calculated by the code itself or given in input by the user. The deposition rate is calculated by summing the contributions of each grain-size class of all components constituting the deposit (e.g. juvenile particles, crystals, etc.), which are in turn computed as a function of particle density, terminal velocity, concentration and deposition probability. Here we apply the concept of deposition probability, previously introduced for estimating the deposition rates of turbidity currents (Stow and Bowen, 1980), to DPDCs, although with a different approach, i.e. starting from what is observed in the deposit (e.g. the weight fraction ratios between the different grain-size classes). In this way, more realistic estimates of the deposition rate can be obtained, as the deposition probability of the different grain-size classes constituting the DPDC deposit could be different and not necessarily equal to unity. Calculations of the deposition rates of large-scale experiments, previously computed with different methods, have been performed as experimental validation and are presented. Results of model application to DPDCs and turbidity currents will also be presented. Dioguardi, F., and P. Dellino (2014), PYFLOW: A computer code for the calculation of the impact parameters of Dilute Pyroclastic Density Currents (DPDC) based on field data, Comput. Geosci., 66, 200-210, doi:10.1016/j.cageo.2014.01.013. Stow, D. A. V., and A. J. Bowen (1980), A physical model for the transport and sorting of fine-grained sediment by turbidity currents, Sedimentology, 27, 31-46.
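
    The grain-size bookkeeping behind the deposition rate can be sketched schematically as below; the concentrations, terminal velocities and deposition probabilities are placeholders, not PYFLOW output.

```python
# Schematic sketch (placeholder numbers): the deposition rate is the sum, over
# grain-size classes and components, of near-bed concentration x terminal
# (settling) velocity x the deposition probability inferred from the deposit.
classes = [
    # (label, concentration kg/m^3, terminal velocity m/s, deposition probability)
    ("lapilli, juvenile",   0.8, 4.0, 1.0),
    ("coarse ash, crystal", 0.5, 1.2, 0.7),
    ("fine ash, juvenile",  0.3, 0.3, 0.2),
]

rate = sum(c * w * p for _, c, w, p in classes)      # kg m^-2 s^-1
for label, c, w, p in classes:
    print(f"{label:<20} contributes {c * w * p:.2f} kg/m^2/s")
print("total deposition rate:", round(rate, 2), "kg/m^2/s")
```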

  3. Estimating the influence of population density and dispersal behavior on the ability to detect and monitor Agrilus planipennis (Coleoptera: Buprestidae) populations.

    PubMed

    Mercader, R J; Siegert, N W; McCullough, D G

    2012-02-01

    Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.

  4. Spatial vent opening probability map of El Hierro Island (Canary Islands, Spain)

    NASA Astrophysics Data System (ADS)

    Becerril, Laura; Cappello, Annalisa; Galindo, Inés; Neri, Marco; Del Negro, Ciro

    2013-04-01

    The assessment of the probable spatial distribution of new eruptions is useful to manage and reduce the volcanic risk. It can be achieved in different ways, but it becomes especially hard when dealing with volcanic areas that are less studied, poorly monitored and characterized by infrequent activity, as is the case for El Hierro. Even though it is the youngest of the Canary Islands, before the 2011 eruption in the "Las Calmas Sea", El Hierro had been the least studied volcanic island of the Canaries, with historical attention devoted more to La Palma, Tenerife and Lanzarote. We propose a probabilistic method to build the susceptibility map of El Hierro, i.e. the spatial distribution of vent opening for future eruptions, based on the mathematical analysis of the volcano-structural data collected mostly on the island and, secondly, on the submerged part of the volcano, up to a distance of ~10-20 km from the coast. The volcano-structural data were collected through new fieldwork measurements, bathymetric information, and analysis of geological maps, orthophotos and aerial photographs. They have been divided into different datasets and converted into separate and weighted probability density functions, which were then included in a non-homogeneous Poisson process to produce the volcanic susceptibility map. The probability of future eruptive events on El Hierro is mainly concentrated on the rift zones, extending also beyond the shoreline. The highest probabilities of hosting new eruptions are located on the distal parts of the South and West rifts, with the maximum probability reached in the south-western area of the West rift. High probabilities are also observed in the Northeast and South rifts, and the submarine parts of the rifts. This map represents the first effort to deal with the volcanic hazard at El Hierro and can be a support tool for decision makers in land planning, emergency plans and civil defence actions.

  5. On Schrödinger's bridge problem

    NASA Astrophysics Data System (ADS)

    Friedland, S.

    2017-11-01

    In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix using Brouwer's fixed point theorem. This result proves the Georgiou-Pavon conjecture for two positive definite density matrices, made in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
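
    A minimal fixed-point sketch of the matrix-scaling statement in the first part is given below; the alternating update enforcing the two constraints is an assumed Sinkhorn-style iteration used only to illustrate the result, not the proof technique of the paper.

```python
import numpy as np

# Sketch: find diagonal scalings d, e so that B_ij = d_i * A_ij * e_j is column
# stochastic and maps the probability vector p onto the probability vector q.
# The alternating iteration below is an assumed numerical scheme for positive A.
rng = np.random.default_rng(0)
n = 4
A = rng.random((n, n)) + 0.1                 # positive square matrix
p = np.array([0.1, 0.2, 0.3, 0.4])           # given positive probability vectors
q = np.array([0.4, 0.3, 0.2, 0.1])

d = np.ones(n)
for _ in range(500):
    e = 1.0 / (A.T @ d)                      # enforce unit column sums of B
    d = q / (A @ (e * p))                    # enforce B @ p = q

B = d[:, None] * A * e[None, :]
print("column sums:", B.sum(axis=0).round(6))
print("B @ p      :", (B @ p).round(6), " target q:", q)
```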

  6. Use of probabilistic weights to enhance linear regression myoelectric control

    NASA Astrophysics Data System (ADS)

    Smith, Lauren H.; Kuiken, Todd A.; Hargrove, Levi J.

    2015-12-01

    Objective. Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Approach. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts' law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Main results. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p < 0.05) by preventing extraneous movement at additional DOFs. Similar results were seen in experiments with two transradial amputees. Though goodness-of-fit evaluations suggested that the EMG feature distributions showed some deviations from the Gaussian, equal-covariance assumptions used in this experiment, the assumptions were sufficiently met to provide improved performance compared to linear regression control. Significance. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.
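
    The weighting idea can be sketched on a toy two-feature example with assumed Gaussian class models; the regression output for one DOF is scaled by the posterior probability that any movement of that DOF is intended.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Toy sketch (assumed class models, not the study's trained system): weight a
# linear-regression output by the posterior probability of intended movement.
classes = {                                    # Gaussian EMG-feature models per class
    "no_move": multivariate_normal([0.1, 0.1], np.diag([0.02, 0.02])),
    "flex":    multivariate_normal([0.8, 0.2], np.diag([0.05, 0.03])),
    "extend":  multivariate_normal([0.2, 0.8], np.diag([0.03, 0.05])),
}
priors = {k: 1.0 / len(classes) for k in classes}

def weighted_output(features, regression_out):
    post = {k: priors[k] * m.pdf(features) for k, m in classes.items()}
    z = sum(post.values())
    p_move = (post["flex"] + post["extend"]) / z     # probability of intended movement
    return p_move * regression_out

print(weighted_output(np.array([0.12, 0.09]), regression_out=0.6))  # suppressed
print(weighted_output(np.array([0.75, 0.22]), regression_out=0.6))  # preserved
```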

  7. Density probability distribution functions of diffuse gas in the Milky Way

    NASA Astrophysics Data System (ADS)

    Berkhuijsen, E. M.; Fletcher, A.

    2008-10-01

    In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| ≥ 5° are considered separately. The PDF at high |b| is twice as wide as that at low |b|. The width of the PDF of the DIG is about 30 per cent smaller than that of the warm HI at the same latitudes. The results reported here provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.

  8. Limitation of Inverse Probability-of-Censoring Weights in Estimating Survival in the Presence of Strong Selection Bias

    PubMed Central

    Howe, Chanelle J.; Cole, Stephen R.; Chmiel, Joan S.; Muñoz, Alvaro

    2011-01-01

    In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984–2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed. PMID:21289029
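
    A minimal sketch of inverse probability-of-censoring weighting on synthetic data is given below; the covariates and the censoring mechanism are assumptions, and in practice the weights are typically time-varying and may be stabilized.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch: weights are the inverse of the predicted probability of remaining
# uncensored given measured common predictors of censoring and outcome.
rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 2))                              # measured common predictors
p_uncens = 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * X[:, 0] - 0.8 * X[:, 1])))
uncensored = rng.random(n) < p_uncens                    # 1 = not artificially censored

model = LogisticRegression().fit(X, uncensored.astype(int))
p_hat = model.predict_proba(X)[:, 1]
weights = np.where(uncensored, 1.0 / p_hat, 0.0)         # censored rows drop out

print("mean weight among uncensored:", weights[uncensored].mean().round(2))
print("max weight (large values flag strong selection):", weights.max().round(1))
```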

  9. Ghrelin and leptin modulate the feeding behaviour of the hawksbill turtle Eretmochelys imbricata during nesting season

    PubMed Central

    Goldberg, Daphne Wrobel; Leitão, Santiago Alonso Tobar; Godfrey, Matthew H.; Lopez, Gustave Gilles; Santos, Armando José Barsante; Neves, Fabiana Alves; de Souza, Érica Patrícia Garcia; Moura, Anibal Sanchez; Bastos, Jayme da Cunha; Bastos, Vera Lúcia Freire da Cunha

    2013-01-01

    Female sea turtles have rarely been observed foraging during the nesting season. This suggests that prior to their migration to nesting beaches the females must store sufficient energy and nutrients at their foraging grounds and must be physiologically capable of undergoing months without feeding. Leptin (an appetite-suppressing protein) and ghrelin (a hunger-stimulating peptide) affect body weight by influencing energy intake in all vertebrates. We investigated the levels of these hormones and other physiological and nutritional parameters in nesting hawksbill sea turtles in Rio Grande do Norte State, Brazil, by collecting consecutive blood samples from 41 turtles during the 2010–2011 and 2011–2012 reproductive seasons. We found that levels of serum leptin decreased over the nesting season, which potentially relaxed suppression of food intake and stimulated females to begin foraging either during or after the post-nesting migration. Concurrently, we recorded an increasing trend in ghrelin, which may have stimulated food intake towards the end of the nesting season. Both findings are consistent with the prediction that post-nesting females will begin to forage, either during or immediately after their post-nesting migration. We observed no seasonal trend for other physiological parameters (values of packed cell volume and serum levels of alanine aminotransferase, aspartate aminotransferase, alkaline phosphatase, γ-glutamyl transferase, low-density lipoprotein, and high-density lipoprotein). The observed downward trends in general serum biochemistry levels were probably due to the physiological challenge of vitellogenesis and nesting in addition to limited energy resources and probable fasting. PMID:27293600

  10. Dispersion and Lifetime of the SO2 Cloud from the August 2008 Kasatochi Eruption

    NASA Technical Reports Server (NTRS)

    Krotkov, N. A.; Schoeberl, M. R.; Morris, G. A.; Carn, S.; Yang, K.

    2010-01-01

    Hemispherical dispersion of the SO2 cloud from the August 2008 Kasatochi eruption is analyzed using satellite data from the Ozone Monitoring Instrument (OMI) and the Goddard Trajectory Model (GTM). The operational OMI retrievals underestimate the total SO2 mass by 20-30% on 8-11 August, as compared with more accurate offline Extended Iterative Spectral Fit (EISF) retrievals, but the error decreases with time due to plume dispersion and a drop in peak SO2 column densities. The GTM runs were initialized with and compared to the operational OMI SO2 data during early plume dispersion to constrain SO2 plume heights and eruption times. The most probable SO2 heights during initial dispersion are estimated to be 10-12 km, in agreement with direct height retrievals using the EISF algorithm and IR measurements. Using these height constraints a forward GTM run was initialized on 11 August to compare with the month-long Kasatochi SO2 cloud dispersion patterns. Predicted volcanic cloud locations generally agree with OMI observations, although some discrepancies were observed. Operational OMI SO2 burdens were refined using GTM-predicted mass-weighted probability density height distributions. The total refined SO2 mass was integrated over the Northern Hemisphere to place empirical constraints on the SO2 chemical decay rate. The resulting lower limit of the Kasatochi SO2 e-folding time is approx. 8-9 days. Extrapolation of the exponential decay back in time yields an initial erupted SO2 mass of approx. 2.2 Tg on 8 August, twice as much as the measured mass on that day.

  11. Ghrelin and leptin modulate the feeding behaviour of the hawksbill turtle Eretmochelys imbricata during nesting season.

    PubMed

    Goldberg, Daphne Wrobel; Leitão, Santiago Alonso Tobar; Godfrey, Matthew H; Lopez, Gustave Gilles; Santos, Armando José Barsante; Neves, Fabiana Alves; de Souza, Érica Patrícia Garcia; Moura, Anibal Sanchez; Bastos, Jayme da Cunha; Bastos, Vera Lúcia Freire da Cunha

    2013-01-01

    Female sea turtles have rarely been observed foraging during the nesting season. This suggests that prior to their migration to nesting beaches the females must store sufficient energy and nutrients at their foraging grounds and must be physiologically capable of undergoing months without feeding. Leptin (an appetite-suppressing protein) and ghrelin (a hunger-stimulating peptide) affect body weight by influencing energy intake in all vertebrates. We investigated the levels of these hormones and other physiological and nutritional parameters in nesting hawksbill sea turtles in Rio Grande do Norte State, Brazil, by collecting consecutive blood samples from 41 turtles during the 2010-2011 and 2011-2012 reproductive seasons. We found that levels of serum leptin decreased over the nesting season, which potentially relaxed suppression of food intake and stimulated females to begin foraging either during or after the post-nesting migration. Concurrently, we recorded an increasing trend in ghrelin, which may have stimulated food intake towards the end of the nesting season. Both findings are consistent with the prediction that post-nesting females will begin to forage, either during or immediately after their post-nesting migration. We observed no seasonal trend for other physiological parameters (values of packed cell volume and serum levels of alanine aminotransferase, aspartate aminotransferase, alkaline phosphatase, γ-glutamyl transferase, low-density lipoprotein, and high-density lipoprotein). The observed downward trends in general serum biochemistry levels were probably due to the physiological challenge of vitellogenesis and nesting in addition to limited energy resources and probable fasting.

  12. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of the mechanical effects of Es on differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a stratum of silty clay in region A of the China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method rigorously and efficiently integrates the multiple precisions of different geotechnical investigations and sources of uncertainty. Individual CPT samplings were modeled as probability density curves using maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, by numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within the Bayesian reverse interpolation framework. The results were compared between Gaussian Sequential Stochastic Simulation and Bayesian methods. The differences were also discussed between single CPT samplings following a normal distribution and simulated probability density curves based on maximum entropy theory. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, whereas more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.

  13. Effects of low-carbohydrate vs low-fat diets on weight loss and cardiovascular risk factors: a meta-analysis of randomized controlled trials.

    PubMed

    Nordmann, Alain J; Nordmann, Abigail; Briel, Matthias; Keller, Ulrich; Yancy, William S; Brehm, Bonnie J; Bucher, Heiner C

    2006-02-13

    Low-carbohydrate diets have become increasingly popular for weight loss. However, evidence from individual trials about benefits and risks of these diets to achieve weight loss and modify cardiovascular risk factors is preliminary. We used the Cochrane Collaboration search strategy to identify trials comparing the effects of low-carbohydrate diets without restriction of energy intake vs low-fat diets in individuals with a body mass index (calculated as weight in kilograms divided by the square of height in meters) of at least 25. Included trials had to report changes in body weight in intention-to-treat analysis and to have a follow-up of at least 6 months. Two reviewers independently assessed trial eligibility and quality of randomized controlled trials. Five trials including a total of 447 individuals fulfilled our inclusion criteria. After 6 months, individuals assigned to low-carbohydrate diets had lost more weight than individuals randomized to low-fat diets (weighted mean difference, -3.3 kg; 95% confidence interval [CI], -5.3 to -1.4 kg). This difference was no longer obvious after 12 months (weighted mean difference, -1.0 kg; 95% CI, -3.5 to 1.5 kg). There were no differences in blood pressure. Triglyceride and high-density lipoprotein cholesterol values changed more favorably in individuals assigned to low-carbohydrate diets (after 6 months, for triglycerides, weighted mean difference, -22.1 mg/dL [-0.25 mmol/L]; 95% CI, -38.1 to -5.3 mg/dL [-0.43 to -0.06 mmol/L]; and for high-density lipoprotein cholesterol, weighted mean difference, 4.6 mg/dL [0.12 mmol/L]; 95% CI, 1.5-8.1 mg/dL [0.04-0.21 mmol/L]), but total cholesterol and low-density lipoprotein cholesterol values changed more favorably in individuals assigned to low-fat diets (weighted mean difference in low-density lipoprotein cholesterol after 6 months, 5.4 mg/dL [0.14 mmol/L]; 95% CI, 1.2-10.1 mg/dL [0.03-0.26 mmol/L]). Low-carbohydrate, non-energy-restricted diets appear to be at least as effective as low-fat, energy-restricted diets in inducing weight loss for up to 1 year. However, potential favorable changes in triglyceride and high-density lipoprotein cholesterol values should be weighed against potential unfavorable changes in low-density lipoprotein cholesterol values when low-carbohydrate diets to induce weight loss are considered.

  14. Decision making generalized by a cumulative probability weighting function

    NASA Astrophysics Data System (ADS)

    dos Santos, Lindomar Soares; Destefano, Natália; Martinez, Alexandre Souto

    2018-01-01

    Typical examples of intertemporal decision making involve situations in which individuals must choose between a smaller but more immediate reward and a larger one delivered later. Analogously, probabilistic decision making involves choices between options whose consequences differ in their probability of being received. In economics, expected utility theory (EUT) and discounted utility theory (DUT) are the traditionally accepted normative models for describing probabilistic and intertemporal decision making, respectively. A large number of experiments have confirmed that the linearity assumed by EUT does not explain some observed behaviors, such as nonlinear preference, risk seeking, and loss aversion. These observations led to the development of new theoretical models, called non-expected utility theories (NEUT), which include a nonlinear transformation of the probability scale. An essential feature of the so-called preference function of these theories is that probabilities are transformed into decision weights by means of a (cumulative) probability weighting function, w(p). In this article we obtain a generalized function for the probabilistic discount process. This function has as particular cases mathematical forms already well established in the literature, including discount models that account for effects of psychophysical perception. We also propose a new generalized functional form for w. The limiting cases of this function encompass several parametric forms already proposed in the literature. Far beyond a mere generalization, our function allows probabilistic decision-making theories to be interpreted on the assumption that individuals behave similarly in the face of probabilities and delays, and it is supported by phenomenological models.
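
    The paper's generalized form of w(p) is not reproduced in the abstract; as an illustration of what a cumulative probability weighting function looks like, the sketch below implements the widely used one-parameter Tversky-Kahneman form, which overweights small probabilities and underweights large ones. The parameter value is a commonly cited estimate, not a result of this paper.

```python
import numpy as np

def tversky_kahneman_weight(p, gamma=0.61):
    """One-parameter weighting function w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    p = np.asarray(p, dtype=float)
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

probs = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
# Small probabilities are inflated, large ones deflated, relative to the identity w(p) = p.
print(tversky_kahneman_weight(probs))
```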

  15. Decision analysis with cumulative prospect theory.

    PubMed

    Bayoumi, A M; Redelmeier, D A

    2000-01-01

    Individuals sometimes express preferences that do not follow expected utility theory. Cumulative prospect theory adjusts for some phenomena by using decision weights rather than probabilities when analyzing a decision tree. The authors examined how probability transformations from cumulative prospect theory might alter a decision analysis of a prophylactic therapy in AIDS, eliciting utilities from patients with HIV infection (n = 75) and calculating expected outcomes using an established Markov model. They next focused on transformations of three sets of probabilities: 1) the probabilities used in calculating standard-gamble utility scores; 2) the probabilities of being in discrete Markov states; 3) the probabilities of transitioning between Markov states. The same prophylaxis strategy yielded the highest quality-adjusted survival under all transformations. For the average patient, prophylaxis appeared relatively less advantageous when standard-gamble utilities were transformed. Prophylaxis appeared relatively more advantageous when state probabilities were transformed and relatively less advantageous when transition probabilities were transformed. Transforming standard-gamble and transition probabilities simultaneously decreased the gain from prophylaxis by almost half. Sensitivity analysis indicated that even near-linear probability weighting transformations could substantially alter quality-adjusted survival estimates. The magnitude of benefit estimated in a decision-analytic model can change significantly after using cumulative prospect theory. Incorporating cumulative prospect theory into decision analysis can provide a form of sensitivity analysis and may help describe when people deviate from expected utility theory.

  16. Understanding PSA and its derivatives in prediction of tumor volume: Addressing health disparities in prostate cancer risk stratification.

    PubMed

    Chinea, Felix M; Lyapichev, Kirill; Epstein, Jonathan I; Kwon, Deukwoo; Smith, Paul Taylor; Pollack, Alan; Cote, Richard J; Kryvenko, Oleksandr N

    2017-03-28

    We aimed to address health disparities in risk stratification of U.S. Hispanic/Latino men by characterizing the influences of prostate weight, body mass index, and race/ethnicity on the correlation of PSA derivatives with Gleason score 6 (Grade Group 1) tumor volume in a diverse cohort. Using published PSA density and PSA mass density cutoff values, men with higher body mass indices and prostate weights were less likely to have a tumor volume <0.5 cm3. Variability across race/ethnicity was found in the univariable analysis for all PSA derivatives when predicting for tumor volume. In receiver operator characteristic analysis, area under the curve values for all PSA derivatives varied across race/ethnicity with lower optimal cutoff values for Hispanic/Latino (PSA=2.79, PSA density=0.06, PSA mass=0.37, PSA mass density=0.011) and Non-Hispanic Black (PSA=3.75, PSA density=0.07, PSA mass=0.46, PSA mass density=0.008) compared to Non-Hispanic White men (PSA=4.20, PSA density=0.11, PSA mass=0.53, PSA mass density=0.014). We retrospectively analyzed 589 patients with low-risk prostate cancer at radical prostatectomy. Pre-operative PSA, patient height, body weight, and prostate weight were used to calculate all PSA derivatives. Receiver operating characteristic curves were constructed for each PSA derivative per racial/ethnic group to establish optimal cutoff values predicting for tumor volume ≥0.5 cm3. Increasing prostate weight and body mass index negatively influence PSA derivatives for predicting tumor volume. PSA derivatives' ability to predict tumor volume varies significantly across race/ethnicity. Hispanic/Latino and Non-Hispanic Black men have lower optimal cutoff values for all PSA derivatives, which may impact risk assessment for prostate cancer.

  17. Fractional Brownian motion with a reflecting wall.

    PubMed

    Wada, Alexander H O; Vojta, Thomas

    2018-02-01

    Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior 〈x^{2}〉∼t^{α}, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α>1, the particles accumulate at the barrier leading to a divergence of the probability density. For subdiffusion α<1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
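
    A minimal Monte Carlo sketch of the setup: fractional Gaussian noise increments with Hurst exponent H (so that α = 2H) are generated from a Cholesky factor of their exact covariance, and trajectories are reflected off a wall at the origin. The parameters and the Cholesky construction (rather than a faster spectral generator) are illustrative simplifications, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def reflected_fbm_final_positions(n_steps=512, n_walkers=2000, hurst=0.75):
    """Simulate fractional Brownian motion with a reflecting wall at x = 0."""
    k = np.arange(n_steps)
    lag = k[:, None] - k[None, :]
    # Exact covariance of fractional Gaussian noise increments.
    cov = 0.5 * (np.abs(lag + 1) ** (2 * hurst)
                 - 2.0 * np.abs(lag) ** (2 * hurst)
                 + np.abs(lag - 1) ** (2 * hurst))
    L = np.linalg.cholesky(cov)
    increments = rng.standard_normal((n_walkers, n_steps)) @ L.T

    x = np.zeros(n_walkers)
    for step in range(n_steps):
        x = np.abs(x + increments[:, step])   # reflect all trajectories off the wall
    return x

# For H > 0.5 (superdiffusion, alpha > 1) the histogram piles up near the wall;
# for H < 0.5 it is depleted there, as described in the abstract.
positions = reflected_fbm_final_positions()
density, edges = np.histogram(positions, bins=40, density=True)
```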

  18. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.

  19. Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions

    DOE PAGES

    Burke, Timothy P.; Kiedrowski, Brian C.

    2017-12-11

    Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface—which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivitiesmore » of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history–based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.« less
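
    The interface-source estimation itself is specific to the transport code, but the kernel density idea can be illustrated generically: a Gaussian kernel density estimate built from sampled collision positions near a boundary gives a smooth estimate of the collision density at the boundary itself. The sample data below are hypothetical, not from the benchmark problems.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Hypothetical collision sites, expressed as signed distance (cm) from an interface at x = 0.
collision_x = rng.normal(loc=0.3, scale=0.8, size=5000)

kde = gaussian_kde(collision_x)     # Gaussian kernels with automatic (Scott's rule) bandwidth
density_at_interface = kde(0.0)[0]  # smoothed estimate of the collision density at x = 0
print(density_at_interface)
```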

  20. Assessment of Data Fusion Algorithms for Earth Observation Change Detection Processes.

    PubMed

    Molina, Iñigo; Martinez, Estibaliz; Morillo, Carmen; Velasco, Jesus; Jara, Alvaro

    2016-09-30

    In this work a parametric multi-sensor Bayesian data fusion approach and a Support Vector Machine (SVM) are used for a Change Detection problem. For this purpose two sets of SPOT5-PAN images have been used, which are in turn used for Change Detection Indices (CDIs) calculation. For minimizing radiometric differences, a methodology based on zonal "invariant features" is suggested. The choice of one or the other CDI for a change detection process is a subjective task as each CDI is probably more or less sensitive to certain types of changes. Likewise, this idea might be employed to create and improve a "change map", which can be accomplished by means of the CDI's informational content. For this purpose, information metrics such as the Shannon Entropy and "Specific Information" have been used to weight the changes and no-changes categories contained in a certain CDI and thus introduced in the Bayesian information fusion algorithm. Furthermore, the parameters of the probability density functions (pdf's) that best fit the involved categories have also been estimated. Conversely, these considerations are not necessary for mapping procedures based on the discriminant functions of a SVM. This work has confirmed the capabilities of probabilistic information fusion procedure under these circumstances.
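
    One simple entropy-based weighting heuristic in the spirit of the approach (not the paper's exact Shannon-entropy/"specific information" metric) is sketched below: CDIs whose change/no-change posterior is less ambiguous receive a larger fusion weight. The posterior values and index names are invented for illustration.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete distribution, ignoring zero terms."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical per-CDI posterior probabilities for the {change, no-change} classes.
cdi_posteriors = {
    "ratio_index":      [0.85, 0.15],
    "difference_index": [0.60, 0.40],
}

# Heuristic: lower-entropy (less ambiguous) indices get larger weight in the fusion.
raw = {name: 1.0 - shannon_entropy(p) for name, p in cdi_posteriors.items()}
total = sum(raw.values())
weights = {name: w / total for name, w in raw.items()}
print(weights)
```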

  1. Trends in Canadian Birth Weights, 1971 to 1989

    PubMed Central

    Wadhera, S.; Millar, W. J.; Nimrod, Carl

    1992-01-01

    This paper outlines levels and trends in birth weights of singleton births in Canada between 1971 and 1989. It relates these birth weights to maternal age, marital status, and parity and to gestational age. From 1971 to 1989, the median birth weight of all singletons increased by 104 g, or 3.1%. The proportion of low birth weight babies declined, probably contributing to improved infant mortality rates. PMID:21221364

  2. Objectively measured walkability and active transport and weight-related outcomes in adults: a systematic review.

    PubMed

    Grasser, Gerlinde; Van Dyck, Delfien; Titze, Sylvia; Stronegger, Willibald

    2013-08-01

    The aim of this study was to investigate which GIS-based measures of walkability (density, land-use mix, connectivity and walkability indexes) in urban and suburban neighbourhoods are used in research and which of them are consistently associated with walking and cycling for transport, overall active transportation and weight-related measures in adults. A systematic review of English publications using PubMed, Science Direct, Active Living Research Literature Database, the Transportation Research Information Service and reference lists was conducted. The search terms utilised were synonyms for GIS in combination with synonyms for the outcomes. Thirty-four publications based on 19 different studies were eligible. Walkability measures such as gross population density, intersection density and walkability indexes most consistently correlated with measures of physical activity for transport. Results on weight-related measures were inconsistent. More research is needed to determine whether walkability is an appropriate measure for predicting weight-related measures and overall active transportation. As the most consistent correlates, gross population density, intersection density and the walkability indexes have the potential to be used in planning and monitoring.

  3. Effect of stocking density on performances of juvenile turbot ( Scophthalmus maximus) in recirculating aquaculture systems

    NASA Astrophysics Data System (ADS)

    Li, Xian; Liu, Ying; Blancheton, Jean-Paul

    2013-05-01

    Limited information has been available about the influence of loading density on the performances of Scophthalmus maximus, especially in recirculating aquaculture systems (RAS). In this study, turbot (13.84±2.74 g; average weight±SD) were reared at four different initial densities (low 0.66, medium 1.26, sub-high 2.56, high 4.00 kg/m2) for 10 weeks in RAS at 23±1°C. Final densities were 4.67, 7.25, 14.16, and 17.47 kg/m2, respectively, which translate to 82, 108, 214, and 282 percent coverage of the tank bottom. Density had both negative and independent impacts on growth. The final mean weight, specific growth rate (SGR), and voluntary feed intake significantly decreased and the coefficient of variation (CV) of final body weight increased with increase in stocking density. The medium and sub-high density groups did not differ significantly in SGR, mean weight, CV, food conversion rate (FCR), feed intake, blood parameters, and digestive enzymes. The protease activities of the digestive tract at pH 7, 8.5, 9, and 10 were significantly higher for the highest density group, but tended to be lower (not significantly) at pH 4 and 8.5 for the lowest density group. The intensity of protease activity was inversely related to feed intake at the different densities. Catalase activity was higher (but not significantly) at the highest density, perhaps because high density started to induce an oxidative effect in turbot. In conclusion, turbot can be cultured in RAS at a density of less than 17.47 kg/m2. With good water quality and no feed limitation, initial density between 1.26 and 2.56 kg/m2 (final: 7.25 and 14.16 kg/m2) would not negatively affect the turbot cultured in RAS. For culture at higher density, multi-level feeding devices are suggested to ease feeding competition.

  4. Effect of 1 year of an intentional weight loss intervention on bone mineral density in type 2 diabetes: Results from the Look AHEAD randomized trial

    USDA-ARS?s Scientific Manuscript database

    Intentional weight loss is an important component of treatment for overweight patients with type 2 diabetes, but the effects on bone density are not known. We used data from the Look AHEAD trial to determine the impact of an intensive lifestyle weight loss intervention (ILI) compared with diabetes s...

  5. 46 CFR 162.050-9 - Test report.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) Relative density at 15 °C. (ii) Viscosity in centistokes at 37.8 °C. (iii) Flashpoint. (iv) Weight of ash content. (v) Weight of water content. (vi) Relative density at 15 °C of the water used during testing and...

  6. 46 CFR 162.050-9 - Test report.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...) Relative density at 15 °C. (ii) Viscosity in centistokes at 37.8 °C. (iii) Flashpoint. (iv) Weight of ash content. (v) Weight of water content. (vi) Relative density at 15 °C of the water used during testing and...

  7. 46 CFR 162.050-9 - Test report.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Relative density at 15 °C. (ii) Viscosity in centistokes at 37.8 °C. (iii) Flashpoint. (iv) Weight of ash content. (v) Weight of water content. (vi) Relative density at 15 °C of the water used during testing and...

  8. Encircling the dark: constraining dark energy via cosmic density in spheres

    NASA Astrophysics Data System (ADS)

    Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.

    2016-08-01

    The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on wp and wa for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.

  9. Why anthropic reasoning cannot predict Lambda.

    PubMed

    Starkman, Glenn D; Trotta, Roberto

    2006-11-17

    We revisit anthropic arguments purporting to explain the measured value of the cosmological constant. We argue that different ways of assigning probabilities to candidate universes lead to totally different anthropic predictions. As an explicit example, we show that weighting different universes by the total number of possible observations leads to an extremely small probability for observing a value of Lambda equal to or greater than what we now measure. We conclude that anthropic reasoning within the framework of probability as frequency is ill-defined and that in the absence of a fundamental motivation for selecting one weighting scheme over another the anthropic principle cannot be used to explain the value of Lambda, nor, likely, any other physical parameters.

  10. On the Composition of Risk Preference and Belief

    ERIC Educational Resources Information Center

    Wakker, Peter P.

    2004-01-01

    Prospect theory assumes nonadditive decision weights for preferences over risky gambles. Such decision weights generalize additive probabilities. This article proposes a decomposition of decision weights into a component reflecting risk attitude and a new component depending on belief. The decomposition is based on an observable preference…

  11. Parasite transmission in social interacting hosts: Monogenean epidemics in guppies

    USGS Publications Warehouse

    Johnson, M.B.; Lafferty, K.D.; van Oosterhout, C.; Cable, J.

    2011-01-01

    Background: Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings: Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance: These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density. © 2011 Johnson et al.

  12. Parasite transmission in social interacting hosts: Monogenean epidemics in guppies

    USGS Publications Warehouse

    Johnson, Mirelle B.; Lafferty, Kevin D.; van Oosterhout, Cock; Cable, Joanne

    2011-01-01

    Background Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density.

  13. Randomized path optimization for the mitigated counter detection of UAVs

    DTIC Science & Technology

    2017-06-01

    A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position and predict its terminal location. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal location and to assess the algorithm's success.

  14. Effects of environmental covariates and density on the catchability of fish populations and interpretation of catch per unit effort trends

    USGS Publications Warehouse

    Korman, Josh; Yard, Mike

    2017-01-01

    Quantifying temporal and spatial trends in abundance or relative abundance is required to evaluate effects of harvest and changes in habitat for exploited and endangered fish populations. In many cases, the proportion of the population or stock that is captured (catchability or capture probability) is unknown but is often assumed to be constant over space and time. We used data from a large-scale mark-recapture study to evaluate the extent of spatial and temporal variation, and the effects of fish density, fish size, and environmental covariates, on the capture probability of rainbow trout (Oncorhynchus mykiss) in the Colorado River, AZ. Estimates of capture probability for boat electrofishing varied 5-fold across five reaches, 2.8-fold across the range of fish densities that were encountered, 2.1-fold over 19 trips, and 1.6-fold over five fish size classes. Shoreline angle and turbidity were the best covariates explaining variation in capture probability across reaches and trips. Patterns in capture probability were driven by changes in gear efficiency and spatial aggregation, but the latter was more important. Failure to account for effects of fish density on capture probability when translating a historical catch per unit effort time series into a time series of abundance led to 2.5-fold underestimation of the maximum extent of variation in abundance over the period of record, and resulted in unreliable estimates of relative change in critical years. Catch per unit effort surveys have utility for monitoring long-term trends in relative abundance, but are too imprecise and potentially biased to evaluate population response to habitat changes or to modest changes in fishing effort.

  15. Wavefronts, actions and caustics determined by the probability density of an Airy beam

    NASA Astrophysics Data System (ADS)

    Espíndola-Ramos, Ernesto; Silva-Ortigoza, Gilberto; Sosa-Sánchez, Citlalli Teresa; Julián-Macías, Israel; de Jesús Cabrera-Rosas, Omar; Ortega-Vidals, Paula; Alejandro Juárez-Reyes, Salvador; González-Juárez, Adriana; Silva-Ortigoza, Ramón

    2018-07-01

    The main contribution of the present work is to use the probability density of an Airy beam to identify its maxima with the family of caustics associated with the wavefronts determined by the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a given potential. To this end, we give a classical mechanics characterization of a solution of the one-dimensional Schrödinger equation in free space determined by a complete integral of the Hamilton–Jacobi and Laplace equations in free space. That is, with this type of solution, we associate a two-parameter family of wavefronts in the spacetime, which are the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a determined potential, and a one-parameter family of caustics. The general results are applied to an Airy beam to show that the maxima of its probability density provide a discrete set of: caustics, wavefronts and potentials. The results presented here are a natural generalization of those obtained by Berry and Balazs in 1979 for an Airy beam. Finally, we remark that, in a natural manner, each maxima of the probability density of an Airy beam determines a Hamiltonian system.

  16. Wood density-moisture profiles in old-growth Douglas-fir and western hemlock.

    Treesearch

    W.Y. Pong; Dale R. Waddell; Michael B. Lambert

    1986-01-01

    Accurate estimation of the weight of each load of logs is necessary for safe and efficient aerial logging operations. The prediction of green density (lb/ft3) as a function of height is a critical element in the accurate estimation of tree bole and log weights. Two sampling methods, disk and increment core (Bergstrom xylodensimeter), were used to measure the density-...

  17. Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.

    PubMed

    Guo, Lian; Radisic, Aleksandar; Searson, Peter C

    2005-12-22

    Nucleation and growth during bulk electrodeposition is studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.

  18. Cooperative Localization for Multi-AUVs Based on GM-PHD Filters and Information Entropy Theory

    PubMed Central

    Zhang, Lichuan; Wang, Tonghao; Xu, Demin

    2017-01-01

    Cooperative localization (CL) is considered a promising method for underwater localization with respect to multiple autonomous underwater vehicles (multi-AUVs). In this paper, we proposed a CL algorithm based on information entropy theory and the probability hypothesis density (PHD) filter, aiming to enhance the global localization accuracy of the follower. In the proposed framework, the follower carries lower cost navigation systems, whereas the leaders carry better ones. Meanwhile, the leaders acquire the followers’ observations, including both measurements and clutter. Then, the PHD filters are utilized on the leaders and the results are communicated to the followers. The followers then perform weighted summation based on all received messages and obtain a final positioning result. Based on the information entropy theory and the PHD filter, the follower is able to acquire a precise knowledge of its position. PMID:28991191

  19. Mathematical modeling of synthetic unit hydrograph case study: Citarum watershed

    NASA Astrophysics Data System (ADS)

    Islahuddin, Muhammad; Sukrainingtyas, Adiska L. A.; Kusuma, M. Syahril B.; Soewono, Edy

    2015-09-01

    Deriving the unit hydrograph is very important in analyzing a watershed's hydrologic response to a rainfall event. In many cases, the hourly streamflow data needed to derive a unit hydrograph are not available. Hence, one needs methods for deriving unit hydrographs for ungauged watersheds. The methods that have evolved are based on theoretical or empirical formulas relating hydrograph peak discharge and timing to watershed characteristics; these are usually referred to as synthetic unit hydrographs. In this paper, a gamma probability density function and its variant are used as mathematical approximations of a unit hydrograph for the Citarum Watershed. The model is adjusted to real field conditions by translation and scaling. Optimal parameters are determined by using the Particle Swarm Optimization method with a weighted objective function. With these models, a synthetic unit hydrograph can be developed and hydrologic parameters can be well predicted.
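
    A minimal sketch of a gamma-based synthetic unit hydrograph with translation (lag) and scaling; the shape, scale, and lag values below are placeholders, not the calibrated Citarum parameters obtained by Particle Swarm Optimization.

```python
import numpy as np
from scipy.stats import gamma

t = np.linspace(0.0, 48.0, 481)                   # time axis in hours
shape, scale, lag = 3.5, 2.0, 1.0                 # assumed gamma parameters and lag (h)

uh = gamma.pdf(t - lag, a=shape, scale=scale)     # translated gamma density (zero before the lag)
uh /= np.trapz(uh, t)                             # rescale so the hydrograph encloses unit volume

t_peak = t[np.argmax(uh)]
print(f"peak at {t_peak:.1f} h, peak ordinate {uh.max():.4f} per hour")
```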

  20. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
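
    A toy version of the single-sensor idea, assuming a logistic detector characterization, simple spherical-spreading transmission loss in place of full propagation modeling, and made-up source-level and noise-level distributions:

```python
import numpy as np

rng = np.random.default_rng(2)

def detection_prob_at_range(r_km, n_draws=100_000):
    """Monte Carlo estimate of P(detect a click) at horizontal range r_km (illustrative)."""
    SL = rng.normal(200.0, 10.0, n_draws)              # source level, dB re 1 uPa (assumed)
    NL = rng.normal(70.0, 3.0, n_draws)                # ambient noise level, dB (assumed)
    r_m = r_km * 1000.0
    TL = 20.0 * np.log10(r_m) + 0.03 * r_km            # spreading + absorption stand-in for propagation modeling
    snr = SL - TL - NL                                 # passive sonar equation (no directivity term)
    p_det = 1.0 / (1.0 + np.exp(-(snr - 10.0)))        # assumed detector characterization P(detect | SNR)
    return p_det.mean()

for r in (1, 3, 6):
    print(f"{r} km: {detection_prob_at_range(r):.3f}")
```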

  1. Oak regeneration and overstory density in the Missouri Ozarks

    Treesearch

    David R. Larsen; Monte A. Metzger

    1997-01-01

    Reducing overstory density is a commonly recommended method of increasing the regeneration potential of oak (Quercus) forests. However, recommendations seldom specify the probable increase in density or the size of reproduction associated with a given residual overstory density. This paper presents logistic regression models that describe this...

  2. PubMed-supported clinical term weighting approach for improving inter-patient similarity measure in diagnosis prediction.

    PubMed

    Chan, Lawrence Wc; Liu, Ying; Chan, Tao; Law, Helen Kw; Wong, S C Cesar; Yeung, Andy Ph; Lo, K F; Yeung, S W; Kwok, K Y; Chan, William Yl; Lau, Thomas Yh; Shyu, Chi-Ren

    2015-06-02

    Similarity-based retrieval of Electronic Health Records (EHRs) from large clinical information systems provides physicians with evidence to support making diagnoses or referring examinations for suspected cases. Clinical terms in EHRs represent high-level conceptual information, and the similarity measure established based on these terms reflects the chance of inter-patient disease co-occurrence. The assumption that clinical terms are equally relevant to a disease is unrealistic, reducing the prediction accuracy. Here we propose a term weighting approach supported by the PubMed search engine to address this issue. We collected and studied 112 abdominal computed tomography imaging examination reports from four hospitals in Hong Kong. Clinical terms, which are the image findings related to hepatocellular carcinoma (HCC), were extracted from the reports. Through two systematic PubMed search methods, the generic and specific term weightings were established by estimating the conditional probabilities of clinical terms given HCC. Each report was characterized by an ontological feature vector, and there were a total of 6216 vector pairs. We optimized the modified direction cosine (mDC) with respect to a regularization constant embedded into the feature vector. Equal, generic and specific term weighting approaches were applied to measure the similarity of each pair, and their performances for predicting inter-patient co-occurrence of HCC diagnoses were compared by using Receiver Operating Characteristic (ROC) analysis. The areas under the curves (AUROCs) of similarity scores based on equal, generic and specific term weighting approaches were 0.735, 0.728 and 0.743 respectively (p < 0.01). In comparison with equal term weighting, the performance was significantly improved by specific term weighting (p < 0.01) but not by generic term weighting. The clinical terms "Dysplastic nodule", "nodule of liver" and "equal density (isodense) lesion" were found to be the top three image findings associated with HCC in PubMed. Our findings suggest that the optimized similarity measure with specific term weighting applied to EHRs can significantly improve the accuracy of predicting inter-patient co-occurrence of diagnoses when compared with equal and generic term weighting approaches.
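
    The flavor of term-weighted inter-patient similarity can be sketched with a plain weighted cosine between binary finding vectors (omitting the paper's regularization constant in the modified direction cosine). The PubMed-derived weights and report vectors below are invented placeholders.

```python
import numpy as np

# Illustrative term weights, standing in for estimated P(term | HCC) from PubMed searches.
term_weights = np.array([0.8, 0.6, 0.3, 0.1])

# Binary ontological feature vectors for two reports (1 = image finding present).
report_a = np.array([1, 1, 0, 1])
report_b = np.array([1, 0, 0, 1])

def weighted_cosine(u, v, w):
    """Cosine similarity after scaling each term by its weight (plain variant, no regularization)."""
    uw, vw = u * w, v * w
    return float(uw @ vw / (np.linalg.norm(uw) * np.linalg.norm(vw)))

print(weighted_cosine(report_a, report_b, term_weights))
```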

  3. Proposed mechanism for learning and memory erasure in a white-noise-driven sleeping cortex.

    PubMed

    Steyn-Ross, Moira L; Steyn-Ross, D A; Sleigh, J W; Wilson, M T; Wilcocks, Lara C

    2005-12-01

    Understanding the structure and purpose of sleep remains one of the grand challenges of neurobiology. Here we use a mean-field linearized theory of the sleeping cortex to derive statistics for synaptic learning and memory erasure. The growth in correlated low-frequency high-amplitude voltage fluctuations during slow-wave sleep (SWS) is characterized by a probability density function that becomes broader and shallower as the transition into rapid-eye-movement (REM) sleep is approached. At transition, the Shannon information entropy of the fluctuations is maximized. If we assume Hebbian-learning rules apply to the cortex, then its correlated response to white-noise stimulation during SWS provides a natural mechanism for a synaptic weight change that will tend to shut down reverberant neural activity. In contrast, during REM sleep the weights will evolve in a direction that encourages excitatory activity. These entropy and weight-change predictions lead us to identify the final portion of deep SWS that occurs immediately prior to transition into REM sleep as a time of enhanced erasure of labile memory. We draw a link between the sleeping cortex and Landauer's dissipation theorem for irreversible computing [R. Landauer, IBM J. Res. Devel. 5, 183 (1961)], arguing that because information erasure is an irreversible computation, there is an inherent entropy cost as the cortex transits from SWS into REM sleep.

  4. Proposed mechanism for learning and memory erasure in a white-noise-driven sleeping cortex

    NASA Astrophysics Data System (ADS)

    Steyn-Ross, Moira L.; Steyn-Ross, D. A.; Sleigh, J. W.; Wilson, M. T.; Wilcocks, Lara C.

    2005-12-01

    Understanding the structure and purpose of sleep remains one of the grand challenges of neurobiology. Here we use a mean-field linearized theory of the sleeping cortex to derive statistics for synaptic learning and memory erasure. The growth in correlated low-frequency high-amplitude voltage fluctuations during slow-wave sleep (SWS) is characterized by a probability density function that becomes broader and shallower as the transition into rapid-eye-movement (REM) sleep is approached. At transition, the Shannon information entropy of the fluctuations is maximized. If we assume Hebbian-learning rules apply to the cortex, then its correlated response to white-noise stimulation during SWS provides a natural mechanism for a synaptic weight change that will tend to shut down reverberant neural activity. In contrast, during REM sleep the weights will evolve in a direction that encourages excitatory activity. These entropy and weight-change predictions lead us to identify the final portion of deep SWS that occurs immediately prior to transition into REM sleep as a time of enhanced erasure of labile memory. We draw a link between the sleeping cortex and Landauer’s dissipation theorem for irreversible computing [R. Landauer, IBM J. Res. Devel. 5, 183 (1961)], arguing that because information erasure is an irreversible computation, there is an inherent entropy cost as the cortex transits from SWS into REM sleep.

  5. Chondromalacia patellae: diagnosis with MR imaging.

    PubMed

    McCauley, T R; Kier, R; Lynch, K J; Jokl, P

    1992-01-01

    Most previous studies of MR imaging for detection of chondromalacia have used T1-weighted images. We correlated findings on axial MR images of the knee with arthroscopic findings to determine MR findings of chondromalacia patellae on T2-weighted and proton density-weighted images. The study population included 52 patients who had MR examination of the knee with a 1.5-T unit and subsequent arthroscopy, which documented chondromalacia patellae in 29 patients and normal cartilage in 23. The patellar cartilage was assessed retrospectively for MR signal and contour characteristics. MR diagnosis based on the criteria of focal signal or focal contour abnormality on either the T2-weighted or proton density-weighted images yielded the highest correlation with the arthroscopic diagnosis of chondromalacia. When these criteria were used, patients with chondromalacia were detected with 86% sensitivity, 74% specificity, and 81% accuracy. MR diagnosis based on T2-weighted images alone was more sensitive and accurate than was diagnosis based on proton density-weighted images alone. In conclusion, most patients with chondromalacia patellae have focal signal or focal contour defects in the patellar cartilage on T2-weighted MR images. These findings are absent in most patients with arthroscopically normal cartilage.

  6. Aging ballistic Lévy walks

    NASA Astrophysics Data System (ADS)

    Magdziarz, Marcin; Zorawik, Tomasz

    2017-02-01

    Aging can be observed for numerous physical systems. In such systems statistical properties [like probability distribution, mean square displacement (MSD), first-passage time] depend on a time span ta between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite similarities these models react very differently to the delay ta. Aging weakly affects the shape of probability density function and MSD of standard Lévy walks. For the jump models the shape of the probability density function is changed drastically. Moreover for the wait-first jump model we observe a different behavior of MSD when ta≪t and ta≫t .

  7. On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Black, D. C.

    2001-01-01

    We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of the respective distributions reveals that in all cases the EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. The probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can best be characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2, turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
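
    A minimal sketch of the empirical-CDF step on hypothetical period data; comparing two such ECDFs (one for an EPC sample, one for an SB sample) is the kind of test that underlies the indistinguishability claim above.

```python
import numpy as np

def ecdf(samples):
    """Empirical CDF: sorted values and cumulative probabilities."""
    x = np.sort(np.asarray(samples, dtype=float))
    f = np.arange(1, x.size + 1) / x.size
    return x, f

# Hypothetical orbital periods (days); real samples would be drawn from the two catalogs.
periods = [3.5, 4.2, 12.0, 61.0, 116.7, 260.0, 437.0]
x, f = ecdf(periods)
print(list(zip(x, f.round(3))))
```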

  8. On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1995-01-01

    For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighting of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.

  9. Propensity, Probability, and Quantum Theory

    NASA Astrophysics Data System (ADS)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  10. The structure of tracheobronchial mucins from cystic fibrosis and control patients.

    PubMed

    Gupta, R; Jentoft, N

    1992-02-15

    Tracheobronchial mucin samples from control and cystic fibrosis patients were purified by gel filtration chromatography on Sephacryl S-1000 and by density gradient centrifugation. Normal secretions contained high molecular weight (approximately 10^7) mucins, whereas the cystic fibrosis secretions contained relatively small amounts of high molecular weight mucin together with larger quantities of lower molecular weight mucin fragments. These probably represent products of protease digestion. Reducing the disulfide bonds in either the control or cystic fibrosis high molecular weight mucin fractions released subunits of approximately 2000 kDa. Treating these subunits with trypsin released glycopeptides of 300 kDa. Trypsin treatment of unreduced mucin also released fragments of 2000 kDa that could be converted into 300-kDa glycopeptides upon disulfide bond reduction. Thus, protease-susceptible linkages within these mucins must be cross-linked by disulfide bonds so that the full effects of proteolytic degradation of mucins remain cryptic until disulfide bonds are reduced. Since various combinations of protease treatment and disulfide bond reduction release either 2000- or 300-kDa fragments, these fragments must represent important elements of mucin structure. The high molecular weight fractions of cystic fibrosis mucins appear to be indistinguishable from control mucins. Their amino acid compositions are the same, and various combinations of disulfide bond reduction and protease treatment release products of identical size and amino acid composition. Sulfate and carbohydrate compositions did vary considerably from sample to sample, but the limited number of samples tested did not demonstrate a cystic fibrosis-specific pattern. Thus, tracheobronchial mucins from cystic fibrosis and control patients are very similar, and both share the same generalized structure previously determined for salivary, cervical, and intestinal mucins.

  11. Use of generalized population ratios to obtain Fe XV line intensities and linewidths at high electron densities

    NASA Technical Reports Server (NTRS)

    Kastner, S. O.; Bhatia, A. K.

    1980-01-01

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.

  12. Use of generalized population ratios to obtain Fe XV line intensities and linewidths at high electron densities

    NASA Astrophysics Data System (ADS)

    Kastner, S. O.; Bhatia, A. K.

    1980-08-01

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.

  13. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

  14. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  15. Effect of Non-speckle Echo Signals on Tissue Characteristics for Liver Fibrosis using Probability Density Function of Ultrasonic B-mode image

    NASA Astrophysics Data System (ADS)

    Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki

    To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses a probability density function of echo signals from liver fibrosis, has been proposed. In this paper, an effect of non-speckle echo signals on tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were determined and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated with the removal of non-speckle signals.
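
    A minimal sketch of a multi-Rayleigh (mixture-of-Rayleigh) probability density, assuming two components with placeholder weights and scale parameters rather than values fitted to liver echo data:

```python
import numpy as np

def rayleigh_pdf(x, sigma):
    """Rayleigh probability density with scale parameter sigma."""
    return (x / sigma**2) * np.exp(-x**2 / (2.0 * sigma**2))

def multi_rayleigh_pdf(x, weights, sigmas):
    """Mixture of Rayleigh densities; weights are assumed to sum to 1."""
    x = np.asarray(x, dtype=float)
    return sum(w * rayleigh_pdf(x, s) for w, s in zip(weights, sigmas))

# Illustrative two-component model: low-amplitude normal speckle plus a
# higher-amplitude fibrotic component.
amplitudes = np.linspace(0.01, 5.0, 500)
pdf = multi_rayleigh_pdf(amplitudes, weights=[0.7, 0.3], sigmas=[0.5, 1.2])
```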

  16. Preformulation considerations for controlled release dosage forms. Part III. Candidate form selection using numerical weighting and scoring.

    PubMed

    Chrzanowski, Frank

    2008-01-01

    Two numerical methods, Decision Analysis (DA) and Potential Problem Analysis (PPA) are presented as alternative selection methods to the logical method presented in Part I. In DA properties are weighted and outcomes are scored. The weighted scores for each candidate are totaled and final selection is based on the totals. Higher scores indicate better candidates. In PPA potential problems are assigned a seriousness factor and test outcomes are used to define the probability of occurrence. The seriousness-probability products are totaled and forms with minimal scores are preferred. DA and PPA have never been compared to the logical-elimination method. Additional data were available for two forms of McN-5707 to provide complete preformulation data for five candidate forms. Weight and seriousness factors (independent variables) were obtained from a survey of experienced formulators. Scores and probabilities (dependent variables) were provided independently by Preformulation. The rankings of the five candidate forms, best to worst, were similar for all three methods. These results validate the applicability of DA and PPA for candidate form selection. DA and PPA are particularly applicable in cases where there are many candidate forms and where each form has some degree of unfavorable properties.
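
    A minimal sketch of the two scoring schemes with invented candidate names, weights, scores, seriousness factors, and probabilities; in practice the weights and seriousness factors would come from the formulator survey and the scores and probabilities from preformulation data.

```python
# Decision Analysis (DA): weight each property, score each candidate, total the
# weighted scores; higher totals indicate better candidates.
property_weights = {"stability": 5, "solubility": 4, "hygroscopicity": 2}

candidate_scores = {
    "Form A": {"stability": 8, "solubility": 6, "hygroscopicity": 9},
    "Form B": {"stability": 6, "solubility": 9, "hygroscopicity": 5},
}

da_totals = {
    form: sum(property_weights[p] * score for p, score in scores.items())
    for form, scores in candidate_scores.items()
}

# Potential Problem Analysis (PPA): sum of seriousness x probability of occurrence;
# candidates with minimal totals are preferred.
potential_problems = {
    "Form A": [(9, 0.1), (4, 0.3)],   # (seriousness, probability) pairs
    "Form B": [(9, 0.4), (4, 0.1)],
}
ppa_totals = {form: sum(s * p for s, p in probs) for form, probs in potential_problems.items()}

print("DA (higher is better):", da_totals)
print("PPA (lower is better):", ppa_totals)
```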

  17. Laboratory-Tutorial Activities for Teaching Probability

    ERIC Educational Resources Information Center

    Wittmann, Michael C.; Morgan, Jeffrey T.; Feeley, Roger E.

    2006-01-01

    We report on the development of students' ideas of probability and probability density in a University of Maine laboratory-based general education physics course called "Intuitive Quantum Physics". Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We…

  18. Stream permanence influences crayfish occupancy and abundance in the Ozark Highlands, USA

    USGS Publications Warehouse

    Yarra, Allyson N.; Magoulick, Daniel D.

    2018-01-01

    Crayfish use of intermittent streams is especially important to understand in the face of global climate change. We examined the influence of stream permanence and local habitat on crayfish occupancy and species densities in the Ozark Highlands, USA. We sampled in June and July 2014 and 2015. We used a quantitative kick–seine method to sample crayfish presence and abundance at 20 stream sites with 32 surveys/site in the Upper White River drainage, and we measured associated local environmental variables each year. We modeled site occupancy and detection probabilities with the software PRESENCE, and we used multiple linear regressions to identify relationships between crayfish species densities and environmental variables. Occupancy of all crayfish species was related to stream permanence. Faxonius meeki was found exclusively in intermittent streams, whereas Faxonius neglectus and Faxonius luteus had higher occupancy and detection probability in permanent than in intermittent streams, and Faxonius williamsi was associated with intermittent streams. Estimates of detection probability ranged from 0.56 to 1, which is high relative to values found by other investigators. With the exception of F. williamsi, species densities were largely related to stream permanence rather than local habitat. Species densities did not differ by year, but total crayfish densities were significantly lower in 2015 than 2014. Increased precipitation and discharge in 2015 probably led to the lower crayfish densities observed during this year. Our study demonstrates that crayfish distribution and abundance is strongly influenced by stream permanence. Some species, including those of conservation concern (i.e., F. williamsi, F. meeki), appear dependent on intermittent streams, and conservation efforts should include consideration of intermittent streams as an important component of freshwater biodiversity.

  19. Pseudo Bayes Estimates for Test Score Distributions and Chained Equipercentile Equating. Research Report. ETS RR-09-47

    ERIC Educational Resources Information Center

    Moses, Tim; Oh, Hyeonjoo J.

    2009-01-01

    Pseudo Bayes probability estimates are weighted averages of raw and modeled probabilities; these estimates have been studied primarily in nonpsychometric contexts. The purpose of this study was to evaluate pseudo Bayes probability estimates as applied to the estimation of psychometric test score distributions and chained equipercentile equating…
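
    As the abstract states, a pseudo Bayes estimate is simply a weighted average of a raw (observed) probability and a modeled probability. The toy sketch below uses a fixed mixing weight and a binomial model density; in practice both the weight and the model are estimated from the data.

```python
# Toy pseudo-Bayes smoothing of a test-score distribution: weighted average of
# raw relative frequencies and model-based (here, binomial) probabilities.
# The mixing weight K below is hypothetical; in practice it is data-driven.
import numpy as np
from scipy.stats import binom

counts = np.array([2, 5, 9, 14, 10, 6, 3, 1])        # made-up score frequencies
raw = counts / counts.sum()                           # raw probabilities
scores = np.arange(len(counts))
model = binom.pmf(scores, n=len(counts) - 1, p=0.45)  # modeled probabilities

K = 0.7                                               # weight on the raw estimate
pseudo_bayes = K * raw + (1 - K) * model
print(pseudo_bayes.round(3), pseudo_bayes.sum())      # still sums to 1
```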

  20. Derivation of an eigenvalue probability density function relating to the Poincaré disk

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Krishnapur, Manjunath

    2009-09-01

    A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n >= N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A-1B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
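
    For reference, the density in question (recalled here from the truncated unitary ensemble literature, with the normalization constant omitted) is supported on the unit disk and reads:

```latex
% Joint eigenvalue density of the top N x N sub-block of a Haar U(N+n) matrix
% (truncated unitary ensemble), |z_j| < 1; the constant of proportionality is omitted.
P(z_1,\dots,z_N) \propto \prod_{1 \le j < k \le N} |z_j - z_k|^2
                         \prod_{j=1}^{N} \left(1 - |z_j|^2\right)^{n-1}
```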

  1. A very efficient approach to compute the first-passage probability density function in a time-changed Brownian model: Applications in finance

    NASA Astrophysics Data System (ADS)

    Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide

    2016-12-01

    We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.

  2. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
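
    For orientation, a generic statement of the MRE optimization (a sketch of the standard formulation, not a transcription of the authors' MATLAB code) is: find the density q(m) closest to the prior p(m) in relative entropy, subject to normalization and to reproducing the measured data in expectation.

```latex
% Generic minimum relative entropy formulation (sketch): minimize the relative
% entropy of q with respect to the prior p, subject to normalization and to the
% measured data being reproduced in expectation.
\min_{q}\; \int q(\mathbf{m}) \ln\frac{q(\mathbf{m})}{p(\mathbf{m})}\, d\mathbf{m}
\quad \text{s.t.} \quad \int q(\mathbf{m})\, d\mathbf{m} = 1,
\qquad G\,\mathbb{E}_q[\mathbf{m}] = \mathbf{d}.
```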

  3. New paradoxes of risky decision making.

    PubMed

    Birnbaum, Michael H

    2008-04-01

    During the last 25 years, prospect theory and its successor, cumulative prospect theory, replaced expected utility as the dominant descriptive theories of risky decision making. Although these models account for the original Allais paradoxes, 11 new paradoxes show where prospect theories lead to self-contradiction or systematic false predictions. The new findings are consistent with and, in several cases, were predicted in advance by simple "configural weight" models in which probability-consequence branches are weighted by a function that depends on branch probability and ranks of consequences on discrete branches. Although they have some similarities to later models called "rank-dependent utility," configural weight models do not satisfy coalescing, the assumption that branches leading to the same consequence can be combined by adding their probabilities. Nor do they satisfy cancellation, the "independence" assumption that branches common to both alternatives can be removed. The transfer of attention exchange model, with parameters estimated from previous data, correctly predicts results with all 11 new paradoxes. Apparently, people do not frame choices as prospects but, instead, as trees with branches.

  4. Using areas of known occupancy to identify sources of variation in detection probability of raptors: taking time lowers replication effort for surveys.

    PubMed

    Murn, Campbell; Holloway, Graham J

    2016-10-01

    Species occurring at low density can be difficult to detect and if not properly accounted for, imperfect detection will lead to inaccurate estimates of occupancy. Understanding sources of variation in detection probability and how they can be managed is a key part of monitoring. We used sightings data of a low-density and elusive raptor (white-headed vulture, Trigonoceps occipitalis) in areas of known occupancy (breeding territories) in a likelihood-based modelling approach to calculate detection probability and the factors affecting it. Because occupancy was known a priori to be 100%, we fixed the model occupancy parameter to 1.0 and focused on identifying sources of variation in detection probability. Using detection histories from 359 territory visits, we assessed nine covariates in 29 candidate models. The model with the highest support indicated that observer speed during a survey, combined with temporal covariates such as time of year and length of time within a territory, had the highest influence on the detection probability. Averaged detection probability was 0.207 (s.e. 0.033) and based on this the mean number of visits required to determine within 95% confidence that white-headed vultures are absent from a breeding area is 13 (95% CI: 9-20). Topographical and habitat covariates contributed little to the best models and had little effect on detection probability. We highlight that low detection probabilities of some species mean that emphasizing habitat covariates could lead to spurious results in occupancy models that do not also incorporate temporal components. While variation in detection probability is complex and influenced by effects at both temporal and spatial scales, temporal covariates can and should be controlled as part of robust survey methods. Our results emphasize the importance of accounting for detection probability in occupancy studies, particularly during presence/absence studies for species such as raptors that are widespread and occur at low densities.
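
    The reported 13 visits follows from the standard absence-confidence calculation: if the per-visit detection probability is p and visits are independent, the chance of missing the species on all N visits is (1 - p)^N, so N is the smallest integer with (1 - p)^N <= 0.05. A quick check with the reported p = 0.207:

```python
# Number of visits needed to infer absence with 95% confidence, given a
# per-visit detection probability p (assumes independent visits).
import math

p = 0.207
N = math.ceil(math.log(0.05) / math.log(1 - p))
print(N)  # 13, matching the value reported in the abstract
```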

  5. The relationship between species detection probability and local extinction probability

    USGS Publications Warehouse

    Alpizar-Jara, R.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Pollock, K.H.; Rosenberry, C.S.

    2004-01-01

    In community-level ecological studies, generally not all species present in sampled areas are detected. Many authors have proposed the use of estimation methods that allow detection probabilities that are < 1 and that are heterogeneous among species. These methods can also be used to estimate community-dynamic parameters such as species local extinction probability and turnover rates (Nichols et al. Ecol Appl 8:1213-1225; Conserv Biol 12:1390-1398). Here, we present an ad hoc approach to estimating community-level vital rates in the presence of joint heterogeneity of detection probabilities and vital rates. The method consists of partitioning the number of species into two groups using the detection frequencies and then estimating vital rates (e.g., local extinction probabilities) for each group. Estimators from each group are combined in a weighted estimator of vital rates that accounts for the effect of heterogeneity. Using data from the North American Breeding Bird Survey, we computed such estimates and tested the hypothesis that detection probabilities and local extinction probabilities were negatively related. Our analyses support the hypothesis that species detection probability covaries negatively with local probability of extinction and turnover rates. A simulation study was conducted to assess the performance of vital parameter estimators as well as other estimators relevant to questions about heterogeneity, such as coefficient of variation of detection probabilities and proportion of species in each group. Both the weighted estimator suggested in this paper and the original unweighted estimator for local extinction probability performed fairly well and provided no basis for preferring one to the other.

  6. Accounting for Selection Bias in Studies of Acute Cardiac Events.

    PubMed

    Banack, Hailey R; Harper, Sam; Kaufman, Jay S

    2018-06-01

    In cardiovascular research, pre-hospital mortality represents an important potential source of selection bias. Inverse probability of censoring weighting is a method to account for this source of bias. The objective of this article is to examine and correct for the influence of selection bias due to pre-hospital mortality on the relationship between cardiovascular risk factors and all-cause mortality after an acute cardiac event. The relationship between the number of cardiovascular disease (CVD) risk factors (0-5; smoking status, diabetes, hypertension, dyslipidemia, and obesity) and all-cause mortality was examined using data from the Atherosclerosis Risk in Communities (ARIC) study. To illustrate the magnitude of selection bias, estimates from an unweighted generalized linear model with a log link and binomial distribution were compared with estimates from an inverse probability of censoring weighted model. In unweighted multivariable analyses the estimated risk ratio for mortality ranged from 1.09 (95% confidence interval [CI], 0.98-1.21) for 1 CVD risk factor to 1.95 (95% CI, 1.41-2.68) for 5 CVD risk factors. In the inverse-probability-of-censoring-weighted analyses, the risk ratios ranged from 1.14 (95% CI, 0.94-1.39) to 4.23 (95% CI, 2.69-6.66). Estimates from the inverse probability of censoring weighted model were substantially greater than unweighted, adjusted estimates across all risk factor categories. This shows the magnitude of selection bias due to pre-hospital mortality and its effect on estimates of the effect of CVD risk factors on mortality. Moreover, the results highlight the utility of using this method to address a common form of bias in cardiovascular research. Copyright © 2018 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
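
    The IPCW recipe sketched in the abstract is: model each subject's probability of remaining uncensored (here, surviving to hospital) given covariates, then weight the uncensored subjects by the inverse of that probability in the outcome model. A minimal sketch on simulated data follows; it uses scikit-learn logistic regressions (rather than the paper's log-link binomial GLM), and all variable names and coefficients are invented.

```python
# Minimal inverse-probability-of-censoring-weights sketch on simulated data;
# variable names and the data-generating process are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
risk_factors = rng.integers(0, 6, size=n)              # 0-5 CVD risk factors
# probability of surviving to hospital (being uncensored) declines with risk factors
p_uncensored = 1 / (1 + np.exp(-(2.0 - 0.4 * risk_factors)))
uncensored = rng.random(n) < p_uncensored

# Step 1: model censoring and form weights for the uncensored subjects
cens_model = LogisticRegression().fit(risk_factors.reshape(-1, 1), uncensored)
p_hat = cens_model.predict_proba(risk_factors.reshape(-1, 1))[:, 1]
weights = 1.0 / p_hat[uncensored]

# Step 2: fit the weighted outcome model among uncensored subjects only
death = rng.random(n) < 1 / (1 + np.exp(-(-2.0 + 0.3 * risk_factors)))
outcome_model = LogisticRegression().fit(
    risk_factors[uncensored].reshape(-1, 1),
    death[uncensored],
    sample_weight=weights,
)
print(outcome_model.coef_)
```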

  7. Stability and Structure of Star-Shape Granules

    NASA Astrophysics Data System (ADS)

    Zhao, Yuchen; Bares, Jonathan; Zheng, Matthew; Dierichs, Karola; Menges, Achim; Behringer, Robert

    2015-11-01

    Columns made of convex, non-cohesive grains such as sand collapse after being released from their initial positions. On the other hand, various architectures built from concave grains can maintain stability. We explore why these structures are stable, and how stable they can be. We performed experiments by randomly pouring identical star-shape particles into hollow cylinders resting on glass or on a rough base, and observed stable granular columns after lifting the cylinders. Particles have six 9 mm arms, which extend symmetrically in the xyz directions. Both the probability of creating a stable column and mechanical stability aspects have been investigated. We define r as the weight fraction of particles that fall out of the column after removing confinement. r gradually increases as the column height increases, or the column diameter decreases. We also explored different experimental conditions such as vibration of columns with confinement, or large basal friction. We also consider different stability measures such as the maximum inclination angle or maximum weight a column can support. In order to understand the structure leading to stability, 3D CT scan reconstructions of columns have been done, and coordination number and packing density will be discussed. We acknowledge support from the W. M. Keck Foundation and the Research Triangle MRSEC.

  8. The affine cohomology spaces and its applications

    NASA Astrophysics Data System (ADS)

    Fraj, Nizar Ben; Laraiedh, Ismail

    2016-12-01

    We compute the nth cohomology space of the affine Lie superalgebra 𝔞𝔣𝔣(1) on the (1,1)-dimensional real superspace with coefficients in a large class of 𝔞𝔣𝔣(1)-modules M. We apply our results to the module of weighted densities and the module of linear differential operators acting on a superspace of weighted densities. This work is a generalization of a result by Basdouri et al. [The linear 𝔞𝔣𝔣(n|1)-invariant differential operators on weighted densities on the superspace ℝ1|n and 𝔞𝔣𝔣(n|1)-relative cohomology, Int. J. Geom. Meth. Mod. Phys. 10 (2013), Article ID: 1320004, 9 pp.].

  9. Can we estimate molluscan abundance and biomass on the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.

    2017-11-01

    Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
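
    The "survey availability event" idea (a survey index outside 75-125% of truth) can be explored with a quick Monte Carlo sketch: draw clustered counts for k random samples per simulated survey and tabulate how often the index misses that window. The negative-binomial patchiness below is an arbitrary stand-in for real spatial structure, not the surfclam or oyster data.

```python
# Toy Monte Carlo: probability that a survey's abundance index falls outside
# 75-125% of the true mean, as a function of the number of random samples.
# The negative-binomial clustering parameters are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(1)
true_mean = 10.0       # true mean animals per sample unit
dispersion = 1.0       # small dispersion => highly clustered population

def availability_event_prob(n_samples, n_surveys=20000):
    # negative binomial parameterized so that its mean equals true_mean
    p = dispersion / (dispersion + true_mean)
    counts = rng.negative_binomial(dispersion, p, size=(n_surveys, n_samples))
    index = counts.mean(axis=1)
    return np.mean((index > 1.25 * true_mean) | (index < 0.75 * true_mean))

for k in (2, 4, 8, 15, 30):
    print(k, round(availability_event_prob(k), 3))
```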

  10. Automated side-chain model building and sequence assignment by template matching.

    PubMed

    Terwilliger, Thomas C

    2003-01-01

    An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.

  11. The Havriliak-Negami relaxation and its relatives: the response, relaxation and probability density functions

    NASA Astrophysics Data System (ADS)

    Górska, K.; Horzela, A.; Bratek, Ł.; Dattoli, G.; Penson, K. A.

    2018-04-01

    We study functions related to the experimentally observed Havriliak-Negami dielectric relaxation pattern, proportional in the frequency domain to [1 + (iωτ0)^α]^(-β) with τ0 > 0 being some characteristic time. For α = l/k < 1 (l and k being positive and relatively prime integers) and β > 0 we furnish exact and explicit expressions for response and relaxation functions in the time domain and suitable probability densities in their domain dual in the sense of the inverse Laplace transform. All these functions are expressed as finite sums of generalized hypergeometric functions, convenient to handle analytically and numerically. Introducing a reparameterization β = (2-q)/(q-1) and τ0 = (q-1)^(1/α) (1 < q < 2) we show that for 0 < α < 1 the response functions f_(α,β)(t/τ0) go to the one-sided Lévy stable distributions when q tends to one. Moreover, applying the self-similarity property of the probability densities g_(α,β)(u), we introduce two-variable densities and show that they satisfy the integral form of the evolution equation.

  12. MR arthrography in chondromalacia patellae diagnosis on a low-field open magnet system.

    PubMed

    Harman, Mustafa; Ipeksoy, Umit; Dogan, Ali; Arslan, Halil; Etlik, Omer

    2003-01-01

    The purpose of this study was to compare the diagnostic efficacy of conventional MRI and MR arthrography (MRA) in the diagnosis of chondromalacia patella (CP) on a low-field open magnet system (LFOMS), correlated with arthroscopy. Forty-two patients (50 knees) with pain in the anterior part of the knee were prospectively examined with LFOMS, including T1-weighted, proton density-weighted and T2-weighted sequences. All were also examined with T1-weighted MRI after intra-articular injection of dilute gadopentetate dimeglumine. Two observers, who reached a consensus interpretation, evaluated each imaging technique independently. Thirty-six of the 50 facets examined had chondromalacia shown by arthroscopy, which was used as the standard of reference. The sensitivity, specificity and accuracy of each imaging technique in the diagnosis of each stage of CP were determined and compared by using the McNemar two-tailed analysis. Arthroscopy showed that 16 facets were normal. Four (30%) of 13 grade 1 lesions were detected with T1-weighted images, four (30%) with T2-weighted images and three (23%) with proton density-weighted images. Seven (53%) of 13 grade 1 lesions were detected with MRA. Grade 2 abnormalities were diagnosed in two (33%) of six facets with proton density-weighted pulse sequences, in two (33%) of six facets with T1-weighted pulse sequences, in three (50%) of six facets with T2-weighted pulse sequences, and in five (83%) of six facets with MRA sequences. Grade 3 abnormalities were diagnosed in three (71%) of seven facets with proton density- and T1-weighted images, in five (71%) of seven facets with T2-weighted pulse sequences, and in six (85%) of seven facets with MRA sequences. Grade 4 CP was detected with equal sensitivity with T1-, proton density- and T2-weighted pulse sequences, all showing seven (87%) of the eight lesions. MRA again showed these findings in all eight patients. All imaging techniques were insensitive to grade 1 lesions and highly sensitive to grade 4 lesions, so that no significant difference among the techniques could be shown. All imaging techniques studied had high specificity and accuracy in the detection and grading of CP; however, MRA was more sensitive than T1-weighted and proton density-weighted MR imaging on a LFOMS. Although the arthrographic technique was not significantly better than T2-weighted imaging, the number of false-positive diagnoses was greatest with T2-weighted MRI.

  13. Improved Neuroimaging Atlas of the Dentate Nucleus.

    PubMed

    He, Naying; Langley, Jason; Huddleston, Daniel E; Ling, Huawei; Xu, Hongmin; Liu, Chunlei; Yan, Fuhua; Hu, Xiaoping P

    2017-12-01

    The dentate nucleus (DN) of the cerebellum is the major output nucleus of the cerebellum and is rich in iron. Quantitative susceptibility mapping (QSM) provides better iron-sensitive MRI contrast to delineate the boundary of the DN than either T2-weighted images or susceptibility-weighted images. Prior DN atlases used T2-weighted or susceptibility-weighted images to create DN atlases. Here, we employ QSM images to develop an improved dentate nucleus atlas for use in imaging studies. The DN was segmented in QSM images from 38 healthy volunteers. The resulting DN masks were transformed to a common space and averaged to generate the DN atlas. The center of mass of the left and right sides of the QSM-based DN atlas in the Montreal Neurological Institute space was -13.8, -55.8, and -36.4 mm, and 13.8, -55.7, and -36.4 mm, respectively. The maximal probability and mean probability of the DN atlas with the individually segmented DNs in this cohort were 100 and 39.3%, respectively, in contrast to the maximum probability of approximately 75% and the mean probability of 23.4 to 33.7% with earlier DN atlases. Using QSM, which provides superior iron-sensitive MRI contrast for delineating iron-rich structures, an improved atlas for the dentate nucleus has been generated. The atlas can be applied to investigate the role of the DN in both normal cortico-cerebellar physiology and the variety of disease states in which it is implicated.

  14. Investigation of mud density and weighting materials effect on drilling fluid filter cake properties and formation damage

    NASA Astrophysics Data System (ADS)

    Fattah, K. A.; Lashin, A.

    2016-05-01

    Drilling fluid density/type is an important factor in drilling and production operations. Most of the problems encountered during rotary drilling are related to drilling mud types and weights. This paper aims to investigate the effect of mud weight on filter cake properties and formation damage through two experimental approaches. In the first approach, seven water-based drilling fluid samples with the same composition were prepared with different densities (9.0-12.0 lb/gal) and examined to select the optimum mud weight that causes the least damage. The second approach deals with investigating the possible effect of different weighting materials (BaSO4 and CaCO3) on filter cake properties. High-pressure/high-temperature filtration loss tests and Scanning Electron Microscopy (SEM) analyses were carried out on the filter cake (two selected samples). Data analysis revealed that the 9.5 lb/gal mud weight produced the least reduction in permeability of the ceramic disk among the seven mud densities used. Above 10.5 lb/gal, the effect of mud weight on formation damage stabilized at a constant value. The fluid with the CaCO3-based weighting material showed less reduction in filter-disk porosity (9.14%) and permeability (25%) than the BaSO4-based fluid. The produced filter cake porosity increased (from 0.735 to 0.859) with decreasing fluid density for the drilling samples of different densities. The filtration loss tests indicated that the CaCO3 filter cake porosity (0.52) is less than that of the BaSO4-weighted material (0.814). The filter cake of the BaSO4-based fluid is thick and can cause some problems. The SEM analysis shows that several major elements occur in the tested samples (Ca, Al, Si, and Ba), with dominance of Ca at the expense of Ba for the CaCO3 fluid sample and vice versa. The smaller damaging effect of the 9.5 lb/gal mud sample is reflected in its well-developed inter-particle pore structure and crystal size. A general recommendation is given to minimize the future utilization of barium sulfate as a drilling fluid weighting material.

  15. Using the weighted area under the net benefit curve for decision curve analysis.

    PubMed

    Talluri, Rajesh; Shete, Sanjay

    2016-07-18

    Risk prediction models have been proposed for various diseases and are being improved as new predictors are identified. A major challenge is to determine whether the newly discovered predictors improve risk prediction. Decision curve analysis has been proposed as an alternative to the area under the curve and net reclassification index to evaluate the performance of prediction models in clinical scenarios. The decision curve computed using the net benefit can evaluate the predictive performance of risk models at a given threshold probability or over a range of threshold probabilities. However, when the decision curves for 2 competing models cross in the range of interest, it is difficult to identify the best model as there is no readily available summary measure for evaluating the predictive performance. The key deterrent for using simple measures such as the area under the net benefit curve is the assumption that the threshold probabilities are uniformly distributed among patients. We propose a novel measure for performing decision curve analysis. The approach estimates the distribution of threshold probabilities without the need of additional data. Using the estimated distribution of threshold probabilities, the weighted area under the net benefit curve serves as the summary measure to compare risk prediction models in a range of interest. We compared 3 different approaches: the standard method, the area under the net benefit curve, and the weighted area under the net benefit curve. Type I error and power comparisons demonstrate that the weighted area under the net benefit curve has higher power compared to the other methods. Several simulation studies are presented to demonstrate the improvement in model comparison using the weighted area under the net benefit curve compared to the standard method. The proposed measure improves decision curve analysis by using the weighted area under the curve and thereby improves the power of the decision curve analysis to compare risk prediction models in a clinical scenario.
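
    The building block of the analysis is the net benefit at threshold probability pt, NB(pt) = TP/n - (FP/n) x pt/(1 - pt); the weighted summary then averages NB over thresholds against an estimated distribution of threshold probabilities. In the sketch below, a Beta density stands in for that estimated distribution and the data are simulated, so it illustrates the weighting idea rather than the authors' exact estimator.

```python
# Sketch: net benefit curve for a risk model and its weighted summary, where
# the threshold-probability distribution is a stand-in Beta density (the paper
# estimates this distribution from the data instead).
import numpy as np
from scipy.stats import beta

def net_benefit(y, risk, pt):
    n = len(y)
    pred_pos = risk >= pt
    tp = np.sum(pred_pos & (y == 1))
    fp = np.sum(pred_pos & (y == 0))
    return tp / n - (fp / n) * pt / (1 - pt)

rng = np.random.default_rng(2)
y = rng.integers(0, 2, 500)                              # made-up outcomes
risk = np.clip(0.5 * y + 0.5 * rng.random(500), 0, 1)    # made-up risk scores

thresholds = np.linspace(0.05, 0.6, 100)
nb = np.array([net_benefit(y, risk, t) for t in thresholds])

w = beta.pdf(thresholds, 2, 5)                           # assumed threshold density
weighted_summary = np.sum(nb * w) / np.sum(w)            # discrete weighted average of NB
print(round(weighted_summary, 4))
```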

  16. Reader reaction to "a robust method for estimating optimal treatment regimes" by Zhang et al. (2012).

    PubMed

    Taylor, Jeremy M G; Cheng, Wenting; Foster, Jared C

    2015-03-01

    A recent article (Zhang et al., 2012, Biometrics 68, 1010-1018) compares regression-based and inverse-probability-based methods of estimating an optimal treatment regime and shows for a small number of covariates that inverse probability weighted methods are more robust to model misspecification than regression methods. We demonstrate that using models that fit the data better reduces the concern about non-robustness for the regression methods. We extend the simulation study of Zhang et al. (2012, Biometrics 68, 1010-1018), also considering the situation of a larger number of covariates, and show that incorporating random forests into both regression and inverse probability weighted methods improves their properties. © 2014, The International Biometric Society.

  17. Minimum Expected Risk Estimation for Near-neighbor Classification

    DTIC Science & Technology

    2006-04-01

    We consider the problems of class probability estimation and classification when using near-neighbor classifiers, such as k-nearest neighbors (kNN) ... estimate for weighted kNN classifiers with different prior information, for a broad class of risk functions. Theory and simulations show how significant ... the difference is compared to the standard maximum likelihood weighted kNN estimates. Comparisons are made with uniform weights, symmetric weights

  18. The triglyceride to high-density lipoprotein-cholesterol ratio in adolescence and subsequent weight gain predict nuclear magnetic resonance-measured lipoprotein subclasses in adulthood.

    PubMed

    Weiss, Ram; Otvos, James D; Sinnreich, Ronit; Miserez, Andre R; Kark, Jeremy D

    2011-01-01

    To assess whether the fasting triglyceride-to-high-density lipoprotein (HDL)-cholesterol (TG/HDL) ratio in adolescence is predictive of a proatherogenic lipid profile in adulthood. A longitudinal follow-up of 770 Israeli adolescents 16 to 17 years of age who participated in the Jerusalem Lipid Research Clinic study and were reevaluated 13 years later. Lipoprotein particle size was assessed at the follow-up with proton nuclear magnetic resonance. The TG/HDL ratio measured in adolescence was strongly associated with low-density lipoprotein, very low-density lipoprotein (VLDL), and HDL mean particle size in young adulthood in both sexes, even after adjustment for baseline body mass index and body mass index change. The TG/HDL ratio measured in adolescence and subsequent weight gain independently predicted atherogenic small low-density lipoprotein and large VLDL particle concentrations (P < .001 in both sexes). Baseline TG/HDL and weight gain interacted to increase large VLDL concentration in men (P < .001). Adolescents with an elevated TG/HDL ratio are prone to express a proatherogenic lipid profile in adulthood. This profile is additionally worsened by weight gain. Copyright © 2011 Mosby, Inc. All rights reserved.

  19. Significance of Epicardial and Intrathoracic Adipose Tissue Volume among Type 1 Diabetes Patients in the DCCT/EDIC: A Pilot Study

    PubMed Central

    Budoff, Matthew J.

    2016-01-01

    Introduction Type 1 diabetes (T1DM) patients are at increased risk of coronary artery disease (CAD). This pilot study sought to evaluate the relationship between epicardial adipose tissue (EAT) and intra-thoracic adipose tissue (IAT) volumes and cardio-metabolic risk factors in T1DM. Method EAT/IAT volumes in 100 patients who underwent non-contrast cardiac computed tomography in the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) study were measured by a certified reader. Fat was defined as pixel densities of -30 to -190 Hounsfield units. The associations were assessed using Pearson partial correlation and linear regression models adjusted for gender and age with inverse probability sample weighting. Results The weighted mean age was 43 years (range 32-57) and 53% were male. Adjusted for gender, Pearson correlation analysis showed a significant correlation between age and EAT/IAT volumes (both p<0.001). After adjusting for gender and age, participants with greater BMI, higher waist-to-hip ratio (WTH), higher weighted HbA1c, elevated triglyceride level, and a history of albumin excretion rate equal to or greater than 300 mg/d (AER≥300) or end stage renal disease (ESRD) had significantly larger EAT/IAT volumes. Conclusion T1DM patients with greater BMI, WTH ratio, weighted HbA1c level, triglyceride level and AER≥300/ESRD had significantly larger EAT/IAT volumes. Larger sample size studies are recommended to evaluate independence. PMID:27459689

  20. Walnut consumption in a weight reduction intervention: effects on body weight, biological measures, blood pressure and satiety.

    PubMed

    Rock, Cheryl L; Flatt, Shirley W; Barkai, Hava-Shoshana; Pakiz, Bilge; Heath, Dennis D

    2017-12-04

    Dietary strategies that help patients adhere to a weight reduction diet may increase the likelihood of weight loss maintenance and improved long-term health outcomes. Regular nut consumption has been associated with better weight management and less adiposity. The objective of this study was to compare the effects of a walnut-enriched reduced-energy diet to a standard reduced-energy-density diet on weight, cardiovascular disease risk factors, and satiety. Overweight and obese men and women (n = 100) were randomly assigned to a standard reduced-energy-density diet or a walnut-enriched (15% of energy) reduced-energy diet in the context of a behavioral weight loss intervention. Measurements were obtained at baseline and 3- and 6-month clinic visits. Participants rated hunger, fullness and anticipated prospective consumption at 3 time points during the intervention. Body measurements, blood pressure, physical activity, lipids, tocopherols and fatty acids were analyzed using repeated measures mixed models. Both study groups reduced body weight, body mass index and waist circumference (time effect p < 0.001 for each). Change in weight was -9.4 (0.9)% vs. -8.9 (0.7)% (mean [SE]), for the standard vs. walnut-enriched diet groups, respectively. Systolic blood pressure decreased in both groups at 3 months, but only the walnut-enriched diet group maintained a lower systolic blood pressure at 6 months. The walnut-enriched diet group, but not the standard reduced-energy-density diet group, reduced total cholesterol and low-density lipoprotein cholesterol (LDL-C) at 6 months, from 203 to 194 mg/dL and 121 to 112 mg/dL, respectively (p < 0.05). Self-reported satiety was similar in the groups. These findings provide further evidence that a walnut-enriched reduced-energy diet can promote weight loss that is comparable to a standard reduced-energy-density diet in the context of a behavioral weight loss intervention. Although weight loss in response to both dietary strategies was associated with improvements in cardiovascular disease risk factors, the walnut-enriched diet promoted more favorable effects on LDL-C and systolic blood pressure. The trial is registered at ClinicalTrials.gov (NCT02501889).

  1. Radiative transition of hydrogen-like ions in quantum plasma

    NASA Astrophysics Data System (ADS)

    Hu, Hongwei; Chen, Zhanbin; Chen, Wencong

    2016-12-01

    At fusion plasma electron temperature and number density regimes of 1 × 10^3-1 × 10^7 K and 1 × 10^28-1 × 10^31 /m^3, respectively, the excited states and radiative transition of hydrogen-like ions in fusion plasmas are studied. The results show that the quantum plasma model is more suitable to describe the fusion plasma than the Debye screening model. Relativistic correction to bound-state energies of the low-Z hydrogen-like ions is so small that it can be ignored. The transition probability decreases with plasma density, but the transition probabilities have the same order of magnitude in the same number density regime.

  2. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil

    Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method for deriving a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
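
    The core of the PDF method is that the density of the stochastic state obeys a deterministic PDE that can be solved on a grid. As a generic illustration (an Ornstein-Uhlenbeck toy problem with a simple explicit finite-difference scheme, not the power-generator equations of the paper), the sketch below evolves the corresponding Fokker-Planck equation and checks the resulting moments against the exact OU solution.

```python
# Illustration of the PDF-method idea on the simplest case: an Ornstein-Uhlenbeck
# "fluctuation" dx = -theta*x dt + sigma dW. Its PDF obeys the Fokker-Planck
# equation dp/dt = d(theta*x*p)/dx + (sigma^2/2) d2p/dx2, solved here with
# explicit finite differences. This is a generic sketch, not the paper's model.
import numpy as np

theta, sigma = 1.0, 0.5
x = np.linspace(-3.0, 3.0, 301)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / sigma**2                 # small step for explicit stability
drift = theta * x

p = np.exp(-(x - 1.0) ** 2 / (2 * 0.05))    # initial PDF: narrow Gaussian at x = 1
p /= p.sum() * dx

t, T = 0.0, 2.0
while t < T:
    adv = (drift[2:] * p[2:] - drift[:-2] * p[:-2]) / (2 * dx)
    dif = 0.5 * sigma**2 * (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2
    p[1:-1] += dt * (adv + dif)
    t += dt

mean = (x * p).sum() * dx
var = ((x - mean) ** 2 * p).sum() * dx
# exact OU moments for comparison
m_exact = 1.0 * np.exp(-theta * T)
v_exact = 0.05 * np.exp(-2 * theta * T) + sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * T))
print(round(mean, 3), round(m_exact, 3))
print(round(var, 3), round(v_exact, 3))
```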

  3. Reduced bone density in androgen-deficient women with acquired immune deficiency syndrome wasting.

    PubMed

    Huang, J S; Wilkie, S J; Sullivan, M P; Grinspoon, S

    2001-08-01

    Women with acquired immune deficiency syndrome wasting are at an increased risk of osteopenia because of low weight, changes in body composition, and hormonal alterations. Although women comprise an increasing proportion of human immunodeficiency virus-infected patients, prior studies have not investigated bone loss in this expanding population of patients. In this study we investigated bone density, bone turnover, and hormonal parameters in 28 women with acquired immune deficiency syndrome wasting and relative androgen deficiency (defined as free testosterone < or =3.0 pg/ml, weight < or =90% ideal body weight, weight loss > or =10% from preillness maximum weight, or weight <100% ideal body weight with weight loss > or =5% from preillness maximum weight). Total body (1.04 +/- 0.08 vs. 1.10 +/- 0.07 g/cm2, human immunodeficiency virus-infected vs. control respectively; P < 0.01), anteroposterior lumbar spine (0.94 +/- 0.12 vs. 1.03 +/- 0.09 g/cm2; P = 0.005), lateral lumbar spine (0.71 +/- 0.14 vs. 0.79 +/- 0.09 g/cm2; P = 0.02), and hip (Ward's triangle; 0.68 +/- 0.14 vs. 0.76 +/- 0.12 g/cm2; P = 0.05) bone density were reduced in the human immunodeficiency virus-infected compared with control subjects. Serum N-telopeptide, a measure of bone resorption, was increased in human immunodeficiency virus-infected patients, compared with control subjects (14.6 +/- 5.8 vs. 11.3 +/- 3.8 nmol/liter bone collagen equivalents, human immunodeficiency virus-infected vs. control respectively; P = 0.03). Although body mass index was similar between the groups, muscle mass was significantly reduced in the human immunodeficiency virus-infected vs. control subjects (16 +/- 4 vs. 21 +/- 4 kg, human immunodeficiency virus-infected vs. control, respectively; P < 0.0001). In univariate regression analysis, muscle mass (r = 0.53; P = 0.004) and estrogen (r = 0.51; P = 0.008), but not free testosterone (r = -0.05, P = 0.81), were strongly associated with lumbar spine bone density in the human immunodeficiency virus-infected patients. The association between muscle mass and bone density remained significant, controlling for body mass index, hormonal status, and age (P = 0.048) in multivariate regression analysis. These data indicate that both hormonal and body composition factors contribute to reduced bone density in women with acquired immune deficiency syndrome wasting. Anabolic strategies to increase muscle mass may be useful to increase bone density among osteopenic women with acquired immune deficiency syndrome wasting.

  4. Contamination characteristics and degradation behavior of low-density polyethylene film residues in typical farmland soils of China.

    PubMed

    Xu, Gang; Wang, Qunhui; Gu, Qingbao; Cao, Yunzhe; DU, Xiaoming; Li, Fasheng

    2006-01-01

    Low-density polyethylene (LDPE) film residues left in farmlands due to agricultural activities were extensively investigated to evaluate the present pollution situation in typical areas of LDPE film application, including Harbin, Baoding, and Handan in China. The survey results demonstrated that the film residues were ubiquitous within the investigated areas and the amount reached 2,400-8,200 g ha(-1). Breakage rates of the film residues were almost at the same level in the studied fields. There were relatively small amounts of film residues remaining in neighboring farmland fields without application of LDPE film. The studies showed that the sheets of LDPE residues had the same oxidative deterioration, which was probably due to photodegradation instead of biodegradation. The higher molecular weight components of the LDPE film gradually decreased, which was reflected by the appearance of small flakes detached from the film bodies. LDPE films in the investigated fields gradually deteriorated, and their decomposition progressed with increasing time left in the field. The degradation behaviors of LDPE films were confirmed by using Fourier transform infrared (FTIR), scanning electron microscopic (SEM), and gel permeation chromatography analyses.

  5. Spatial Analysis of PAHs in Soils along an Urban-Suburban-Rural Gradient: scale effect, distribution patterns, diffusion and influencing factors

    NASA Astrophysics Data System (ADS)

    Peng, Chi; Wang, Meie; Chen, Weiping

    2016-11-01

    Spatial statistical methods including Cokriging interpolation, Moran's I analysis, and geographically weighted regression (GWR) were used for studying the spatial characteristics of polycyclic aromatic hydrocarbon (PAH) accumulation in urban, suburban, and rural soils of Beijing. The concentrations of PAHs decreased spatially as the level of urbanization decreased. Generally, PAHs in soil showed two spatial patterns on the regional scale: (1) regional baseline depositions with a radius of 16.5 km related to the level of urbanization and (2) isolated pockets of soil contaminated with PAHs were found up to around 3.5 km from industrial point sources. In the urban areas, soil PAHs showed high spatial heterogeneity on the block scale, which was probably related to vegetation cover, land use, and physical soil disturbance. The distribution of total PAHs in urban blocks was unrelated to the indicators of the intensity of anthropogenic activity, namely population density, light intensity at night, and road density, but was significantly related to the same indicators in the suburban and rural areas. The moving averages of molecular ratios suggested that PAHs in the suburban and rural soils were a mix of local emissions and diffusion from urban areas.

  6. The effects of seed size on hybrids formed between oilseed rape (Brassica napus) and wild brown mustard (B. juncea).

    PubMed

    Liu, Yong-Bo; Tang, Zhi-Xi; Darmency, Henri; Stewart, C Neal; Di, Kun; Wei, Wei; Ma, Ke-ping

    2012-01-01

    Seed size has significant implications in ecology, because of its effects on plant fitness. The hybrid seeds that result from crosses between crops and their wild relatives are often small, and the consequences of this have been poorly investigated. Here we report on the performance of hybrids and their parents, transgenic oilseed rape (Brassica napus) and wild B. juncea, all grown from seeds sorted into three seed-size categories. The three seed-size categories were sorted by seed diameter for transgenic B. napus, wild B. juncea and their transgenic and non-transgenic hybrids. The seeds were sown in a field at various plant densities. Globally, small-seeded plants had delayed flowering, lower biomass, fewer flowers and seeds, and a lower thousand-seed weight. The seed-size effect varied among plant types but was not affected by plant density. There was no negative effect of seed size in hybrids, but it was correlated with reduced growth for both parents. Our results imply that the risk of further gene flow would probably not be mitigated by the small size of transgenic hybrid seeds. No fitness cost was detected to be associated with the Bt-transgene in this study.

  7. Development of Ocean-Bottom Seismograph in Taiwan

    NASA Astrophysics Data System (ADS)

    Chang, H.; Jang, J. P.; Chen, P.; Lin, C. R.; Kuo, B. Y.; Wang, C. C.; Kim, K. H.; Lin, P. P.

    2016-12-01

    Yardbird-20s, a type of Ocean-Bottom Seismograph (OBS), is fabricated by the Taiwan Ocean Research Institute (TORI), the Institute of Earth Science of Academia Sinica and the Institute of Undersea Technology of the National Sun Yat-Sen University in Taiwan. Yardbirds can be deployed to depths of up to 5000 m for up to 15 months. The total weight with anchor in air is about 170 kg. The rising and sinking rate is about 0.8 m/s. We utilized an ultra-low-power micro control unit (MCU) and an SD card to design the data logger. The sensors are three 4.5 Hz geophones whose lower frequency response has been extended to 20 s. The sensor module also includes a leveling system, designed around a dual-axis DC motor-driven module, that levels the vertical component to within 0.1 degree of the gravity vector. Yardbirds have been successfully deployed and recovered in several research cruises in Taiwan and Korea. In this study, we'll also display the data quality, power spectral density (PSD) calculations and probability density function (PDF) plots from the Yardbirds deployed and recovered in the East Sea near the south-east of Korea.

  8. Averaged kick maps: less noise, more signal…and probably less bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; Afonine, Pavel V.; Gunčar, Gregor

    2009-09-01

    Averaged kick maps are the sum of a series of individual kick maps, where each map is calculated from atomic coordinates modified by random shifts. These maps offer the possibility of an improved and less model-biased map interpretation. Use of reliable density maps is crucial for rapid and successful crystal structure determination. Here, the averaged kick (AK) map approach is investigated, its application is generalized and it is compared with other map-calculation methods. AK maps are the sum of a series of kick maps, where each kick map is calculated from atomic coordinates modified by random shifts. As such, they are a numerical analogue of maximum-likelihood maps. AK maps can be unweighted or maximum-likelihood (σ_A) weighted. Analysis shows that they are comparable and correspond better to the final model than σ_A and simulated-annealing maps. The AK maps were challenged by a difficult structure-validation case, in which they were able to clarify the problematic region in the density without the need for model rebuilding. The conclusion is that AK maps can be useful throughout the entire progress of crystal structure determination, offering the possibility of improved map interpretation.

  9. Epidemics in interconnected small-world networks.

    PubMed

    Liu, Meng; Li, Daqing; Qin, Pengju; Liu, Chaoran; Wang, Huijuan; Wang, Feilong

    2015-01-01

    Networks can be used to describe the interconnections among individuals, which play an important role in the spread of disease. Although the small-world effect has been found to have a significant impact on epidemics in single networks, the small-world effect on epidemics in interconnected networks has rarely been considered. Here, we study the susceptible-infected-susceptible (SIS) model of epidemic spreading in a system comprising two interconnected small-world networks. We find that the epidemic threshold in such networks decreases when the rewiring probability of the component small-world networks increases. When the infection rate is low, the rewiring probability affects the global steady-state infection density, whereas when the infection rate is high, the infection density is insensitive to the rewiring probability. Moreover, epidemics in interconnected small-world networks are found to spread at different velocities that depend on the rewiring probability.
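
    A quick way to see the reported behaviour is to simulate a discrete-time SIS process on two Watts-Strogatz networks joined by a handful of random inter-network links. The sketch below uses networkx; the network sizes, rewiring probability, infection and recovery rates, and number of interconnections are all illustrative choices, not the paper's parameters.

```python
# Toy SIS simulation on two interconnected Watts-Strogatz small-world networks;
# all sizes, rates and coupling values below are illustrative.
import random
import networkx as nx

random.seed(0)
n, k, p_rewire = 500, 6, 0.1
g1 = nx.watts_strogatz_graph(n, k, p_rewire, seed=1)
g2 = nx.watts_strogatz_graph(n, k, p_rewire, seed=2)
g2 = nx.relabel_nodes(g2, {i: i + n for i in range(n)})
g = nx.union(g1, g2)
# sparse random interconnections between the two networks
for _ in range(50):
    g.add_edge(random.randrange(n), n + random.randrange(n))

beta, mu, steps = 0.05, 0.2, 200            # per-contact infection and recovery rates
infected = set(random.sample(range(2 * n), 10))
for _ in range(steps):
    new_inf = {v for u in infected for v in g.neighbors(u)
               if v not in infected and random.random() < beta}
    recovered = {u for u in infected if random.random() < mu}
    infected = (infected - recovered) | new_inf
print("steady-state infection density ~", len(infected) / (2 * n))
```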

  10. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  11. Domestic wells have high probability of pumping septic tank leachate

    NASA Astrophysics Data System (ADS)

    Horn, J. E.; Harter, T.

    2011-06-01

    Onsite wastewater treatment systems such as septic systems are common in rural and semi-rural areas around the world; in the US, about 25-30 % of households are served by a septic system and a private drinking water well. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. Particularly in areas with small lots, thus a high septic system density, these typically shallow wells are prone to contamination by septic system leachate. Typically, mass balance approaches are used to determine a maximum septic system density that would prevent contamination of the aquifer. In this study, we estimate the probability of a well pumping partially septic system leachate. A detailed groundwater and transport model is used to calculate the capture zone of a typical drinking water well. A spatial probability analysis is performed to assess the probability that a capture zone overlaps with a septic system drainfield depending on aquifer properties, lot and drainfield size. We show that a high septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We conclude that mass balances calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances which experience limited attenuation, and those being harmful even in low concentrations.

  12. Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems

    NASA Technical Reports Server (NTRS)

    Holmes, J. K.; Woo, K. T.

    1978-01-01

    The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread spectrum system was considered which could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.

  13. RADC Multi-Dimensional Signal-Processing Research Program.

    DTIC Science & Technology

    1980-09-30

    [Fragmentary record: table-of-contents and figure residue.] Report sections include: Formulation; Methods of Accelerating Convergence; Application to Image Deblurring; Extensions; Convergence of Iterative Signal ... The image is modeled as a spatial linear filter driven by white noise (Fig. 1, "Model for image ..."); if the probability density function of the white noise is known, such noise-driven linear filters permit development of the joint probability density function, or likelihood function, for the image.

  14. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Leahy, D. A.

    2017-03-01

    Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields a SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5 × 10^51 erg, with a 1σ dispersion by a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.

  16. ENSURF: multi-model sea level forecast - implementation and validation results for the IBIROOS and Western Mediterranean regions

    NASA Astrophysics Data System (ADS)

    Pérez, B.; Brower, R.; Beckers, J.; Paradis, D.; Balseiro, C.; Lyons, K.; Cure, M.; Sotillo, M. G.; Hacket, B.; Verlaan, M.; Alvarez Fanjul, E.

    2011-04-01

    ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecast that makes use of existing storm surge or circulation models operational today in Europe, as well as near-real-time tide gauge data in the region, with the following main goals: - providing easy access to existing forecasts, as well as to their performance and model validation, by means of an adequate visualization tool - generation of better forecasts of sea level, including confidence intervals, by means of the Bayesian Model Average (BMA) technique. The system was developed and implemented within the ECOOP (C. No. 036355) European project for the NOOS and IBIROOS regions, based on the MATROOS visualization tool developed by Deltares. Both systems are today operational at Deltares and Puertos del Estado, respectively. The Bayesian Model Average technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecast PDFs; the weights represent the probability that a model will give the correct forecast PDF and are determined and updated operationally based on the performance of the models during a recent training period. This implies that the technique requires the availability of sea level data from tide gauges in near-real time. Results of validation of the different models and of the BMA implementation for the main harbours will be presented for the IBIROOS and Western Mediterranean regions, where this kind of activity is performed for the first time. The work has proved useful to detect problems in some of the circulation models not previously well calibrated with sea level data, to identify the differences between baroclinic and barotropic models for sea level applications, and to confirm the general improvement of the BMA forecasts.
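
    The BMA combination step itself is a weighted sum of per-model forecast PDFs. The sketch below assumes Gaussian component PDFs and invents the model forecasts, spreads, and weights; in an operational setting the weights and spreads come from the recent training period against tide gauge data.

```python
# Sketch of the BMA combination step: the ensemble forecast PDF is a weighted
# sum of per-model PDFs (here Gaussians centred on the individual forecasts).
# Model forecasts, weights and spreads are invented for illustration.
import numpy as np
from scipy.stats import norm

forecasts = np.array([0.42, 0.35, 0.50])   # sea level forecasts (m) from 3 models
sigmas = np.array([0.05, 0.08, 0.06])      # per-model spread from a training period
weights = np.array([0.5, 0.2, 0.3])        # BMA weights, learned in training

levels = np.linspace(0.0, 1.0, 501)
bma_pdf = sum(w * norm.pdf(levels, loc=f, scale=s)
              for w, f, s in zip(weights, forecasts, sigmas))

bma_mean = np.sum(weights * forecasts)
lo, hi = np.interp([0.05, 0.95],
                   np.cumsum(bma_pdf) / np.sum(bma_pdf), levels)
print(f"BMA mean = {bma_mean:.3f} m, 90% interval = [{lo:.3f}, {hi:.3f}] m")
```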

  17. Predictors of breeding site occupancy by amphibians in montane landscapes

    USGS Publications Warehouse

    Groff, Luke A.; Loftin, Cynthia S.; Calhoun, Aram J.K.

    2017-01-01

    Ecological relationships and processes vary across species' geographic distributions, life stages, and spatial and temporal scales. Montane landscapes are characterized by low wetland densities, rugged topographies, and cold climates. Consequently, aquatic-dependent and low-vagility ectothermic species (e.g., pool-breeding amphibians) may exhibit unique ecological associations in montane landscapes. We evaluated the relative importance of breeding- and landscape-scale features associated with spotted salamander (Ambystoma maculatum) and wood frog (Lithobates sylvaticus) wetland occupancy in Maine's Upper Montane-Alpine Zone ecoregion, and we determined whether models performed better when the inclusive landscape-scale covariates were estimated with topography-weighted or circular buffers. We surveyed 135 potential breeding sites during May 2013–June 2014 and evaluated environmental relationships with multi-season implicit dynamics occupancy models. Breeding site occupancy by both species was influenced solely by breeding-scale habitat features. Spotted salamander occupancy probabilities increased with previous or current beaver (Castor canadensis) presence, and models generally were better supported when the inclusive landscape-scale covariates were estimated with topography-weighted rather than circular buffers. Wood frog occupancy probabilities increased with site area and percent shallows, but neither buffer type was better supported than the other. Model rank order and support varied between buffer types, but model inferences did not. Our results suggest that pool-breeding amphibian conservation in montane Maine should include measures to maintain beaver populations and large wetlands with proportionally large areas of shallows ≤1 m deep. Inconsistencies between our study and previous studies substantiate the value of region-specific research for augmenting species' conservation management plans and suggest that the application of out-of-region inferences may promote ineffective conservation.

  18. Life history and virulence are linked in the ectoparasitic salmon louse Lepeophtheirus salmonis.

    PubMed

    Mennerat, A; Hamre, L; Ebert, D; Nilsen, F; Dávidová, M; Skorping, A

    2012-05-01

    Models of virulence evolution for horizontally transmitted parasites often assume that transmission rate (the probability that an infected host infects a susceptible host) and virulence (the increase in host mortality due to infection) are positively correlated, because higher rates of production of propagules may cause more damage to the host. However, empirical support for this assumption is scant and limited to microparasites. To fill this gap, we explored the relationships between parasite life history and virulence in the salmon louse, Lepeophtheirus salmonis, a horizontally transmitted copepod ectoparasite on Atlantic salmon Salmo salar. In the laboratory, we infected juvenile salmon hosts with equal doses of infective L. salmonis larvae and monitored parasite age at first reproduction, parasite fecundity, area of damage caused on the skin of the host, and host weight and length gain. We found that earlier onset of parasite reproduction was associated with higher parasite fecundity. Moreover, higher parasite fecundity (a proxy for transmission rate, as infection probability increases with higher numbers of parasite larvae released to the water) was associated with lower host weight gain (correlated with lower survival in juvenile salmon), supporting the presence of a virulence-transmission trade-off. Our results are relevant in the context of increasingly intensive farming, where frequent anti-parasite drug use and increased host density may have selected for faster production of parasite transmission stages, via earlier reproduction and increased early fecundity. Our study highlights that salmon lice, therefore, are a good model for studying how human activity may affect the evolution of parasite virulence. © 2012 The Authors. Journal of Evolutionary Biology © 2012 European Society For Evolutionary Biology.

  19. Probability density function of non-reactive solute concentration in heterogeneous porous formations

    Treesearch

    Alberto Bellin; Daniele Tonina

    2007-01-01

    Available models of solute transport in heterogeneous formations fall short of providing a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to filling this gap of knowledge is a probability distribution model for...

  20. Predictions of malaria vector distribution in Belize based on multispectral satellite data.

    PubMed

    Roberts, D R; Paris, J F; Manguin, S; Harbach, R E; Woodruff, R; Rejmankova, E; Polanco, J; Wullschleger, B; Legters, L J

    1996-03-01

    Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.

  1. Predictions of malaria vector distribution in Belize based on multispectral satellite data

    NASA Technical Reports Server (NTRS)

    Roberts, D. R.; Paris, J. F.; Manguin, S.; Harbach, R. E.; Woodruff, R.; Rejmankova, E.; Polanco, J.; Wullschleger, B.; Legters, L. J.

    1996-01-01

    Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.

  2. Natal and breeding philopatry in a black brant, Branta bernicla nigricans, metapopulation

    USGS Publications Warehouse

    Lindberg, Mark S.; Sedinger, James S.; Derksen, Dirk V.; Rockwell, Robert F.

    1998-01-01

    We estimated natal and breeding philopatry and dispersal probabilities for a metapopulation of Black Brant (Branta bernicla nigricans) based on observations of marked birds at six breeding colonies in Alaska, 1986–1994. Both adult females and males exhibited high (>0.90) probability of philopatry to breeding colonies. Probability of natal philopatry was significantly higher for females than males. Natal dispersal of males was recorded between every pair of colonies, whereas natal dispersal of females was observed between only half of the colony pairs. We suggest that female-biased philopatry was the result of timing of pair formation and characteristics of the mating system of brant, rather than factors related to inbreeding avoidance or optimal discrepancy. Probability of natal philopatry of females increased with age but declined with year of banding. Age-related increase in natal philopatry was positively related to higher breeding probability of older females. Declines in natal philopatry with year of banding corresponded negatively to a period of increasing population density; therefore, local population density may influence the probability of nonbreeding and gene flow among colonies.

  3. It's Not All About Calories

    Cancer.gov

    Losing weight is about balancing calories in (food and drink) with calories out (exercise). Sounds simple, right? But if it were that simple, you and the millions of other women struggling with their weight probably would have figured it out.

  4. Modeling the solute transport by particle-tracing method with variable weights

    NASA Astrophysics Data System (ADS)

    Jiang, J.

    2016-12-01

    The particle-tracing method is usually used to simulate solute transport in fracture media. In this method, the concentration at a point is proportional to the number of particles visiting that point. However, the method is rather inefficient at points with small concentration: few particles visit them, which leads to violent oscillation or gives a zero value of concentration. In this paper, we propose a particle-tracing method with variable weights, in which the concentration at a point is proportional to the sum of the weights of the particles visiting it. The weight factors are adjusted during the simulation according to the estimated probabilities of the corresponding walks. If the weight W of a tracking particle is larger than the relative concentration C at the corresponding site, the tracking particle is split into Int(W/C) copies and each copy is simulated independently with weight W/Int(W/C). If the weight W of a tracking particle is less than the relative concentration C at the corresponding site, the tracking particle is continually tracked with probability W/C and its weight is adjusted to C. By adjusting weights, the number of visiting particles is distributed evenly over the whole range. Through this variable-weights scheme, we can eliminate the violent oscillation and increase the accuracy by orders of magnitude.
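    A minimal sketch of the splitting/termination rule described above; the surrounding random-walk and concentration fields are not shown, only the weight-adjustment logic is illustrated.

      # Illustrative weight adjustment for a tracking particle at a site where the
      # relative concentration is C (names and values are hypothetical).
      import numpy as np

      rng = np.random.default_rng(1)

      def adjust_weight(weight, local_concentration):
          """Split or probabilistically keep a particle so its weight tracks C."""
          C = local_concentration
          if weight > C:
              n_copies = int(weight // C)          # Int(W/C) copies
              return [weight / n_copies] * n_copies  # each carries W/Int(W/C)
          # W <= C: keep the particle with probability W/C, weight reset to C
          if rng.random() < weight / C:
              return [C]
          return []                                # particle terminated

      print(adjust_weight(1.0, 0.3))   # -> three copies of weight ~0.333
      print(adjust_weight(0.1, 0.5))   # -> [] or [0.5]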

  5. State-dependent biasing method for importance sampling in the weighted stochastic simulation algorithm.

    PubMed

    Roh, Min K; Gillespie, Dan T; Petzold, Linda R

    2010-11-07

    The weighted stochastic simulation algorithm (wSSA) was developed by Kuwahara and Mura [J. Chem. Phys. 129, 165101 (2008)] to efficiently estimate the probabilities of rare events in discrete stochastic systems. The wSSA uses importance sampling to enhance the statistical accuracy in the estimation of the probability of the rare event. The original algorithm biases the reaction selection step with a fixed importance sampling parameter. In this paper, we introduce a novel method where the biasing parameter is state-dependent. The new method features improved accuracy, efficiency, and robustness.

  6. Understanding PSA and its derivatives in prediction of tumor volume: addressing health disparities in prostate cancer risk stratification

    PubMed Central

    Chinea, Felix M; Lyapichev, Kirill; Epstein, Jonathan I; Kwon, Deukwoo; Smith, Paul Taylor; Pollack, Alan; Cote, Richard J; Kryvenko, Oleksandr N

    2017-01-01

    Objectives To address health disparities in risk stratification of U.S. Hispanic/Latino men by characterizing influences of prostate weight, body mass index, and race/ethnicity on the correlation of PSA derivatives with Gleason score 6 (Grade Group 1) tumor volume in a diverse cohort. Results Using published PSA density and PSA mass density cutoff values, men with higher body mass indices and prostate weights were less likely to have a tumor volume <0.5 cm3. Variability across race/ethnicity was found in the univariable analysis for all PSA derivatives when predicting for tumor volume. In receiver operator characteristic analysis, area under the curve values for all PSA derivatives varied across race/ethnicity with lower optimal cutoff values for Hispanic/Latino (PSA=2.79, PSA density=0.06, PSA mass=0.37, PSA mass density=0.011) and Non-Hispanic Black (PSA=3.75, PSA density=0.07, PSA mass=0.46, PSA mass density=0.008) compared to Non-Hispanic White men (PSA=4.20, PSA density=0.11, PSA mass=0.53, PSA mass density=0.014). Materials and Methods We retrospectively analyzed 589 patients with low-risk prostate cancer at radical prostatectomy. Pre-operative PSA, patient height, body weight, and prostate weight were used to calculate all PSA derivatives. Receiver operating characteristic curves were constructed for each PSA derivative per racial/ethnic group to establish optimal cutoff values predicting for tumor volume ≥0.5 cm3. Conclusions Increasing prostate weight and body mass index negatively influence PSA derivatives for predicting tumor volume. PSA derivatives’ ability to predict tumor volume varies significantly across race/ethnicity. Hispanic/Latino and Non-Hispanic Black men have lower optimal cutoff values for all PSA derivatives, which may impact risk assessment for prostate cancer. PMID:28160549

  7. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
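    As a hedged illustration of the first estimator discussed, a Gaussian kernel density estimate with one automatic choice of scaling factor (Silverman's rule of thumb, assumed here rather than taken from the paper) can be written compactly:

      # Gaussian kernel density estimate with a data-driven bandwidth (illustrative).
      import numpy as np

      def kernel_density(sample, grid, bandwidth=None):
          """Evaluate a Gaussian-kernel density estimate of `sample` on `grid`."""
          sample = np.asarray(sample, dtype=float)
          n = sample.size
          if bandwidth is None:
              # Silverman's rule of thumb as an automatic scaling factor.
              bandwidth = 1.06 * sample.std(ddof=1) * n ** (-1 / 5)
          u = (grid[:, None] - sample[None, :]) / bandwidth
          return np.exp(-0.5 * u**2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))

      rng = np.random.default_rng(0)
      data = rng.normal(size=200)
      grid = np.linspace(-4, 4, 101)
      pdf_hat = kernel_density(data, grid)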

  8. Long Range Materials Research

    DTIC Science & Technology

    1976-01-01

    the "weight In water -weight In air" method. The compact Is first weighed dry and then weighed while Immersed completely In water . The density was...calculated from the following: (Wdry) ^ compact W. - W dry wet (1) where P^-o ^8 ’^ density of water corrected for temperature. The weight wet...PPh9) ’"NXV 2 (4) ’ X CH20TS + EtO e These bonds are easily cleaved by water and alcohols . Thererore, it should be possible to

  9. Correlation between CT numbers and tissue parameters needed for Monte Carlo simulations of clinical dose distributions

    NASA Astrophysics Data System (ADS)

    Schneider, Wilfried; Bortfeld, Thomas; Schlegel, Wolfgang

    2000-02-01

    We describe a new method to convert CT numbers into mass density and elemental weights of tissues required as input for dose calculations with Monte Carlo codes such as EGS4. As a first step, we calculate the CT numbers for 71 human tissues. To reduce the effort for the necessary fits of the CT numbers to mass density and elemental weights, we establish four sections on the CT number scale, each confined by selected tissues. Within each section, the mass density and elemental weights of the selected tissues are interpolated. For this purpose, functional relationships between the CT number and each of the tissue parameters, valid for media which are composed of only two components in varying proportions, are derived. Compared with conventional data fits, no loss of accuracy has to be accepted when using the interpolation functions. Assuming plausible values for the deviations of calculated and measured CT numbers, the mass density can be determined with an accuracy better than 0.04 g cm^-3. The weights of phosphorus and calcium can be determined with maximum uncertainties of 1 or 2.3 percentage points (pp), respectively. Similar values can be achieved for hydrogen (0.8 pp) and nitrogen (3 pp). For carbon and oxygen weights, errors up to 14 pp can occur. The influence of the elemental weights on the results of Monte Carlo dose calculations is investigated and discussed.
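    A hedged sketch of the interpolation idea within one CT-number section, assuming two boundary tissues with placeholder CT numbers, densities, and elemental weights (the paper derives the exact two-component relations; the weighting by partial densities below is a common form of them):

      # Interpolate mass density and one elemental weight between the two tissues
      # bounding a CT-number section; all boundary values are placeholders.
      def interpolate_tissue(H, tissue_lo, tissue_hi):
          H1, rho1, w1 = tissue_lo      # CT number, mass density (g/cm^3), elemental weight
          H2, rho2, w2 = tissue_hi
          t = (H - H1) / (H2 - H1)
          rho = rho1 + t * (rho2 - rho1)            # density varies ~linearly with H
          # Elemental weight of a two-component mixture, weighted by partial densities.
          m1 = rho1 * (H2 - H)
          m2 = rho2 * (H - H1)
          w = (m1 * w1 + m2 * w2) / (m1 + m2)
          return rho, w

      # Example with placeholder carbon weights between two soft tissues.
      print(interpolate_tissue(0.0, (-100.0, 0.95, 0.60), (50.0, 1.06, 0.11)))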

  10. Stochastic transport models for mixing in variable-density turbulence

    NASA Astrophysics Data System (ADS)

    Bakosi, J.; Ristorcelli, J. R.

    2011-11-01

    In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.

  11. Uncertainty quantification of voice signal production mechanical model and experimental updating

    NASA Astrophysics Data System (ADS)

    Cataldo, E.; Soize, C.; Sampaio, R.

    2013-11-01

    The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for the changing of the fundamental frequency of a voice signal, generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal and the Monte Carlo method is used to solve the stochastic equations allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important for two main reasons. First, it makes it possible to update the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly. Second, it concerns the construction of the likelihood function: in general the likelihood is predefined using a known pdf, whereas here it is constructed in a new and different manner, using the system model itself.
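    A generic grid-based sketch of the Bayes update step, with an assumed forward map from tension to fundamental frequency and made-up numbers (the paper constructs its likelihood from the system model itself, not from the Gaussian form used here):

      # Grid-based Bayes update of a tension-like parameter q (illustrative only).
      import numpy as np

      q = np.linspace(0.5, 2.0, 300)               # tension parameter grid
      prior = np.ones_like(q) / np.ptp(q)          # uniform prior, as in the paper

      def f0_model(q):
          return 120.0 * np.sqrt(q)                # hypothetical frequency map (Hz)

      f0_obs, sigma_f = 150.0, 5.0                 # "measured" fundamental frequency
      likelihood = np.exp(-0.5 * ((f0_obs - f0_model(q)) / sigma_f) ** 2)

      posterior = prior * likelihood
      dq = q[1] - q[0]
      posterior /= np.sum(posterior) * dq          # normalize to a density
      print("posterior mean of tension parameter:", np.sum(q * posterior) * dq)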

  12. Image-Based Modeling Reveals Dynamic Redistribution of DNA Damage into Nuclear Sub-Domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costes, Sylvain V.; Ponomarev, Artem; Chen, James L.; Nguyen, David; Cucinotta, Francis A.

    2007-08-03

    Several proteins involved in the response to DNA double-strand breaks (DSB) form microscopically visible nuclear domains, or foci, after exposure to ionizing radiation. Radiation-induced foci (RIF) are believed to be located where DNA damage occurs. To test this assumption, we analyzed the spatial distribution of 53BP1, phosphorylated ATM, and gammaH2AX RIF in cells irradiated with high linear energy transfer (LET) radiation and low LET. Since energy is randomly deposited along high-LET particle paths, RIF along these paths should also be randomly distributed. The probability to induce DSB can be derived from DNA fragment data measured experimentally by pulsed-field gel electrophoresis. We used this probability in Monte Carlo simulations to predict DSB locations in synthetic nuclei geometrically described by a complete set of human chromosomes, taking into account microscope optics from real experiments. As expected, simulations produced DNA-weighted random (Poisson) distributions. In contrast, the distributions of RIF obtained as early as 5 min after exposure to high LET (1 GeV/amu Fe) were non-random. This deviation from the expected DNA-weighted random pattern can be further characterized by "relative DNA image measurements." This novel imaging approach shows that RIF were located preferentially at the interface between high and low DNA density regions, and were more frequent than predicted in regions with lower DNA density. The same preferential nuclear location was also measured for RIF induced by 1 Gy of low-LET radiation. This deviation from random behavior was evident only 5 min after irradiation for phosphorylated ATM RIF, while gammaH2AX and 53BP1 RIF showed pronounced deviations up to 30 min after exposure. These data suggest that DNA damage induced foci are restricted to certain regions of the nucleus of human epithelial cells. It is possible that DNA lesions are collected in these nuclear sub-domains for more efficient repair.

  13. Estimating inverse probability weights using super learner when weight-model specification is unknown in a marginal structural Cox model context.

    PubMed

    Karim, Mohammad Ehsanul; Platt, Robert W

    2017-06-15

    Correct specification of the inverse probability weighting (IPW) model is necessary for consistent inference from a marginal structural Cox model (MSCM). In practical applications, researchers are typically unaware of the true specification of the weight model. Nonetheless, IPWs are commonly estimated using parametric models, such as the main-effects logistic regression model. In practice, assumptions underlying such models may not hold and data-adaptive statistical learning methods may provide an alternative. Many candidate statistical learning approaches are available in the literature. However, the optimal approach for a given dataset is impossible to predict. Super learner (SL) has been proposed as a tool for selecting an optimal learner from a set of candidates using cross-validation. In this study, we evaluate the usefulness of SL in estimating IPW in four different MSCM simulation scenarios, in which we varied the true weight model specification (linear and/or additive). Our simulations show that, in the presence of weight model misspecification, with a rich and diverse set of candidate algorithms, SL can generally offer a better alternative to the commonly used statistical learning approaches in terms of MSE as well as the coverage probabilities of the estimated effect in an MSCM. The findings from the simulation studies guided the application of the MSCM in a multiple sclerosis cohort from British Columbia, Canada (1995-2008), to estimate the impact of beta-interferon treatment in delaying disability progression. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Inverse probability weighting in STI/HIV prevention research: methods for evaluating social and community interventions

    PubMed Central

    Lippman, Sheri A.; Shade, Starley B.; Hubbard, Alan E.

    2011-01-01

    Background Intervention effects estimated from non-randomized intervention studies are plagued by biases, yet social or structural intervention studies are rarely randomized. There are underutilized statistical methods available to mitigate biases due to self-selection, missing data, and confounding in longitudinal, observational data permitting estimation of causal effects. We demonstrate the use of Inverse Probability Weighting (IPW) to evaluate the effect of participating in a combined clinical and social STI/HIV prevention intervention on reduction of incident chlamydia and gonorrhea infections among sex workers in Brazil. Methods We demonstrate the step-by-step use of IPW, including presentation of the theoretical background, data set up, model selection for weighting, application of weights, estimation of effects using varied modeling procedures, and discussion of assumptions for use of IPW. Results 420 sex workers contributed data on 840 incident chlamydia and gonorrhea infections. Participators were compared to non-participators following application of inverse probability weights to correct for differences in covariate patterns between exposed and unexposed participants and between those who remained in the intervention and those who were lost-to-follow-up. Estimators using four model selection procedures provided estimates of intervention effect between odds ratio (OR) 0.43 (95% CI: 0.22-0.85) and 0.53 (95% CI: 0.26-1.1). Conclusions After correcting for selection bias, loss-to-follow-up, and confounding, our analysis suggests a protective effect of participating in the Encontros intervention. Evaluations of behavioral, social, and multi-level interventions to prevent STI can benefit by introduction of weighting methods such as IPW. PMID:20375927
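    A minimal sketch of the weighting step in the generic IPW workflow described above; the covariates, participation model, and data below are hypothetical and are not the study's actual specification.

      # Stabilized inverse probability weights from a logistic participation model,
      # followed by a weighted comparison of outcome risk (all data simulated).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 500
      age = rng.normal(30, 8, n)
      baseline_risk = rng.uniform(0, 1, n)
      X = np.column_stack([age, baseline_risk])

      participated = rng.binomial(1, 1 / (1 + np.exp(-(0.05 * (age - 30)))))
      infection = rng.binomial(1, np.clip(0.35 - 0.10 * participated, 0, 1))

      # 1. Probability of participation given covariates (the "weight model").
      ps = LogisticRegression().fit(X, participated).predict_proba(X)[:, 1]

      # 2. Stabilized weights: marginal probability over conditional probability.
      p_marg = participated.mean()
      ipw = np.where(participated == 1, p_marg / ps, (1 - p_marg) / (1 - ps))

      # 3. Weighted comparison of infection risk between groups.
      risk1 = np.average(infection[participated == 1], weights=ipw[participated == 1])
      risk0 = np.average(infection[participated == 0], weights=ipw[participated == 0])
      print("IPW risk difference:", risk1 - risk0)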

  15. Linking landscape characteristics to local grizzly bear abundance using multiple detection methods in a hierarchical model

    USGS Publications Warehouse

    Graves, T.A.; Kendall, Katherine C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.

    2011-01-01

    Few studies link habitat to grizzly bear Ursus arctos abundance and these have not accounted for the variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends the existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; (5) weights to identify most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and with more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance; (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system.

  16. The Evaluation of Bias of the Weighted Random Effects Model Estimators. Research Report. ETS RR-11-13

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    Estimation of parameters of random effects models from samples collected via complex multistage designs is considered. One way to reduce estimation bias due to unequal probabilities of selection is to incorporate sampling weights. Many researchers have proposed various weighting methods (Korn & Graubard, 2003; Pfeffermann, Skinner,…

  17. Attenuated associations between increasing BMI and unfavorable lipid profiles in Chinese Buddhist vegetarians.

    PubMed

    Zhang, Hui-Jie; Han, Peng; Sun, Su-Yun; Wang, Li-Ying; Yan, Bing; Zhang, Jin-Hua; Zhang, Wei; Yang, Shu-Yu; Li, Xue-Jun

    2013-01-01

    Obesity is related to hyperlipidemia and risk of cardiovascular disease. The health benefits of vegetarian diets have been well documented in Western countries, where both obesity and hyperlipidemia are prevalent. We studied the association between BMI and various lipid/lipoprotein measures, as well as between BMI and predicted coronary heart disease probability, in lean, low-risk populations in Southern China. The study included 170 Buddhist monks (vegetarians) and 126 omnivore men. Interaction between BMI and vegetarian status was tested in the multivariable regression analysis adjusting for age, education, smoking, alcohol drinking, and physical activity. Compared with omnivores, vegetarians had significantly lower mean BMI, blood pressures, total cholesterol, low density lipoprotein cholesterol, high density lipoprotein cholesterol, total cholesterol to high density lipoprotein ratio, triglycerides, apolipoprotein B and A-I, as well as lower predicted probability of coronary heart disease. Higher BMI was associated with an unfavorable lipid/lipoprotein profile and predicted probability of coronary heart disease in both vegetarians and omnivores. However, the associations were significantly diminished in Buddhist vegetarians. Vegetarian diets not only lower BMI, but also attenuate the BMI-related increases of atherogenic lipid/lipoprotein and the probability of coronary heart disease.

  18. A Partially-Stirred Batch Reactor Model for Under-Ventilated Fire Dynamics

    NASA Astrophysics Data System (ADS)

    McDermott, Randall; Weinschenk, Craig

    2013-11-01

    A simple discrete quadrature method is developed for closure of the mean chemical source term in large-eddy simulations (LES) and implemented in the publicly available fire model, Fire Dynamics Simulator (FDS). The method is cast as a partially-stirred batch reactor model for each computational cell. The model has three distinct components: (1) a subgrid mixing environment, (2) a mixing model, and (3) a set of chemical rate laws. The subgrid probability density function (PDF) is described by a linear combination of Dirac delta functions with quadrature weights set to satisfy simple integral constraints for the computational cell. It is shown that under certain limiting assumptions, the present method reduces to the eddy dissipation concept (EDC). The model is used to predict carbon monoxide concentrations in direct numerical simulation (DNS) of a methane slot burner and in LES of an under-ventilated compartment fire.
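    A hedged sketch of the delta-function closure idea, with an illustrative rate law and mixture-fraction values (not the FDS implementation): the cell PDF is two Dirac deltas whose weights satisfy the normalization and mean constraints, and the mean chemical source term is the weighted sum of the rate law at the delta locations.

      # Two-delta subgrid PDF: P(z) = w1*delta(z - z1) + w2*delta(z - z2),
      # with w1 + w2 = 1 and w1*z1 + w2*z2 = z_mean (illustrative constraints).
      import numpy as np

      def mean_source_term(z_mean, z1, z2, rate_law):
          """Mean reaction rate implied by the two-delta quadrature PDF."""
          w1 = (z2 - z_mean) / (z2 - z1)
          w2 = 1.0 - w1
          return w1 * rate_law(z1) + w2 * rate_law(z2)

      # Illustrative Arrhenius-like rate law peaking at intermediate mixture fraction.
      rate = lambda z: z * (1.0 - z) * np.exp(-2.0 / max(z, 1e-6))
      print(mean_source_term(0.3, 0.05, 0.65, rate))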

  19. Testing the criterion for correct convergence in the complex Langevin method

    NASA Astrophysics Data System (ADS)

    Nagata, Keitaro; Nishimura, Jun; Shimasaki, Shinji

    2018-05-01

    Recently the complex Langevin method (CLM) has been attracting attention as a solution to the sign problem, which occurs in Monte Carlo calculations when the effective Boltzmann weight is not real positive. An undesirable feature of the method, however, was that it can happen in some parameter regions that the method yields wrong results even if the Langevin process reaches equilibrium without any problem. In our previous work, we proposed a practical criterion for correct convergence based on the probability distribution of the drift term that appears in the complex Langevin equation. Here we demonstrate the usefulness of this criterion in two solvable theories with many dynamical degrees of freedom, i.e., two-dimensional Yang-Mills theory with a complex coupling constant and the chiral Random Matrix Theory for finite density QCD, which were studied by the CLM before. Our criterion can indeed tell the parameter regions in which the CLM gives correct results.

  20. International Space Station (ISS) Meteoroid/Orbital Debris Shielding

    NASA Technical Reports Server (NTRS)

    Christiansen, Eric L.

    1999-01-01

    Design practices to provide protection for International Space Station (ISS) crew and critical equipment from meteoroid and orbital debris (M/OD) impacts have been developed. Damage modes and failure criteria are defined for each spacecraft system. Hypervelocity impact tests and analyses are used to develop ballistic limit equations (BLEs) for each exposed spacecraft system. BLEs define impact particle sizes that result in threshold failure of a particular spacecraft system as a function of impact velocity, angle and particle density. The BUMPER computer code is used to determine the probability of no penetration (PNP) of the spacecraft shielding based on NASA standard meteoroid/debris models, a spacecraft geometry model, and the BLEs. BUMPER results are used to verify spacecraft shielding requirements. Low-weight, high-performance shielding alternatives have been developed at the NASA Johnson Space Center (JSC) Hypervelocity Impact Technology Facility (HITF) to meet spacecraft protection requirements.

  1. Quantification of brain tissue through incorporation of partial volume effects

    NASA Astrophysics Data System (ADS)

    Gage, Howard D.; Santago, Peter, II; Snyder, Wesley E.

    1992-06-01

    This research addresses the problem of automatically quantifying the various types of brain tissue, CSF, white matter, and gray matter, using T1-weighted magnetic resonance images. The method employs a statistical model of the noise and partial volume effect and fits the derived probability density function to that of the data. Following this fit, the optimal decision points can be found for the materials and thus they can be quantified. Emphasis is placed on repeatable results for which a confidence in the solution might be measured. Results are presented assuming a single Gaussian noise source and a uniform distribution of partial volume pixels for both simulated and actual data. Thus far results have been mixed, with no clear advantage being shown in taking into account partial volume effects. Due to the fitting problem being ill-conditioned, it is not yet clear whether these results are due to problems with the model or the method of solution.

  2. Modulation Based on Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
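    A minimal numerical illustration of the sampling-and-histogramming idea, with an assumed sampling rate, carrier frequency, and bin count: samples taken over at least one half cycle are binned by value to form a histogram approximating the waveform's PDF over that interval.

      # Histogram-based PDF of a sampled sinusoidal carrier (parameters assumed).
      import numpy as np

      fs = 10_000.0                        # sampling rate (Hz)
      f0 = 100.0                           # carrier frequency (Hz)
      t = np.arange(0, 0.5 / f0, 1 / fs)   # one half cycle of the carrier
      samples = np.sin(2 * np.pi * f0 * t)

      # Sorting samples by frequency of occurrence: a normalized histogram
      # approximates the waveform's PDF over this time interval.
      pdf, edges = np.histogram(samples, bins=20, density=True)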

  3. A partial differential equation for pseudocontact shift.

    PubMed

    Charnock, G T P; Kuprov, Ilya

    2014-10-07

    It is demonstrated that pseudocontact shift (PCS), viewed as a scalar or a tensor field in three dimensions, obeys an elliptic partial differential equation with a source term that depends on the Hessian of the unpaired electron probability density. The equation enables straightforward PCS prediction and analysis in systems with delocalized unpaired electrons, particularly for the nuclei located in their immediate vicinity. It is also shown that the probability density of the unpaired electron may be extracted, using a regularization procedure, from PCS data.

  4. Probability density cloud as a geometrical tool to describe statistics of scattered light.

    PubMed

    Yaitskova, Natalia

    2017-04-01

    First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.

  5. Precipitation Cluster Distributions: Current Climate Storm Statistics and Projected Changes Under Global Warming

    NASA Astrophysics Data System (ADS)

    Quinn, Kevin Martin

    The total amount of precipitation integrated across a precipitation cluster (contiguous precipitating grid cells exceeding a minimum rain rate) is a useful measure of the aggregate size of the disturbance, expressed as the rate of water mass lost or latent heat released, i.e. the power of the disturbance. Probability distributions of cluster power are examined during boreal summer (May-September) and winter (January-March) using satellite-retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) 3B42 and Special Sensor Microwave Imager and Sounder (SSM/I and SSMIS) programs, model output from the High Resolution Atmospheric Model (HIRAM, roughly 0.25-0.5° resolution), seven 1-2° resolution members of the Coupled Model Intercomparison Project Phase 5 (CMIP5) experiment, and National Center for Atmospheric Research Large Ensemble (NCAR LENS). Spatial distributions of precipitation-weighted centroids are also investigated in observations (TRMM-3B42) and climate models during winter as a metric for changes in mid-latitude storm tracks. Observed probability distributions for both seasons are scale-free from the smallest clusters up to a cutoff scale at high cluster power, after which the probability density drops rapidly. When low rain rates are excluded by choosing a minimum rain rate threshold in defining clusters, the models accurately reproduce observed cluster power statistics and winter storm tracks. Changes in behavior in the tail of the distribution, above the cutoff, are important for impacts since these quantify the frequency of the most powerful storms. End-of-century cluster power distributions and storm track locations are investigated in these models under a "business as usual" global warming scenario. The probability of high cluster power events increases by end-of-century across all models, by up to an order of magnitude for the highest-power events for which statistics can be computed. For the three models in the suite with continuous time series of high resolution output, there is substantial variability in when these probability increases for the most powerful precipitation clusters become detectable, ranging from detectable within the observational period to statistically significant trends emerging only after 2050. A similar analysis of National Centers for Environmental Prediction (NCEP) Reanalysis 2 and SSM/I-SSMIS rain rate retrievals in the recent observational record does not yield reliable evidence of trends in high-power cluster probabilities at this time. Large impacts to mid-latitude storm tracks are projected over the West Coast and eastern North America, with no less than 8 of the 9 models examined showing large increases by end-of-century in the probability density of the most powerful storms, ranging up to a factor of 6.5 in the highest range bin for which historical statistics are computed. However, within these regional domains, there is considerable variation among models in pinpointing exactly where the largest increases will occur.

  6. Mechanical and thermal properties of high density polyethylene – dried distillers grains with solubles composites

    USDA-ARS?s Scientific Manuscript database

    Dried Distillers Grain with Solubles (DDGS) is evaluated as a bio-based fiber reinforcement. Injection molded composites of high density polyethylene (HDPE), 25% by weight of DDGS, and either 5% or 0% by weight of maleated polyethylene (MAPE) were produced by twin screw compounding and injection mo...

  7. Properties of high density polyethylene – Paulownia wood flour composites via injection molding

    USDA-ARS?s Scientific Manuscript database

    Paulownia wood (PW) flour is evaluated as a bio-based fiber reinforcement. Composites of high density polyethylene (HDPE), 25% by weight of PW, and either 0% or 5% by weight of maleated polyethylene (MAPE) were produced by twin screw compounding followed by injection molding. Molded test composite...

  8. Mechanical properties of high density polyethylene--pennycress press cake composites

    USDA-ARS?s Scientific Manuscript database

    Pennycress press cake (PPC) is evaluated as a bio-based fiber reinforcement. PPC is a by-product of crop seed oil extraction. Composites with a high density polyethylene (HDPE) matrix are created by twin screw compounding of 25% by weight of PPC and either 0% or 5% by weight of maleated polyethyle...

  9. Dynamic analysis of pedestrian crossing behaviors on traffic flow at unsignalized mid-block crosswalks

    NASA Astrophysics Data System (ADS)

    Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping

    2015-05-01

    It is important to study the effects of pedestrian crossing behaviors on traffic flow for solving the urban traffic jam problem. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed considering the uncertainty conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defining the parallel updating rules of motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that no matter what the values of vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability and pedestrian generation probability, the system flow shows the "increasing-saturating-decreasing" trend with the increase of vehicle density; when the vehicle braking probability is lower, it is easy to cause an emergency brake of vehicle and result in great fluctuation of saturated flow; the saturated flow decreases slightly with the increase of the pedestrian acceleration crossing probability; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, which shows the hesitant behavior of pedestrians when making the decision of backing; the maximum flow is sensitive to the pedestrian generation probability and rapidly decreases with increasing the pedestrian generation probability, the maximum flow is approximately equal to zero when the probability is more than 0.5. The simulations prove that the influence of frequent crossing behavior upon vehicle flow is immense; the vehicle flow decreases and gets into serious congestion state rapidly with the increase of the pedestrian generation probability.

  10. Computation of the Complex Probability Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trainer, Amelia Jo; Ledwith, Patrick John

    The complex probability function is important in many areas of physics and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the nth-degree Hermite polynomial and the corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements on the Gauss-Hermite quadrature for the complex probability function.
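    A short sketch of the quadrature idea (node count chosen arbitrarily; accuracy degrades for z near the real axis, one of the shortcomings the report discusses): writing w(z) = (i/π) ∫ exp(-t²)/(z - t) dt for Im(z) > 0, the Gauss-Hermite rule replaces the integral with a weighted sum over the Hermite roots.

      # Gauss-Hermite approximation of the complex probability (Faddeeva) function.
      import numpy as np
      from numpy.polynomial.hermite import hermgauss

      def complex_probability(z, n=32):
          """Approximate w(z) = (i/pi) * integral exp(-t^2)/(z - t) dt, Im(z) > 0."""
          nodes, weights = hermgauss(n)           # roots and weights of H_n
          return 1j / np.pi * np.sum(weights / (z - nodes))

      # Sanity check against the known value w(i) = e * erfc(1) ≈ 0.4276.
      print(complex_probability(1j))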

  11. Egg breakfast enhances weight loss

    PubMed Central

    Vander Wal, JS; Gupta, A; Khosla, P; Dhurandhar, NV

    2009-01-01

    Objective To test the hypotheses that an egg breakfast, in contrast to a bagel breakfast matched for energy density and total energy, would enhance weight loss in overweight and obese participants while on a reduced-calorie weight loss diet. Subjects Men and women (n=152), age 25–60 years, body mass index (BMI) ≥25 and ≤50 kg m−2. Design Otherwise healthy overweight or obese participants were assigned to Egg (E), Egg Diet (ED), Bagel (B) or Bagel Diet (BD) groups, based on the prescription of either an egg breakfast containing two eggs (340 kcal) or a breakfast containing bagels matched for energy density and total energy, for at least 5 days per week, respectively. The ED and BD groups were suggested a 1000 kcal energy-deficit low-fat diet, whereas the B and E groups were asked not to change their energy intake. Results After 8 weeks, in comparison to the BD group, the ED group showed a 61% greater reduction in BMI (−0.95±0.82 vs −0.59±0.85, P<0.05), a 65% greater weight loss (−2.63±2.33 vs −1.59±2.38 kg, P<0.05), a 34% greater reduction in waist circumference (P<0.06) and a 16% greater reduction in percent body fat (P=not significant). No significant differences between the E and B groups on the aforementioned variables were obtained. Further, total cholesterol, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol and triglycerides, did not differ between the groups. Conclusions The egg breakfast enhances weight loss, when combined with an energy-deficit diet, but does not induce weight loss in a free-living condition. The inclusion of eggs in a weight management program may offer a nutritious supplement to enhance weight loss. PMID:18679412

  12. Effects of Rapid Weight Loss on Systemic and Adipose Tissue Inflammation and Metabolism in Obese Postmenopausal Women.

    PubMed

    Alemán, José O; Iyengar, Neil M; Walker, Jeanne M; Milne, Ginger L; Da Rosa, Joel Correa; Liang, Yupu; Giri, Dilip D; Zhou, Xi Kathy; Pollak, Michael N; Hudis, Clifford A; Breslow, Jan L; Holt, Peter R; Dannenberg, Andrew J

    2017-06-01

    Obesity is associated with subclinical white adipose tissue inflammation, as defined by the presence of crown-like structures (CLSs) consisting of dead or dying adipocytes encircled by macrophages. In humans, bariatric surgery-induced weight loss leads to a decrease in CLSs, but the effects of rapid diet-induced weight loss on CLSs and metabolism are unclear. To determine the effects of rapid very-low-calorie diet-induced weight loss on CLS density, systemic biomarkers of inflammation, and metabolism in obese postmenopausal women. Prospective cohort study. Rockefeller University Hospital, New York, NY. Ten obese, postmenopausal women with a mean age of 60.6 years (standard deviation, ±3.6 years). Effects on CLS density and gene expression in abdominal subcutaneous adipose tissue, cardiometabolic risk factors, white blood count, circulating metabolites, and oxidative stress (urinary isoprostane-M) were measured. Obese subjects lost approximately 10% body weight over a mean of 46 days. CLS density increased in subcutaneous adipose tissue without an associated increase in proinflammatory gene expression. Weight loss was accompanied by decreased fasting blood levels of high-sensitivity C-reactive protein, glucose, lactate, and kynurenine, and increased circulating levels of free fatty acids, glycerol, β-hydroxybutyrate, and 25 hydroxyvitamin D. Levels of urinary isoprostane-M declined. Rapid weight loss stimulated lipolysis and an increase in CLS density in subcutaneous adipose tissue in association with changes in levels of circulating metabolites, and improved systemic biomarkers of inflammation and insulin resistance. The observed change in levels of metabolites (i.e., lactate, β-hydroxybutyrate, 25 hydroxyvitamin D) may contribute to the anti-inflammatory effect of rapid weight loss.

  13. Effects of Rapid Weight Loss on Systemic and Adipose Tissue Inflammation and Metabolism in Obese Postmenopausal Women

    PubMed Central

    Iyengar, Neil M.; Walker, Jeanne M.; Milne, Ginger L.; Da Rosa, Joel Correa; Liang, Yupu; Giri, Dilip D.; Zhou, Xi Kathy; Pollak, Michael N.; Hudis, Clifford A.; Breslow, Jan L.; Holt, Peter R.; Dannenberg, Andrew J.

    2017-01-01

    Context: Obesity is associated with subclinical white adipose tissue inflammation, as defined by the presence of crown-like structures (CLSs) consisting of dead or dying adipocytes encircled by macrophages. In humans, bariatric surgery-induced weight loss leads to a decrease in CLSs, but the effects of rapid diet-induced weight loss on CLSs and metabolism are unclear. Objective: To determine the effects of rapid very-low-calorie diet-induced weight loss on CLS density, systemic biomarkers of inflammation, and metabolism in obese postmenopausal women. Design: Prospective cohort study. Setting: Rockefeller University Hospital, New York, NY. Participants: Ten obese, postmenopausal women with a mean age of 60.6 years (standard deviation, ±3.6 years). Main Outcome Measures: Effects on CLS density and gene expression in abdominal subcutaneous adipose tissue, cardiometabolic risk factors, white blood count, circulating metabolites, and oxidative stress (urinary isoprostane-M) were measured. Results: Obese subjects lost approximately 10% body weight over a mean of 46 days. CLS density increased in subcutaneous adipose tissue without an associated increase in proinflammatory gene expression. Weight loss was accompanied by decreased fasting blood levels of high-sensitivity C-reactive protein, glucose, lactate, and kynurenine, and increased circulating levels of free fatty acids, glycerol, β-hydroxybutyrate, and 25 hydroxyvitamin D. Levels of urinary isoprostane-M declined. Conclusion: Rapid weight loss stimulated lipolysis and an increase in CLS density in subcutaneous adipose tissue in association with changes in levels of circulating metabolites, and improved systemic biomarkers of inflammation and insulin resistance. The observed change in levels of metabolites (i.e., lactate, β-hydroxybutyrate, 25 hydroxyvitamin D) may contribute to the anti-inflammatory effect of rapid weight loss. PMID:29264516

  14. The role of demographic compensation theory in incidental take assessments for endangered species

    USGS Publications Warehouse

    McGowan, Conor P.; Ryan, Mark R.; Runge, Michael C.; Millspaugh, Joshua J.; Cochrane, Jean Fitts

    2011-01-01

    Many endangered species laws provide exceptions to legislated prohibitions through incidental take provisions as long as take is the result of unintended consequences of an otherwise legal activity. These allowances presumably invoke the theory of demographic compensation, commonly applied to harvested species, by allowing limited harm as long as the probability of the species' survival or recovery is not reduced appreciably. Demographic compensation requires some density-dependent limits on survival or reproduction in a species' annual cycle that can be alleviated through incidental take. Using a population model for piping plovers in the Great Plains, we found that when the population is in rapid decline or when there is no density dependence, the probability of quasi-extinction increased linearly with increasing take. However, when the population is near stability and subject to density-dependent survival, there was no relationship between quasi-extinction probability and take rates. We note however, that a brief examination of piping plover demography and annual cycles suggests little room for compensatory capacity. We argue that a population's capacity for demographic compensation of incidental take should be evaluated when considering incidental allowances because compensation is the only mechanism whereby a population can absorb the negative effects of take without incurring a reduction in the probability of survival in the wild. With many endangered species there is probably little known about density dependence and compensatory capacity. Under these circumstances, using multiple system models (with and without compensation) to predict the population's response to incidental take and implementing follow-up monitoring to assess species response may be valuable in increasing knowledge and improving future decision making.

  15. Ensemble Kalman filtering in presence of inequality constraints

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P. J.

    2009-04-01

    Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it looks hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is just equally distributed over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. In the full Kalman filter the formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
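    A hedged numerical sketch of the proposed prior shape, with made-up mean, spread, and grid (one-sided bound only): the Gaussian mass falling below the physical bound is collected into a point mass at the bound rather than redistributed over the allowed range, so the bound itself retains nonzero probability.

      # Truncated Gaussian with a delta distribution at the truncation point.
      import numpy as np
      from scipy.stats import norm

      mean, std = 0.1, 0.2        # hypothetical analysis mean and spread (e.g. sea-ice concentration)
      bound = 0.0                 # physical lower bound

      # Probability mass the plain Gaussian puts on the non-allowed region.
      mass_below = norm.cdf(bound, mean, std)

      # Representation: a delta of weight `mass_below` at the bound plus the
      # Gaussian density restricted to the allowed side.
      x = np.linspace(bound, 1.0, 500)
      dx = x[1] - x[0]
      density_allowed = norm.pdf(x, mean, std)
      print(f"P(state exactly at bound) = {mass_below:.3f}")

      # Mean of the truncated-Gaussian-plus-delta distribution (upper tail negligible).
      cont_mean = np.sum(x * density_allowed) * dx
      print("mean of the constrained pdf:", bound * mass_below + cont_mean)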

  16. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap.

    PubMed

    Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E

    2016-06-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in "Delta-V," a key crash severity measure.

  17. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap

    PubMed Central

    Zhou, Hanzhi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in “Delta-V,” a key crash severity measure. PMID:29226161
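
    As a rough illustration of the bootstrap machinery involved (a single-stage, heavily simplified weighted Polya-urn sketch, not the two-stage procedure developed in the article; the urn probabilities and all numbers below are assumptions): units are re-drawn to complete a synthetic population of size N, with heavily weighted units more likely to be replicated.

      # Simplified weighted finite-population Bayesian bootstrap (Polya-urn sketch).
      import numpy as np

      rng = np.random.default_rng(0)

      def wfpbb_population(y, w, N):
          """y: sampled values, w: case weights, N: target population size."""
          n = len(y)
          l = np.zeros(n)                      # times each unit has been re-selected so far
          draws = []
          for k in range(N - n):
              # Urn probabilities: start near (w_i - 1) and grow as a unit is re-selected.
              p = (w - 1.0) + l * (N - n) / n
              p = np.clip(p, 0.0, None)
              p = p / p.sum()
              i = rng.choice(n, p=p)
              l[i] += 1
              draws.append(y[i])
          return np.concatenate([y, np.array(draws)])   # synthetic population of size N

      # Toy use: 5 sampled units standing in for a population of 20 (made-up numbers).
      y = np.array([2.0, 5.0, 1.0, 7.0, 4.0])
      w = np.array([6.0, 2.0, 5.0, 3.0, 4.0])
      pop = wfpbb_population(y, w, N=20)
      print(len(pop), pop.mean())

    Repeating such draws gives multiple synthetic populations that can then be treated as simple random samples at the imputation stage, which is the role the weighted finite-population Bayesian bootstrap plays in the article.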

  18. Oregon Cascades Play Fairway Analysis: Faults and Heat Flow maps

    DOE Data Explorer

    Adam Brandt

    2015-11-15

    This submission includes a fault map of the Oregon Cascades and backarc, a probability map of heat flow, and a fault density probability layer. More extensive metadata can be found within each zip file.

  19. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a probability density function of the form cx^k(1-x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions is rather rich, which explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  20. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
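
    A worked check of the linearity claim (the prior parameters and counts below are illustrative only): with a beta prior on the jump rate and a binomial count of jumps, the posterior is again beta, so the MMSE estimate, i.e. the posterior mean, is an affine function of the observed count.

      # Beta prior on a rate x, binomial observation s ~ Bin(n, x):
      # posterior is Beta(a + s, b + n - s), so the MMSE estimate
      # E[x | s] = (a + s) / (a + b + n) is linear in the observed count s.
      from scipy import integrate
      from scipy.stats import beta as beta_dist, binom

      a, b = 2.0, 3.0   # Beta(a, b) has density proportional to x^(a-1)(1-x)^(b-1),
                        # i.e. the cx^k(1-x)^m form with k = a - 1 and m = b - 1.
      n, s = 10, 4      # number of trials and observed jumps (illustrative)

      closed_form = (a + s) / (a + b + n)

      # Numerical check: E[x | s] = ∫ x p(x) Bin(s | n, x) dx / ∫ p(x) Bin(s | n, x) dx
      num = integrate.quad(lambda x: x * beta_dist.pdf(x, a, b) * binom.pmf(s, n, x), 0, 1)[0]
      den = integrate.quad(lambda x: beta_dist.pdf(x, a, b) * binom.pmf(s, n, x), 0, 1)[0]
      print(closed_form, num / den)   # the two values agree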

  1. Information Density and Syntactic Repetition.

    PubMed

    Temperley, David; Gildea, Daniel

    2015-11-01

    In noun phrase (NP) coordinate constructions (e.g., NP and NP), there is a strong tendency for the syntactic structure of the second conjunct to match that of the first; the second conjunct in such constructions is therefore low in syntactic information. The theory of uniform information density predicts that low-information syntactic constructions will be counterbalanced by high information in other aspects of that part of the sentence, and high-information constructions will be counterbalanced by other low-information components. Three predictions follow: (a) lexical probabilities (measured by N-gram probabilities and head-dependent probabilities) will be lower in second conjuncts than first conjuncts; (b) lexical probabilities will be lower in matching second conjuncts (those whose syntactic expansions match the first conjunct) than nonmatching ones; and (c) syntactic repetition should be especially common for low-frequency NP expansions. Corpus analysis provides support for all three of these predictions. Copyright © 2015 Cognitive Science Society, Inc.
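
    As a toy illustration of the N-gram probabilities mentioned above (a generic bigram estimate on an invented sentence, not the authors' corpus pipeline), the conditional probability of a word given the preceding word can be turned into surprisal, the information carried by that word; lower conditional probability means higher information.

      # Toy bigram probabilities: P(w2 | w1) = count(w1 w2) / count(w1 .)
      from collections import Counter
      from math import log2

      corpus = "the dog and the cat saw the dog and the bird".split()
      bigrams = Counter(zip(corpus, corpus[1:]))
      unigrams = Counter(corpus[:-1])

      def cond_prob(w1, w2):
          return bigrams[(w1, w2)] / unigrams[w1]

      p = cond_prob("the", "dog")
      print(p, -log2(p))    # probability and surprisal (bits) of "dog" after "the"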

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kastner, S.O.; Bhatia, A.K.

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 Å, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t_ij, related to "taboo" probabilities of Markov chain theory. The t_ij are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.

  3. OASIS is Automated Statistical Inference for Segmentation, with applications to multiple sclerosis lesion segmentation in MRI.

    PubMed

    Sweeney, Elizabeth M; Shinohara, Russell T; Shiee, Navid; Mateen, Farrah J; Chudgar, Avni A; Cuzzocreo, Jennifer L; Calabresi, Peter A; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M

    2013-01-01

    Magnetic resonance imaging (MRI) can be used to detect lesions in the brains of multiple sclerosis (MS) patients and is essential for diagnosing the disease and monitoring its progression. In practice, lesion load is often quantified by either manual or semi-automated segmentation of MRI, which is time-consuming, costly, and associated with large inter- and intra-observer variability. We propose OASIS is Automated Statistical Inference for Segmentation (OASIS), an automated statistical method for segmenting MS lesions in MRI studies. We use logistic regression models incorporating multiple MRI modalities to estimate voxel-level probabilities of lesion presence. Intensity-normalized T1-weighted, T2-weighted, fluid-attenuated inversion recovery and proton density volumes from 131 MRI studies (98 MS subjects, 33 healthy subjects) with manual lesion segmentations were used to train and validate our model. Within this set, OASIS detected lesions with a partial area under the receiver operating characteristic curve for clinically relevant false positive rates of 1% and below of 0.59% (95% CI; [0.50%, 0.67%]) at the voxel level. An experienced MS neuroradiologist compared these segmentations to those produced by LesionTOADS, an image segmentation software that provides segmentation of both lesions and normal brain structures. For lesions, OASIS out-performed LesionTOADS in 74% (95% CI: [65%, 82%]) of cases for the 98 MS subjects. To further validate the method, we applied OASIS to 169 MRI studies acquired at a separate center. The neuroradiologist again compared the OASIS segmentations to those from LesionTOADS. For lesions, OASIS ranked higher than LesionTOADS in 77% (95% CI: [71%, 83%]) of cases. For a randomly selected subset of 50 of these studies, one additional radiologist and one neurologist also scored the images. Within this set, the neuroradiologist ranked OASIS higher than LesionTOADS in 76% (95% CI: [64%, 88%]) of cases, the neurologist 66% (95% CI: [52%, 78%]) and the radiologist 52% (95% CI: [38%, 66%]). OASIS obtains the estimated probability for each voxel to be part of a lesion by weighting each imaging modality with coefficient weights. These coefficients are explicit, obtained using standard model fitting techniques, and can be reused in other imaging studies. This fully automated method allows sensitive and specific detection of lesion presence and may be rapidly applied to large collections of images.
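
    A schematic of the voxel-level modelling step described above, assuming intensity-normalized modality values have already been assembled into one feature row per voxel; the synthetic arrays and the use of scikit-learn's LogisticRegression are assumptions for illustration, not the OASIS implementation.

      # Voxel-wise logistic regression over multimodal MRI intensities (schematic).
      # X: rows are voxels, columns stand in for normalized T1, T2, FLAIR and PD values;
      # y: manual lesion labels (1 = lesion, 0 = non-lesion).  Data here are synthetic.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n_voxels = 5000
      X = rng.normal(size=(n_voxels, 4))                # stand-in for [T1, T2, FLAIR, PD]
      y = (X @ np.array([0.5, 1.2, 2.0, 0.8]) + rng.normal(size=n_voxels) > 2.0).astype(int)

      model = LogisticRegression().fit(X, y)
      lesion_prob = model.predict_proba(X)[:, 1]        # voxel-level probability of lesion

      # The fitted coefficients are explicit and could be re-applied to new studies.
      print(model.coef_, model.intercept_)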

  4. Seismic catalog condensation with applications to multifractal analysis of South Californian seismicity

    NASA Astrophysics Data System (ADS)

    Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen

    2014-05-01

    Latest advances in the instrumentation field have increased station coverage and lowered event detection thresholds. This has resulted in a vast increase in the number of located events each year. The abundance of data comes as a double-edged sword: while it facilitates more robust statistics and provides better confidence intervals, it also paralyzes computations whose execution times grow exponentially with the number of data points. In this study, we present a novel method that assesses the relative importance of each data point and reduces the size of datasets while preserving their information content. For a given seismic catalog, the goal is to express the same spatial probability density distribution with fewer data points. To achieve this, we exploit the fact that seismic catalogs are not optimally encoded. This coding deficiency is the result of sequential data entry, where new events are added without taking previous ones into account. For instance, if there are several events with identical parameters occurring at the same location, these could be grouped together rather than occupying the same memory space as if they were distinct events. Following this reasoning, the proposed condensation methodology groups all events according to their overall variance. Starting from the group with the highest variance (worst location uncertainty), each event is sampled by a number of sample points, and these points are used to determine which better-located events can express these probable locations with a higher likelihood. Based on these likelihood comparisons, weights from poorly located events are successively transferred to better-located ones. As a result of the process, a large portion of the events (~30%) ends up with zero weight (thus being fully represented by the events whose weights increased), while the information content (i.e. the sum of all weights) remains preserved. The resulting condensed catalog not only provides a more optimal encoding but is also regularized with respect to the local information quality. By investigating the locations of mass enrichment and depletion at different scales, we observe that the areas of increased mass are in good agreement with reported surface fault traces. We also conduct multifractal spatial analysis on condensed catalogs and investigate different spatial scaling regimes made clearer by reducing the effect of location uncertainty.
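
    A heavily simplified, one-dimensional sketch of the weight-transfer step (the actual method works with full spatial locations and their uncertainties; the catalog below is invented): events are visited from worst to best located, each donates its weight to the better-located event that explains its sampled probable locations with the highest likelihood, and the total weight is preserved.

      # 1-D sketch of catalog condensation by weight transfer (illustrative only).
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(2)

      # Event locations and location standard deviations (made-up catalog).
      loc   = np.array([0.0, 0.1, 0.05, 3.0, 2.9])
      sigma = np.array([0.5, 0.05, 0.08, 0.6, 0.07])
      w     = np.ones(len(loc))                     # each event starts with unit weight

      order = np.argsort(-sigma)                    # worst-located events first
      for i in order:
          better = np.where(sigma < sigma[i])[0]    # candidate recipient events
          if better.size == 0:
              continue
          samples = rng.normal(loc[i], sigma[i], size=200)
          # Mean log-likelihood of this event's probable locations under each candidate.
          ll = [norm.logpdf(samples, loc[j], sigma[j]).mean() for j in better]
          j = better[int(np.argmax(ll))]
          w[j] += w[i]                              # transfer the weight ...
          w[i] = 0.0                                # ... the donor ends up with zero weight

      print(w, w.sum())                             # total weight (information content) preserved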

  5. Visual attention to food cues in obesity: an eye-tracking study.

    PubMed

    Doolan, Katy J; Breslin, Gavin; Hanna, Donncha; Murphy, Kate; Gallagher, Alison M

    2014-12-01

    Based on the theory of incentive sensitization, the aim of this study was to investigate differences in attentional processing of food-related visual cues between normal-weight and overweight/obese males and females. Twenty-six normal-weight (14M, 12F) and 26 overweight/obese (14M, 12F) adults completed a visual probe task and an eye-tracking paradigm. Reaction times and eye movements to food and control images were collected during both a fasted and a fed condition in a counterbalanced design. Participants had greater visual attention towards high-energy-density food images compared to low-energy-density food images regardless of hunger condition. This was most pronounced in overweight/obese males, who had significantly greater maintained attention towards high-energy-density food images when compared with their normal-weight counterparts; however, no between-weight-group differences were observed for female participants. High-energy-density food images appear to capture visual attention more readily than low-energy-density food images. Results also suggest the possibility of an altered visual food cue-associated reward system in overweight/obese males. Attentional processing of food cues may play a role in eating behaviors and thus should be taken into consideration as part of an integrated approach to curbing obesity. © 2014 The Obesity Society.

  6. A matrix-based approach to solving the inverse Frobenius-Perron problem using sequences of density functions of stochastically perturbed dynamical systems

    NASA Astrophysics Data System (ADS)

    Nie, Xiaokai; Coca, Daniel

    2018-01-01

    The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.

  7. A matrix-based approach to solving the inverse Frobenius-Perron problem using sequences of density functions of stochastically perturbed dynamical systems.

    PubMed

    Nie, Xiaokai; Coca, Daniel

    2018-01-01

    The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.

  8. Fast-food outlets and walkability in school neighbourhoods predict fatness in boys and height in girls: a Taiwanese population study.

    PubMed

    Chiang, Po-Huang; Wahlqvist, Mark L; Lee, Meei-Shyuan; Huang, Lin-Yuan; Chen, Hui-Hsin; Huang, Susana Tzy-Ying

    2011-09-01

    There is increasing evidence that the school food environment contributes to childhood obesity and health in various locations. We investigated the influence of fast-food stores and convenience food stores (FS and CS, respectively) on growth and body composition in a range of residential densities for North-east Asian food culture. Anthropometrics and birth weight of schoolchildren were obtained. Geocoded mapping of schools and food outlets was conducted. Multivariable linear regression models, adjusted for father's ethnicity and education, as well as for household income, pocket money, birth weight, physical activity, television watching, food quality and region, were used to predict body composition from school food environments. Elementary schools and school neighbourhoods in 359 townships/districts of Taiwan. A total of 2283 schoolchildren aged 6-13 years from the Elementary School Children's Nutrition and Health Survey in Taiwan conducted in 2001-2002. Remote and socially disadvantaged locations had the highest prevalence of lower weight, BMI, waist circumference and triceps skinfold thickness. Food store densities, FS and CS, were highest in urban Taiwan and lowest in remote Taiwan. In the fully adjusted models, FS densities predicted weight and BMI in boys; there was a similar association for waist circumference, except when adjusted for region. FS densities also predicted height for girls. Except for weight and BMI in boys, CS did not have effects evident with FS for either boys or girls. A high FS density, more than CS density, in Taiwan increased the risk of general (BMI) and abdominal (waist circumference) obesity in boys and stature in girls. These findings have long-term implications for chronic disease in adulthood.

  9. An alternative empirical likelihood method in missing response problems and causal inference.

    PubMed

    Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao

    2016-11-30

    Missing responses are a common problem in medical, social, and economic studies. When responses are missing at random, a complete case data analysis may result in biases. A popular debiasing method is the inverse probability weighting proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and the propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied to the estimation of the average treatment effect in observational causal inference. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.
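
    For reference, a small sketch of the standard augmented inverse probability weighting estimator that the article compares against (not the proposed empirical likelihood estimator); variable names are generic and the data are synthetic.

      # Augmented inverse probability weighting (AIPW) estimate of a mean with data
      # missing at random.  e_hat: estimated propensity of being observed; m_hat:
      # outcome regression prediction; d: observed indicator; y: outcome (0 where missing).
      import numpy as np

      def aipw_mean(y, d, e_hat, m_hat):
          # Doubly robust: consistent if either e_hat or m_hat is correctly specified.
          return np.mean(d * y / e_hat - (d - e_hat) / e_hat * m_hat)

      # Tiny synthetic example (numbers are made up).
      rng = np.random.default_rng(3)
      x = rng.normal(size=1000)
      e = 1 / (1 + np.exp(-x))              # true response propensity
      d = rng.binomial(1, e)
      y = 2 + x + rng.normal(size=1000)     # outcome, observed only where d == 1
      print(aipw_mean(np.where(d == 1, y, 0.0), d, e, 2 + x))   # close to the true mean of 2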

  10. Estimating inverse-probability weights for longitudinal data with dropout or truncation: The xtrccipw command.

    PubMed

    Daza, Eric J; Hudgens, Michael G; Herring, Amy H

    Individuals may drop out of a longitudinal study, rendering their outcomes unobserved but still well defined. However, they may also undergo truncation (for example, death), beyond which their outcomes are no longer meaningful. Kurland and Heagerty (2005, Biostatistics 6: 241-258) developed a method to conduct regression conditioning on nontruncation, that is, regression conditioning on continuation (RCC), for longitudinal outcomes that are monotonically missing at random (for example, because of dropout). This method first estimates the probability of dropout among continuing individuals to construct inverse-probability weights (IPWs), then fits generalized estimating equations (GEE) with these IPWs. In this article, we present the xtrccipw command, which can both estimate the IPWs required by RCC and then use these IPWs in a GEE estimator by calling the glm command from within xtrccipw. In the absence of truncation, the xtrccipw command can also be used to run a weighted GEE analysis. We demonstrate the xtrccipw command by analyzing an example dataset and the original Kurland and Heagerty (2005) data. We also use xtrccipw to illustrate some empirical properties of RCC through a simulation study.
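
    A generic sketch of the two-step idea behind RCC-type weights (the actual xtrccipw command is a Stata program, and nothing below reproduces its syntax or defaults; the column names and logistic model are assumptions): model the probability of continuing at each visit, then take cumulative products of the inverse probabilities within subject as weights for a subsequent weighted regression or GEE.

      # Schematic inverse-probability-of-dropout weights for a longitudinal dataset.
      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(4)
      n, visits = 200, 3
      df = pd.DataFrame({
          "id": np.repeat(np.arange(n), visits),
          "visit": np.tile(np.arange(visits), n),
          "x": rng.normal(size=n * visits),
      })
      # continued = 1 if the subject is still observed at the next visit (synthetic).
      df["continued"] = rng.binomial(1, 1 / (1 + np.exp(-(1.0 + 0.5 * df["x"]))))

      model = LogisticRegression().fit(df[["x", "visit"]], df["continued"])
      df["ipw"] = 1.0 / model.predict_proba(df[["x", "visit"]])[:, 1]
      df["ipw"] = df.groupby("id")["ipw"].cumprod()   # weight through each visit
      # df["ipw"] can then be supplied as weights to a GEE or weighted regression fit.
      print(df.head())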

  11. A probabilistic NF2 relational algebra for integrated information retrieval and database systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuhr, N.; Roelleke, T.

    The integration of information retrieval (IR) and database systems requires a data model which allows for modelling documents as entities, representing uncertainty and vagueness, and performing uncertain inference. For this purpose, we present a probabilistic data model based on relations in non-first-normal-form (NF2). Here, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. Thus, the set of weighted index terms of a document is represented as a probabilistic subrelation. In a similar way, imprecise attribute values are modelled as a set-valued attribute. We redefine the relational operators for this type of relations such that the result of each operator is again a probabilistic NF2 relation, where the weight of a tuple gives the probability that this tuple belongs to the result. By ordering the tuples according to decreasing probabilities, the model yields a ranking of answers as in most IR models. This effect can also be used for typical database queries involving imprecise attribute values as well as for combinations of database and IR queries.

  12. Estimating inverse-probability weights for longitudinal data with dropout or truncation: The xtrccipw command

    PubMed Central

    Daza, Eric J.; Hudgens, Michael G.; Herring, Amy H.

    2017-01-01

    Individuals may drop out of a longitudinal study, rendering their outcomes unobserved but still well defined. However, they may also undergo truncation (for example, death), beyond which their outcomes are no longer meaningful. Kurland and Heagerty (2005, Biostatistics 6: 241–258) developed a method to conduct regression conditioning on nontruncation, that is, regression conditioning on continuation (RCC), for longitudinal outcomes that are monotonically missing at random (for example, because of dropout). This method first estimates the probability of dropout among continuing individuals to construct inverse-probability weights (IPWs), then fits generalized estimating equations (GEE) with these IPWs. In this article, we present the xtrccipw command, which can both estimate the IPWs required by RCC and then use these IPWs in a GEE estimator by calling the glm command from within xtrccipw. In the absence of truncation, the xtrccipw command can also be used to run a weighted GEE analysis. We demonstrate the xtrccipw command by analyzing an example dataset and the original Kurland and Heagerty (2005) data. We also use xtrccipw to illustrate some empirical properties of RCC through a simulation study. PMID:29755297

  13. Exaggerated risk: prospect theory and probability weighting in risky choice.

    PubMed

    Kusev, Petko; van Schaik, Paul; Ayton, Peter; Dent, John; Chater, Nick

    2009-11-01

    In 5 experiments, we studied precautionary decisions in which participants decided whether or not to buy insurance with specified cost against an undesirable event with specified probability and cost. We compared the risks taken for precautionary decisions with those taken for equivalent monetary gambles. Fitting these data to Tversky and Kahneman's (1992) prospect theory, we found that the weighting function required to model precautionary decisions differed from that required for monetary gambles. This result indicates a failure of the descriptive invariance axiom of expected utility theory. For precautionary decisions, people overweighted small, medium-sized, and moderately large probabilities-they exaggerated risks. This effect is not anticipated by prospect theory or experience-based decision research (Hertwig, Barron, Weber, & Erev, 2004). We found evidence that exaggerated risk is caused by the accessibility of events in memory: The weighting function varies as a function of the accessibility of events. This suggests that people's experiences of events leak into decisions even when risk information is explicitly provided. Our findings highlight a need to investigate how variation in decision content produces variation in preferences for risk.
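
    For concreteness, the one-parameter probability weighting function of Tversky and Kahneman (1992) used in such fits is w(p) = p^gamma / (p^gamma + (1-p)^gamma)^(1/gamma); with gamma below one, small probabilities are overweighted and large ones underweighted. The short sketch below simply evaluates it (the gamma value is the one commonly reported for gains).

      # Tversky & Kahneman (1992) probability weighting function.
      import numpy as np

      def tk_weight(p, gamma):
          return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

      p = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
      print(tk_weight(p, gamma=0.61))   # gamma = 0.61 is the value T&K estimated for gains
      # Small probabilities come out larger than p (overweighting), large ones smaller.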

  14. Alcohol outlet density and violence: A geographically weighted regression approach.

    PubMed

    Cameron, Michael P; Cochrane, William; Gordon, Craig; Livingston, Michael

    2016-05-01

    We investigate the relationship between outlet density (of different types) and violence (as measured by police activity) across the North Island of New Zealand, specifically looking at whether the relationships vary spatially. We use New Zealand data at the census area unit (approximately suburb) level, on police-attended violent incidents and outlet density (by type of outlet), controlling for population density and local social deprivation. We employed geographically weighted regression to obtain both global average and locally specific estimates of the relationships between alcohol outlet density and violence. We find that bar and night club density, and licensed club density (e.g. sports clubs), have statistically significant and positive relationships with violence, with an additional bar or night club associated with nearly 5.3 additional violent events per year, and an additional licensed club associated with 0.8 additional violent events per year. These relationships do not show significant spatial variation. In contrast, the effects of off-licence density and restaurant/café density do exhibit significant spatial variation. However, the non-varying effects of bar and night club density are larger than the locally specific effects of other outlet types. The relationships between outlet density and violence vary significantly across space for off-licences and restaurants/cafés. These results suggest that in order to minimise alcohol-related harms, such as violence, locally specific policy interventions are likely to be necessary. [Cameron MP, Cochrane W, Gordon C, Livingston M. Alcohol outlet density and violence: A geographically weighted regression approach. Drug Alcohol Rev 2016;35:280-288]. © 2015 Australasian Professional Society on Alcohol and other Drugs.
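
    A minimal sketch of geographically weighted regression in general form (a Gaussian distance kernel and one weighted least-squares fit per location); it is not the authors' model specification, and the bandwidth, variables, and data below are invented.

      # Minimal geographically weighted regression: a separate weighted least-squares
      # fit at each location, with weights decaying with distance (Gaussian kernel).
      import numpy as np

      def gwr_coefficients(coords, X, y, bandwidth):
          X1 = np.column_stack([np.ones(len(y)), X])        # add intercept
          betas = []
          for c in coords:
              d2 = np.sum((coords - c) ** 2, axis=1)
              w = np.exp(-d2 / (2 * bandwidth**2))           # spatial kernel weights
              W = np.diag(w)
              beta = np.linalg.solve(X1.T @ W @ X1, X1.T @ W @ y)
              betas.append(beta)
          return np.array(betas)                             # one coefficient vector per location

      # Toy data: two outlet-density covariates predicting violent events (made up).
      rng = np.random.default_rng(5)
      coords = rng.uniform(0, 10, size=(50, 2))
      X = rng.uniform(0, 5, size=(50, 2))
      y = 1.0 + 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=50)
      local_betas = gwr_coefficients(coords, X, y, bandwidth=2.0)
      print(local_betas[:3])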

  15. The risks and returns of stock investment in a financial market

    NASA Astrophysics Data System (ADS)

    Li, Jiang-Cheng; Mei, Dong-Cheng

    2013-03-01

    The risks and returns of stock investment are discussed via numerically simulating the mean escape time and the probability density function of stock price returns in the modified Heston model with time delay. Through analyzing the effects of delay time and initial position on the risks and returns of stock investment, the results indicate that: (i) there is an optimal delay time matching minimal risks of stock investment, maximal average stock price returns and strongest stability of stock price returns for strong elasticity of demand of stocks (EDS), but the opposite results hold for weak EDS; (ii) increasing the initial position reduces the risks of stock investment, strengthens the average stock price returns and enhances the stability of stock price returns. Finally, the probability density function of stock price returns, the probability density function of volatility and the correlation function of stock price returns are compared with other studies in the literature, and good agreement is found between them.

  16. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction J_ij and random magnetic field h_i) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and the system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.

  17. Estimation of proportions in mixed pixels through their region characterization

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    A region of mixed pixels can be characterized through the probability density function of proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estmates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
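
    The transformation mentioned above, which simultaneously diagonalizes the two class covariance matrices, can be obtained from a generalized symmetric eigendecomposition; a small sketch follows under the assumption that both covariance estimates are symmetric positive definite (matrix names are illustrative).

      # Simultaneous diagonalization of two class covariance matrices via the
      # generalized symmetric eigenproblem: find W with W.T @ S2 @ W = I and
      # W.T @ S1 @ W diagonal.
      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(6)
      A = rng.normal(size=(4, 4)); S1 = A @ A.T + 4 * np.eye(4)   # covariance, class 1
      B = rng.normal(size=(4, 4)); S2 = B @ B.T + 4 * np.eye(4)   # covariance, class 2

      vals, W = eigh(S1, S2)            # generalized eigenvectors, normalized w.r.t. S2
      print(np.allclose(W.T @ S2 @ W, np.eye(4)))          # expected True
      print(np.allclose(W.T @ S1 @ W, np.diag(vals)))      # expected True (diagonal)
      # Transformed spectral vectors x' = W.T @ x then have diagonal covariances in
      # both classes, which simplifies the mixed-pixel proportion computations.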

  18. Characterization of non-Gaussian atmospheric turbulence for prediction of aircraft response statistics

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1977-01-01

    Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low frequency component plus a variance modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities also were shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationary assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scale of the fluctuations in the variance, the fluctuations in the turbulence itself, and on the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationary assumption were developed.

  19. A comparative study of nonparametric methods for pattern recognition

    NASA Technical Reports Server (NTRS)

    Hahn, S. F.; Nelson, G. D.

    1972-01-01

    The applied research discussed in this report determines and compares the correct classification percentage of the nonparametric sign test, Wilcoxon's signed rank test, and K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier which assumes the data to be Gaussian even though it may not be. The K-class classifier has the advantage over the Bayes in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment was always unimodal.

  20. Tackling missing radiographic progression data: multiple imputation technique compared with inverse probability weights and complete case analysis.

    PubMed

    Descalzo, Miguel Á; Garcia, Virginia Villaverde; González-Alvaro, Isidoro; Carbonell, Jordi; Balsa, Alejandro; Sanmartí, Raimon; Lisbona, Pilar; Hernandez-Barrera, Valentín; Jiménez-Garcia, Rodrigo; Carmona, Loreto

    2013-02-01

    To describe the results of different statistical ways of addressing radiographic outcome affected by missing data--multiple imputation technique, inverse probability weights and complete case analysis--using data from an observational study. A random sample of 96 RA patients was selected for a follow-up study in which radiographs of hands and feet were scored. Radiographic progression was tested by comparing the change in the total Sharp-van der Heijde radiographic score (TSS) and the joint erosion score (JES) from baseline to the end of the second year of follow-up. MI technique, inverse probability weights in weighted estimating equation (WEE) and CC analysis were used to fit a negative binomial regression. Major predictors of radiographic progression were JES and joint space narrowing (JSN) at baseline, together with baseline disease activity measured by DAS28 for TSS and MTX use for JES. Results from CC analysis show larger coefficients and s.e.s compared with MI and weighted techniques. The results from the WEE model were quite in line with those of MI. If it seems plausible that CC or MI analysis may be valid, then MI should be preferred because of its greater efficiency. CC analysis resulted in inefficient estimates or, translated into non-statistical terminology, could guide us into inaccurate results and unwise conclusions. The methods discussed here will contribute to the use of alternative approaches for tackling missing data in observational studies.
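
    As a reminder of how multiply imputed analyses such as these are combined, Rubin's rules pool the per-imputation estimates, with total variance equal to the within-imputation variance plus (1 + 1/m) times the between-imputation variance; the sketch below is generic, not the authors' code, and the numbers are invented.

      # Rubin's rules for pooling m completed-data estimates and their variances.
      import numpy as np

      def pool_rubin(estimates, variances):
          estimates, variances = np.asarray(estimates), np.asarray(variances)
          m = len(estimates)
          q_bar = estimates.mean()                  # pooled point estimate
          w_bar = variances.mean()                  # within-imputation variance
          b = estimates.var(ddof=1)                 # between-imputation variance
          t = w_bar + (1 + 1 / m) * b               # total variance
          return q_bar, t

      # Illustrative coefficients from m = 5 imputed datasets (made-up numbers).
      print(pool_rubin([0.42, 0.45, 0.40, 0.47, 0.44], [0.010, 0.012, 0.011, 0.009, 0.013]))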
